WorldWideScience

Sample records for previous models based

  1. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Given the growth of e-commerce, websites play an essential role in business success, and many authors have proposed website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper a quantitative method is used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of models, and the new model derives its validity from 93 previous models and the systematic quantitative approach.

  2. Two new prediction rules for spontaneous pregnancy leading to live birth among subfertile couples, based on the synthesis of three previous models.

    NARCIS (Netherlands)

    C.C. Hunault; J.D.F. Habbema (Dik); M.J.C. Eijkemans (René); J.A. Collins (John); J.L.H. Evers (Johannes); E.R. te Velde (Egbert)

    2004-01-01

    BACKGROUND: Several models have been published for the prediction of spontaneous pregnancy among subfertile patients. The aim of this study was to broaden the empirical basis for these predictions by making a synthesis of three previously published models. METHODS:

  3. MCNP HPGe detector benchmark with previously validated Cyltran model.

    Science.gov (United States)

    Hau, I D; Russ, W R; Bronson, F

    2009-05-01

    An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file, and the detection efficiency was calculated following the same methodology used in previous experimental measurements and simulation of a 280 cm3 HPGe detector. Below 1000 keV the MCNP results agreed with the Cyltran results within 0.5%, while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.

  4. Attribute and topology based change detection in a constellation of previously detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, Reginald N.

    2016-01-19

    A system that applies attribute- and topology-based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
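
    As a rough illustration of the matching step described above, a minimal sketch follows that flags new detections whose attributes have no close counterpart in the constellation database. The schema, distance weights, and tolerance are hypothetical, not the patented system's:

        # Minimal attribute-based change detection against a "constellation"
        # of previously detected objects (hypothetical schema and weights).
        import math
        from dataclasses import dataclass

        @dataclass
        class Detection:
            x: float
            y: float
            size: float

        def attribute_distance(a: Detection, b: Detection) -> float:
            # Weighted distance over location and shape attributes.
            return math.hypot(a.x - b.x, a.y - b.y) + 0.5 * abs(a.size - b.size)

        def flag_changes(new_scan, constellation, tol=2.0):
            """Report new detections with no sufficiently similar prior object."""
            changes = []
            for obj in new_scan:
                nearest = min(attribute_distance(obj, c) for c in constellation)
                if nearest > tol:
                    changes.append(obj)
            return changes

        prior = [Detection(0.0, 0.0, 1.0), Detection(5.0, 5.0, 2.0)]
        latest = [Detection(0.1, 0.0, 1.0), Detection(9.0, 9.0, 1.0)]
        print(flag_changes(latest, prior))   # -> only the detection near (9, 9)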

  5. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
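
    To make the distinction concrete, the sketch below computes a model-free RPE against cached action values and a model-based RPE against values derived from a learned transition model, plus the state prediction error that updates the model itself. The toy task, learning rate, and variable names are illustrative assumptions, not the study's paradigm:

        # Contrast of model-free and model-based reward prediction errors (RPEs)
        # in a toy discrete task; values here are illustrative only.
        import numpy as np

        n_states, n_actions, alpha = 3, 2, 0.1
        Q_mf = np.zeros((n_states, n_actions))                    # cached action values
        T = np.ones((n_states, n_actions, n_states)) / n_states   # learned transitions
        R = np.zeros(n_states)                                    # learned state rewards

        def step_update(s, a, s_next, r):
            # Model-free RPE: outcome versus the cached value (one-step backup).
            rpe_mf = r + R[s_next] - Q_mf[s, a]
            Q_mf[s, a] += alpha * rpe_mf
            # Model-based value: expectation under the learned transition model.
            q_mb = T[s, a] @ R
            rpe_mb = r + R[s_next] - q_mb
            # State prediction error drives learning of the model itself.
            spe = 1.0 - T[s, a, s_next]
            T[s, a] *= (1 - alpha)
            T[s, a, s_next] += alpha
            R[s_next] += alpha * (r - R[s_next])
            return rpe_mf, rpe_mb, spe

        print(step_update(0, 1, 2, 1.0))   # RPEs diverge as T and Q_mf decouple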

  6. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history memorized in the Binary Space Partitioning fitness tree can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to local optima. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
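
    A minimal sketch of the dynamic multispecies idea follows: the swarm is periodically re-clustered into subspecies with K-means, and each particle is pulled toward its subspecies' best. The BSP-tree non-revisit strategy is omitted, and the objective and coefficients are stand-ins, not the paper's settings:

        # Multispecies PSO sketch: K-means subspecies, each with a local best.
        import numpy as np
        from sklearn.cluster import KMeans

        def sphere(x):                               # stand-in benchmark objective
            return float(np.sum(x ** 2))

        rng = np.random.default_rng(0)
        n, dim, k = 30, 10, 3
        X = rng.uniform(-5, 5, (n, dim))
        V = np.zeros((n, dim))
        P, pbest = X.copy(), np.array([sphere(x) for x in X])

        for it in range(200):
            labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(X)
            for s in range(k):
                idx = np.where(labels == s)[0]
                lbest = P[idx[np.argmin(pbest[idx])]]      # best of this subspecies
                r1, r2 = rng.random((2, len(idx), dim))
                V[idx] = (0.7 * V[idx] + 1.5 * r1 * (P[idx] - X[idx])
                          + 1.5 * r2 * (lbest - X[idx]))
                X[idx] += V[idx]
            f = np.array([sphere(x) for x in X])
            better = f < pbest
            P[better], pbest[better] = X[better], f[better]
        print(pbest.min())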

  7. Vaccinia-based influenza vaccine overcomes previously induced immunodominance hierarchy for heterosubtypic protection.

    Science.gov (United States)

    Kwon, Ji-Sun; Yoon, Jungsoon; Kim, Yeon-Jung; Kang, Kyuho; Woo, Sunje; Jung, Dea-Im; Song, Man Ki; Kim, Eun-Ha; Kwon, Hyeok-Il; Choi, Young Ki; Kim, Jihye; Lee, Jeewon; Yoon, Yeup; Shin, Eui-Cheol; Youn, Jin-Won

    2014-08-01

    Growing concerns about unpredictable influenza pandemics require a broadly protective vaccine against diverse influenza strains. One promising approach is a T cell-based vaccine, but the narrow breadth of T-cell immunity, due to the immunodominance hierarchy established by previous influenza infection, and efficacy only under mild challenge conditions are important hurdles to overcome. To model T-cell immunodominance hierarchy in humans in an experimental setting, influenza-primed C57BL/6 mice were chosen and boosted with a mixture of vaccinia recombinants, individually expressing consensus sequences from avian, swine, and human isolates of influenza internal proteins. As determined by IFN-γ ELISPOT and polyfunctional cytokine secretion, the vaccinia recombinants of influenza expanded the breadth of T-cell responses to include subdominant and even minor epitopes. Vaccine groups were successfully protected against 100 LD50 challenges with PR/8/34 and highly pathogenic avian influenza H5N1, which contained the identical dominant NP366 epitope. Interestingly, in challenge with pandemic A/Cal/04/2009 containing mutations in the dominant epitope, only the group vaccinated with rVV-NP + PA showed improved protection. Taken together, a vaccinia-based influenza vaccine expressing conserved internal proteins improved the breadth of influenza-specific T-cell immunity and provided heterosubtypic protection against immunologically close as well as distant influenza strains. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin

    2015-01-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280
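
    The odds ratios above are exponentiated logistic-regression coefficients. The sketch below reproduces that computation on synthetic swab data; the variable names and effect sizes are assumptions chosen only to mirror the reported ORs:

        # Odds ratios for pathogen isolation vs. proximity factors (synthetic data).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "near_water": rng.integers(0, 2, 500),
            "near_pasture": rng.integers(0, 2, 500),
        })
        logit_p = -2.0 + np.log(3.0) * df.near_water + np.log(2.9) * df.near_pasture
        df["positive"] = (rng.random(500) < 1 / (1 + np.exp(-logit_p))).astype(int)

        fit = smf.logit("positive ~ near_water + near_pasture", data=df).fit(disp=0)
        print(np.exp(fit.params))   # exponentiated coefficients = odds ratios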

  9. Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning.

    Science.gov (United States)

    Doll, Bradley B; Bath, Kevin G; Daw, Nathaniel D; Frank, Michael J

    2016-01-27

    Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by "model-free" learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by "model-based" learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes

  10. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnológico para la Industria Química, CONICET - UNL, Güemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingeniería y Ciencias Hídricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnológico para la Industria Química, CONICET - UNL, Güemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingeniería y Ciencias Hídricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

    Highlights: • Indoor pollution control via photocatalytic reactors. • Scaling-up methodology based on previously determined mechanistic kinetics. • Radiation interchange model between catalytic walls using configuration factors. • Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO2 as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.

  11. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields.

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin; Strawn, Laura K

    2016-02-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  12. Population PK modelling and simulation based on fluoxetine and norfluoxetine concentrations in milk: a milk concentration-based prediction model.

    Science.gov (United States)

    Tanoshima, Reo; Bournissen, Facundo Garcia; Tanigawara, Yusuke; Kristensen, Judith H; Taddio, Anna; Ilett, Kenneth F; Begg, Evan J; Wallach, Izhar; Ito, Shinya

    2014-10-01

    Population pharmacokinetic (pop PK) modelling can be used for PK assessment of drugs in breast milk. However, complex mechanistic modelling of a parent and an active metabolite using both blood and milk samples is challenging. We aimed to develop a simple predictive pop PK model for milk concentration-time profiles of a parent and a metabolite, using data on fluoxetine (FX) and its active metabolite, norfluoxetine (NFX), in milk. Using a previously published data set of drug concentrations in milk from 25 women treated with FX, a pop PK model predictive of milk concentration-time profiles of FX and NFX was developed. Simulation was performed with the model to generate FX and NFX concentration-time profiles in milk of 1000 mothers. This milk concentration-based pop PK model was compared with the previously validated plasma/milk concentration-based pop PK model of FX. Milk FX and NFX concentration-time profiles were described reasonably well by a one-compartment model with an FX-to-NFX conversion coefficient. Median values of the simulated relative infant dose on a weight basis (sRID: weight-adjusted daily doses of FX and NFX through breastmilk to the infant, expressed as a fraction of therapeutic FX daily dose per body weight) were 0.028 for FX and 0.029 for NFX. The FX sRID estimates were consistent with those of the plasma/milk-based pop PK model. A predictive pop PK model based on only milk concentrations can be developed for simultaneous estimation of milk concentration-time profiles of a parent (FX) and an active metabolite (NFX). © 2014 The British Pharmacological Society.
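
    As a minimal sketch of such a parent-metabolite milk model, the code below integrates a single oral dose through one-compartment FX and NFX pools linked by a conversion coefficient. All rate constants, the conversion fraction, and the dose are illustrative, not the fitted population values:

        # One-compartment parent (FX) and metabolite (NFX) milk model sketch.
        import numpy as np
        from scipy.integrate import odeint

        ka, ke_fx, ke_nfx, f_conv = 0.5, 0.07, 0.05, 0.1   # 1/h; conversion fraction

        def rhs(y, t):
            gut, fx, nfx = y
            dgut = -ka * gut
            dfx = ka * gut - (ke_fx + f_conv) * fx         # parent in milk
            dnfx = f_conv * fx - ke_nfx * nfx              # metabolite in milk
            return [dgut, dfx, dnfx]

        t = np.linspace(0, 240, 481)                       # ten days, half-hour grid
        profile = odeint(rhs, [20.0, 0.0, 0.0], t)         # 20 mg dose, arbitrary units
        print(profile[-1])                                 # residual amounts at day 10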

  13. A continuum mechanics-based musculo-mechanical model for esophageal transport

    Science.gov (United States)

    Kou, Wenjun; Griffith, Boyce E.; Pandolfino, John E.; Kahrilas, Peter J.; Patankar, Neelesh A.

    2017-11-01

    In this work, we extend our previous esophageal transport model, which used an immersed boundary (IB) method with a discrete fiber-based structural model, to one using a continuum mechanics-based model approximated with finite elements (IB-FE). To deal with the leakage of flow when the Lagrangian mesh becomes coarser than the fluid mesh, we employ adaptive interaction quadrature points for the Lagrangian-Eulerian interaction equations, based on previous work (Griffith and Luo [1]). In particular, we introduce a new anisotropic adaptive interaction quadrature rule. The new rule permits us to vary the interaction quadrature points not only at each time-step and element but also at different orientations per element. This helps to avoid the leakage issue without sacrificing the computational efficiency and accuracy in dealing with the interaction equations. For the material model, we extend our previous fiber-based model to a continuum-based model. We present formulations for general fiber-reinforced material models in the IB-FE framework. The new material model can handle non-linear elasticity and fiber-matrix interactions, and thus permits us to consider more realistic material behavior of biological tissues. To validate our method, we first study a case in which a three-dimensional short tube is dilated. Results on the pressure-displacement relationship and the stress distribution match very well with those obtained from the implicit FE method. We remark that in our IB-FE case, the three-dimensional tube undergoes a very large deformation and the Lagrangian mesh size becomes about 6 times the Eulerian mesh size in the circumferential orientation. To validate the performance of the method in handling fiber-matrix material models, we perform a second study on dilating a long fiber-reinforced tube. Errors are small when we compare numerical solutions with analytical solutions. The technique is then applied to the problem of esophageal transport. We use two

  14. Interactive Coherence-Based Façade Modeling

    KAUST Repository

    Musialski, Przemyslaw

    2012-05-01

    We propose a novel interactive framework for modeling building facades from images. Our method is based on the notion of coherence-based editing, which allows exploiting partial symmetries across the facade at any level of detail. The proposed workflow mixes manual interaction with automatic splitting and grouping operations based on unsupervised cluster analysis. In contrast to previous work, our approach leads to detailed 3D geometric models with up to several thousand regions per facade. We compare our modeling scheme to others and evaluate our approach in a user study with an experienced user and several novice users.

  15. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to the one of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  16. Validation of SWAT+ at field level and comparison with previous SWAT models in simulating hydrologic quantity

    Science.gov (United States)

    GAO, J.; White, M. J.; Bieger, K.; Yen, H.; Arnold, J. G.

    2017-12-01

    Over the past 20 years, the Soil and Water Assessment Tool (SWAT) has been adopted by many researchers to assess water quantity and quality in watersheds around the world. As the demand increases in facilitating model support, maintenance, and future development, the SWAT source code and data have undergone major modifications over the past few years. To make the model more flexible in terms of interactions of spatial units and processes occurring in watersheds, a completely revised version of SWAT (SWAT+) was developed to improve SWAT's ability in water resource modelling and management. There are only a few applications of SWAT+ in large watersheds, however, and no study has validated the new model at field level or assessed its performance. To test the basic hydrologic function of SWAT+, it was implemented in five field cases across five states in the U.S., and the SWAT+ results were compared with those from previous SWAT models at the same fields. Additionally, an automatic calibration tool was used to test which model is easier to calibrate well within a limited number of parameter adjustments. The goal of the study was to evaluate the performance of SWAT+ in simulating stream flow at field level at different geographical locations. The results demonstrate that SWAT+ performed similarly to the previous SWAT model, but the flexibility offered by SWAT+ via the connection of different spatial objects can result in a spatially more accurate simulation of hydrological processes, especially for watersheds with artificial facilities. Autocalibration shows that SWAT+ more easily obtains a satisfactory result compared with the previous SWAT. Although many capabilities have already been enhanced in SWAT+, inaccuracies remain in the simulations; these will be reduced as scientific knowledge of hydrologic processes in specific watersheds advances. Currently, SWAT+ is a prerelease, and any errors are being addressed.

  17. Individual-based modeling of fish: Linking to physical models and water quality.

    Energy Technology Data Exchange (ETDEWEB)

    Rose, K.A.

    1997-08-01

    The individual-based modeling approach for simulating fish population and community dynamics is gaining popularity. Individual-based modeling has been used in many other fields, such as forest succession and astronomy. The popularity of the individual-based approach is partly a result of the lack of success of the more aggregate modeling approaches traditionally used for simulating fish population and community dynamics. Also, recent recognition that it is often the atypical individual that survives has fostered interest in the individual-based approach. Two general types of individual-based models are distribution and configuration. Distribution models follow the probability distributions of individual characteristics, such as length and age. Configuration models explicitly simulate each individual; the sum over individuals being the population. DeAngelis et al. (1992) showed that, when distribution and configuration models were formulated from the same common pool of information, both approaches generated similar predictions. The distribution approach was more compact and general, while the configuration approach was more flexible. Simple biological changes, such as making growth rate dependent on previous days' growth rates, were easy to implement in the configuration version but prevented simple analytical solution of the distribution version.
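
    The configuration type is easy to sketch because each simulated fish carries its own state; a rule such as growth depending on the previous day's growth is a single line, as below. Parameters are illustrative:

        # Configuration-style individual-based growth model: each fish's daily
        # growth depends on its own previous day's growth (autocorrelation rho).
        import numpy as np

        rng = np.random.default_rng(2)
        n_fish, n_days, rho = 1000, 100, 0.6
        length = np.full(n_fish, 10.0)        # mm
        growth = np.full(n_fish, 0.2)         # previous day's increment, mm/day

        for day in range(n_days):
            shock = rng.normal(0.2, 0.05, n_fish)
            growth = rho * growth + (1 - rho) * shock   # depends on previous growth
            length += np.clip(growth, 0.0, None)
        print(length.mean(), length.std())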

  18. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  19. Thermodynamics-based models of transcriptional regulation with gene sequence.

    Science.gov (United States)

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous time, differential equation description of transcriptional dynamics. The sequence features of the promoter are used to derive the binding affinity, which is computed from statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model reveals more biological sense.
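
    The statistical-thermodynamics step can be sketched as a Boltzmann-weighted occupancy of a binding site scored from the promoter sequence. The position weight matrix and concentration below are toy values, not the paper's fitted parameters:

        # Fractional occupancy of a transcription-factor site from sequence.
        import numpy as np

        def site_energy(seq, pwm, background=0.25):
            idx = {"A": 0, "C": 1, "G": 2, "T": 3}
            return sum(np.log(pwm[i][idx[b]] / background) for i, b in enumerate(seq))

        def occupancy(seq, pwm, tf_conc):
            # Boltzmann weight of the bound state against the unbound state.
            w = tf_conc * np.exp(site_energy(seq, pwm))
            return w / (1.0 + w)

        pwm = [[0.7, 0.1, 0.1, 0.1],   # toy 3-bp position weight matrix
               [0.1, 0.7, 0.1, 0.1],
               [0.1, 0.1, 0.1, 0.7]]
        print(occupancy("ACT", pwm, tf_conc=0.05))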

  20. Drawing-Based Procedural Modeling of Chinese Architectures.

    Science.gov (United States)

    Fei Hou; Yue Qi; Hong Qin

    2012-01-01

    This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawings. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from elevation drawings. First, unlike previous symbol-based floor plan recognition, and based on the novel concept of repetitive pattern trees, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). Third, shape grammar rules can be derived to construct a 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawings, with a unique application to Chinese architectures.

  1. Identification of walking human model using agent-based modelling

    Science.gov (United States)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with mean μ = 2.85 Hz and standard deviation σ = 0.34 Hz can describe the natural frequency of the human SDOF model. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces, and different mechanisms of human-structure and human-environment interaction at the same time.
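
    Given the reported distributions, SDOF parameters for a simulated crowd can be sampled directly, as sketched below; the 75 kg modal mass is an assumption (the abstract does not state one) used to convert frequency and damping ratio into stiffness and damping coefficients:

        # Sample walking-human SDOF parameters from the reported distributions.
        import numpy as np

        rng = np.random.default_rng(3)
        n, m = 100, 75.0                        # pedestrians; assumed modal mass, kg
        f = rng.normal(2.85, 0.34, n)           # natural frequency, Hz
        zeta = rng.normal(0.295, 0.047, n)      # damping ratio
        omega = 2 * np.pi * f
        k = m * omega ** 2                      # stiffness, N/m
        c = 2 * zeta * m * omega                # damping, N*s/m
        print(k.mean(), c.mean())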

  2. Cognitive components underpinning the development of model-based learning.

    Science.gov (United States)

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  3. Efficacy of peg-interferon based treatment in patients with hepatitis C refractory to previous conventional interferon-based treatment

    International Nuclear Information System (INIS)

    Shaikh, S.; Devrajani, B.R.; Kalhoro, M.

    2012-01-01

    Objective: To determine the efficacy of peg-interferon-based therapy in patients refractory to previous conventional interferon-based treatment and the factors predicting sustained viral response (SVR). Study Design: Analytical study. Place and Duration of Study: Medical Unit IV, Liaquat University Hospital, Jamshoro, from July 2009 to June 2011. Methodology: This study included consecutive patients with hepatitis C who were previously treated with conventional interferon-based treatment for 6 months but were either non-responders, relapsed or had virologic breakthrough, and who had stage ≥ 2 fibrosis on liver biopsy. All eligible patients received peg-interferon at a dosage of 180 μg weekly with ribavirin thrice a day for 6 months. Sustained viral response (SVR) was defined as absence of HCV RNA at twenty-four weeks after treatment. All data were processed on SPSS version 16. Results: Of 450 patients enrolled in the study, 192 were excluded on the basis of minimal fibrosis (stage 0 and 1). Two hundred and fifty-eight patients fulfilled the inclusion criteria and 247 completed the course of peg-interferon treatment. One hundred and sixty-one (62.4%) were males and 97 (37.6%) were females. The mean age was 39.9 ± 6.1 years, haemoglobin was 11.49 ± 2.45 g/dl, platelet count was 127.2 ± 50.6 × 10³/mm³, and ALT was 99 ± 65 IU/L. SVR was achieved in 84 patients (32.6%). A strong association was found between SVR and the pattern of response (p = 0.001), the degree of fibrosis, and early viral response (p = 0.001). Conclusion: Peg-interferon-based treatment is an effective and safe treatment option for patients refractory to conventional interferon-based treatment. (author)

  4. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causality relationships and feedback loops from different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  5. Identifying Multiple Levels of Discussion-Based Teaching Strategies for Constructing Scientific Models

    Science.gov (United States)

    Williams, Grant; Clement, John

    2015-01-01

    This study sought to identify specific types of discussion-based strategies that two successful high school physics teachers using a model-based approach utilized in attempting to foster students' construction of explanatory models for scientific concepts. We found evidence that, in addition to previously documented dialogical strategies that…

  6. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach
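
    The one-dimensional case can be sketched as a Viterbi-style dynamic program: with a causal K-tap filter, the state is the last K-1 binary pixels, so the filtered squared error is locally computable. The 3-tap "eye" filter below is an illustrative stand-in for the visual model, not the paper's:

        # 1-D least-squares halftoning by dynamic programming over bit states.
        import numpy as np
        from itertools import product

        h = np.array([0.25, 0.5, 0.25])                 # toy causal eye filter
        K = len(h)

        def halftone_row(g):
            target = np.convolve(g, h)[: len(g)]        # perceived gray level
            states = list(product((0, 1), repeat=K - 1))
            cost = {s: 0.0 for s in states}
            back = []
            for t in range(len(g)):
                new_cost, choice = {}, {}
                for s in states:
                    for b in (0, 1):
                        window = np.array(s + (b,), float)     # b[t-K+1] .. b[t]
                        err = (window @ h[::-1] - target[t]) ** 2
                        s2, c = s[1:] + (b,), cost[s] + err
                        if s2 not in new_cost or c < new_cost[s2]:
                            new_cost[s2], choice[s2] = c, (s, b)
                cost = new_cost
                back.append(choice)
            s = min(cost, key=cost.get)                 # backtrack the best path
            bits = []
            for choice in reversed(back):
                s, b = choice[s]
                bits.append(b)
            return np.array(bits[::-1])

        print(halftone_row(np.linspace(0, 1, 16)))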

  7. Constraint-Based Abstraction of a Model Checker for Infinite State Systems

    DEFF Research Database (Denmark)

    Banda, Gourinath; Gallagher, John Patrick

    Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal t… to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver…

  8. Designing Network-based Business Model Ontology

    DEFF Research Database (Denmark)

    Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz

    2015-01-01

    Survival in a dynamic environment is not achieved without a map. Scanning and monitoring of the market show business models to be a fruitful tool. But scholars believe that old-fashioned business models are dead, as they do not include the effects of the internet and networks. This paper proposes an e-business model ontology from the network point of view and its application in the real world. The suggested ontology for network-based businesses is composed of individuals' characteristics and what kind of resources they own, as well as their connections and preconceptions of connections, such as shared mental models and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in a business angel network to show how it works.

  9. Model-Based Power Plant Master Control

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Katarina; Thomas, Jean; Funkquist, Jonas

    2010-08-15

    The main goal of the project has been to evaluate the potential of a coordinated master control for a solid fuel power plant in terms of tracking capability, stability and robustness. The control strategy has been model-based predictive control (MPC) and the plant used in the case study has been the Vattenfall power plant Idbäcken in Nyköping. A dynamic plant model based on nonlinear physical models was used to imitate the true plant in MATLAB/SIMULINK simulations. The basis for this model was already developed in previous Vattenfall internal projects, along with a simulation model of the existing control implementation with traditional PID controllers. The existing PID control is used as a reference performance, and it has been thoroughly studied and tuned in these previous Vattenfall internal projects. A turbine model was developed with characteristics based on the results of steady-state simulations of the plant using the software EBSILON. Using the derived model as a representative for the actual process, an MPC control strategy was developed using linearization and gain-scheduling. The control signal constraints (rate of change) and constraints on outputs were implemented to comply with plant constraints. After tuning the MPC control parameters, a number of simulation scenarios were performed to compare the MPC strategy with the existing PID control structure. The simulation scenarios also included cases highlighting the robustness properties of the MPC strategy. From the study, the main conclusions are: - The proposed Master MPC controller shows excellent set-point tracking performance even though the plant has strong interactions and non-linearity, and the controls and their rate of change are bounded. - The proposed Master MPC controller is robust, stable in the presence of disturbances and parameter variations. Even though the current study only considered a very small number of the possible disturbances and modelling errors, the considered cases are
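
    The receding-horizon core of such an MPC can be sketched in a few lines: optimize a bounded input sequence against the model over a finite horizon, apply only the first move, and repeat. The plant matrices, weights, horizon, and limits below are illustrative, not the Idbäcken boiler model:

        # Receding-horizon MPC sketch for a toy linear plant with input bounds.
        import numpy as np
        from scipy.optimize import minimize

        A = np.array([[0.95, 0.1], [0.0, 0.9]])
        B = np.array([[0.0], [0.5]])
        x, setpoint, H = np.zeros(2), np.array([1.0, 0.0]), 10

        def horizon_cost(u_seq, x0):
            xk, J = x0.copy(), 0.0
            for u in u_seq:
                xk = A @ xk + B[:, 0] * u
                J += float(np.sum((xk - setpoint) ** 2) + 0.01 * u ** 2)
            return J

        for step in range(30):
            res = minimize(horizon_cost, np.zeros(H), args=(x,),
                           bounds=[(-1.0, 1.0)] * H)    # actuator limits
            x = A @ x + B[:, 0] * res.x[0]              # apply only the first move
        print(x)                                        # state driven toward setpoint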

  10. A pedagogical model for simulation-based learning in healthcare

    Directory of Open Access Journals (Sweden)

    Tuulikki Keskitalo

    2015-11-01

    The aim of this study was to design a pedagogical model for a simulation-based learning environment (SBLE) in healthcare. Currently, simulation and virtual reality are a major focus in healthcare education. However, when and how these learning environments should be applied is not well known. The present study tries to fill that gap. We pose the following research question: What kind of pedagogical model supports and facilitates students' meaningful learning in SBLEs? The study used design-based research (DBR) and case study approaches. We report the results from our second case study and how the pedagogical model was developed based on the lessons learned. The study involved nine facilitators and 25 students. Data were collected and analysed using mixed methods. The main result of this study is the refined pedagogical model. The model is based on the socio-cultural theory of learning and characteristics of meaningful learning as well as previous pedagogical models. The model will provide a more holistic and meaningful approach to teaching and learning in SBLEs. However, the model requires evidence and further development.

  11. Hypergraph-Based Recognition Memory Model for Lifelong Experience

    Science.gov (United States)

    2014-01-01

    Cognitive agents are expected to interact with and adapt to a nonstationary dynamic environment. As an initial process of decision making in a real-world agent interaction, familiarity judgment leads the following processes for intelligence. Familiarity judgment includes knowing previously encoded data as well as completing original patterns from partial information, which are fundamental functions of recognition memory. Although previous computational memory models have attempted to reflect human behavioral properties of recognition memory, they have focused on static conditions without considering temporal changes in terms of lifelong learning. To provide temporal adaptability to an agent, in this paper, we suggest a computational model for recognition memory that enables lifelong learning. The proposed model is based on a hypergraph structure, and thus it allows a high-order relationship between contextual nodes and enables incremental learning. Through a simulated experiment, we investigate the optimal conditions of the memory model and validate the consistency of memory performance for lifelong learning. PMID:25371665

  12. The Importance of Business Model Factors for Cloud Computing Adoption: Role of Previous Experiences

    Directory of Open Access Journals (Sweden)

    Bogataj Habjan Kristina

    2017-08-01

    Background and Purpose: Bringing several opportunities for more effective and efficient IT governance and service exploitation, cloud computing is expected to impact the European and global economies significantly. Market data show that, despite many advantages and promised benefits, the adoption of cloud computing is not as fast and widespread as foreseen. This situation shows the need for further exploration of the potentials of cloud computing and its implementation in the market. The purpose of this research was to identify the individual business model factors with the highest impact on cloud computing adoption. In addition, the aim was to identify differences in opinion regarding the importance of business model factors for cloud computing adoption according to companies' previous experiences with cloud computing services.

  13. Model-Based Engine Control Architecture with an Extended Kalman Filter

    Science.gov (United States)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
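
    A generic predict-update step conveys the EKF architecture: the nonlinear model f and measurement map h replace the piece-wise linear model, with Jacobians re-evaluated at each estimate. The sketch below is a textbook EKF step with placeholder arguments, not the C-MAPSS40k implementation:

        # One extended-Kalman-filter step; f, h, F_jac, H_jac, Q, R are supplied
        # by the application (here placeholders for an on-board engine model).
        import numpy as np

        def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
            # Predict through the nonlinear model.
            x_pred = f(x, u)
            F = F_jac(x, u)
            P_pred = F @ P @ F.T + Q
            # Update with the measurement innovation.
            H = H_jac(x_pred)
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new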

  14. Effectiveness of Ritonavir-Boosted Protease Inhibitor Monotherapy in Clinical Practice Even with Previous Virological Failures to Protease Inhibitor-Based Regimens.

    Directory of Open Access Journals (Sweden)

    Luis F López-Cortés

    Significant controversy still exists about ritonavir-boosted protease inhibitor monotherapy (mtPI/rtv) as a simplification strategy, which until now has been used to treat patients who have not experienced previous virological failure (VF) while on protease inhibitor (PI)-based regimens. We evaluated the effectiveness of two mtPI/rtv regimens in an actual clinical practice setting, including patients who had experienced previous VF with PI-based regimens. This retrospective study analyzed 1060 HIV-infected patients with undetectable viremia who were switched to lopinavir/ritonavir or darunavir/ritonavir monotherapy. In cases in which the patient had previously experienced VF while on a PI-based regimen, the absence of major HIV protease resistance mutations to lopinavir or darunavir, respectively, was mandatory. The primary endpoint of this study was the percentage of participants with virological suppression after 96 weeks according to intention-to-treat analysis (non-complete/missing = failure). A total of 1060 patients were analyzed, including 205 with previous VF while on PI-based regimens, 90 of whom were on complex therapies due to extensive resistance. The rates of treatment effectiveness (intention-to-treat analysis) and virological efficacy (on-treatment analysis) at week 96 were 79.3% (CI95, 76.8-81.8) and 91.5% (CI95, 89.6-93.4), respectively. No relationships were found between VF and earlier VF while on PI-based regimens, the presence of major or minor protease resistance mutations, the previous time on viral suppression, CD4+ T-cell nadir, or HCV coinfection. Genotypic resistance tests were available for 49 of the 74 patients with VF, and only four patients presented new major protease resistance mutations. Switching to mtPI/rtv achieves sustained virological control in most patients, even in those with previous VF on PI-based regimens, as long as no major resistance mutations are present for the administered drug.

  15. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that the I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments

  16. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction

    Science.gov (United States)

    Grulke, Christopher M.; Chang, Daniel T.; Brooks, Raina D.; Leonard, Jeremy A.; Phillips, Martin B.; Hypes, Ethan D.; Fair, Matthew J.; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C.; Tan, Yu-Mei

    2016-01-01

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation with ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals. PMID:26871706
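
    The analogue-ranking step reduces to correlating descriptor vectors, as sketched below; the descriptor matrix is synthetic, and index 42 merely stands in for a target chemical such as ethylbenzene:

        # Rank knowledgebase chemicals by descriptor correlation with a target.
        import numpy as np

        rng = np.random.default_rng(4)
        descriptors = rng.normal(size=(307, 12))   # 307 chemicals x 12 descriptors
        target = descriptors[42]                   # stand-in target chemical

        d = descriptors - descriptors.mean(axis=1, keepdims=True)
        t = target - target.mean()
        corr = (d @ t) / (np.linalg.norm(d, axis=1) * np.linalg.norm(t))
        print(np.argsort(corr)[::-1][:5])          # top-5 candidate analogues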

  17. GIS-Based Population Model Applied to Nevada Transportation Routes

    International Nuclear Information System (INIS)

    Mills, G.S.; Neuhauser, K.S.

    1999-01-01

    Recently, a model based on geographic information system (GIS) processing of US Census Block data has made high-resolution population analysis for transportation risk analysis technically and economically feasible. Population density bordering each kilometer of a route may be tabulated with specific route sections falling into each of three categories (Rural, Suburban or Urban) identified for separate risk analysis. In addition to the improvement in resolution of Urban areas along a route, the model provides a statistically-based correction to population densities in Rural and Suburban areas where Census Block dimensions may greatly exceed the 800-meter scale of interest. A semi-automated application of the GIS model to a subset of routes in Nevada (related to the Yucca Mountain project) is presented, and the results are compared to previous models, including a model based on published Census and other data. These comparisons demonstrate that meaningful improvement in accuracy and specificity of transportation risk analyses is dependent on correspondingly accurate and geographically-specific population density data.
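
    The three-way categorization itself is straightforward to sketch; the density thresholds below are illustrative placeholders, not the values used in the Nevada analysis:

        # Bin route kilometers into Rural/Suburban/Urban by bordering density.
        def classify_route(densities_per_km, suburban=50.0, urban=1000.0):
            """densities_per_km: people per km^2 bordering each route kilometer."""
            labels = []
            for rho in densities_per_km:
                if rho >= urban:
                    labels.append("Urban")
                elif rho >= suburban:
                    labels.append("Suburban")
                else:
                    labels.append("Rural")
            return labels

        print(classify_route([3.0, 75.0, 2400.0]))   # -> Rural, Suburban, Urban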

  18. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan; Lau, Cheryl; Müller, Pascal; Wonka, Peter; Pauly, Mark

    2017-01-01

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  19. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  20. A physiologically-based pharmacokinetic (PB-PK) model for ethylene dibromide : relevance of extrahepatic metabolism

    NARCIS (Netherlands)

    Hissink, A M; Wormhoudt, L.W.; Sherratt, P.J.; Hayes, D.J.; Commandeur, J N; Vermeulen, N P; van Bladeren, P.J.

    A physiologically-based pharmacokinetic (PB-PK) model was developed for ethylene dibromide (1,2-dibromoethane, EDB) for rats and humans, partly based on previously published in vitro data (Ploemen et al., 1997). In the present study, this PB-PK model has been validated for the rat. In addition, new

  1. A physiologically-based pharmacokinetic (PB-PK) model for ethylene dibromide : relevance of extrahepatic metabolism

    NARCIS (Netherlands)

    Hissink, A.M.; Wormhoudt, L.W.; Sherratt, P.J.; Hayes, J.D.; Commandeur, J.N.M.; Vermeulen, N.P.E.; Bladeren, P.J. van

    2000-01-01

    A physiologically-based pharmacokinetic (PB-PK) model was developed for ethylene dibromide (1,2-dibromoethane, EDB) for rats and humans, partly based on previously published in vitro data (Ploemen et al., 1997). In the present study, this PB-PK model has been validated for the rat. In addition, new

  2. Not just the norm: exemplar-based models also predict face aftereffects.

    Science.gov (United States)

    Ross, David A; Deroche, Mickael; Palmeri, Thomas J

    2014-02-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.
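
    A toy illustration of how an exemplar model can produce such aftereffects, under the assumption that adaptation reduces the responsiveness of exemplars similar to the adaptor; the face space, tuning width and adaptation strength are all hypothetical:

      import numpy as np

      rng = np.random.default_rng(1)
      exemplars = rng.normal(size=(200, 2))   # stored faces in a 2-D face space
      adaptor = np.array([3.0, 0.0])          # attributes of the adaptor face

      # Adaptation: exemplars near the adaptor lose responsiveness.
      d_adapt = np.linalg.norm(exemplars - adaptor, axis=1)
      gain = 1.0 - 0.5 * np.exp(-d_adapt**2 / 2.0)

      test = np.zeros(2)                      # test with the average face
      d_test = np.linalg.norm(exemplars - test, axis=1)
      act = gain * np.exp(-d_test**2 / 2.0)   # similarity-weighted activations

      # Population readout: the percept shifts opposite to the adaptor,
      # i.e. a face identity aftereffect without any explicit norm.
      percept = (act[:, None] * exemplars).sum(axis=0) / act.sum()
      print(percept)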

  3. THE INFLUENCE OF THE ASSESSMENT MODEL AND METHOD TOWARD THE SCIENCE LEARNING ACHIEVEMENT BY CONTROLLING THE STUDENTS' PREVIOUS KNOWLEDGE OF MATHEMATICS.

    OpenAIRE

    Adam Rumbalifar; I. G. N. Agung; Burhanuddin Tola.

    2018-01-01

    This research aims to study the influence of the assessment model and method on science learning achievement by controlling the students' previous knowledge of mathematics. The study was conducted at SMP East Seram district with a population of 295 students, applying a quasi-experimental method with a 2 x 2 factorial design using the ANCOVA model. The findings after controlling the students' previous knowledge of mathematics show that the science learning achievement of th...

  4. Augment clinical measurement using a constraint-based esophageal model

    Science.gov (United States)

    Kou, Wenjun; Acharya, Shashank; Kahrilas, Peter; Patankar, Neelesh; Pandolfino, John

    2017-11-01

    Quantifying the mechanical properties of the esophageal wall is crucial to understanding impairments of trans-esophageal flow characteristic of several esophageal diseases. However, these data are unavailable owing to technological limitations of current clinical diagnostic instruments that instead display esophageal luminal cross sectional area based on intraluminal impedance change. In this work, we developed an esophageal model to predict bolus flow and the wall property based on clinical measurements. The model used the constraint-based immersed-boundary method developed previously by our group. Specifically, we first approximate the time-dependent wall geometry based on impedance planimetry data on luminal cross sectional area. We then feed these, along with pressure data, into the model and compute wall tension from the simulated pressure and flow fields, and the material property from the stress-strain relationship. As examples, we applied this model to augment FLIP (Functional Luminal Imaging Probe) measurements in three clinical cases: a normal subject, achalasia, and eosinophilic esophagitis (EoE). Our findings suggest that the wall stiffness was greatest in the EoE case, followed by the achalasia case, and then the normal. This is supported by NIH Grant R01 DK56033 and R01 DK079902.

  5. Previous experience in manned space flight: A survey of human factors lessons learned

    Science.gov (United States)

    Chandlee, George O.; Woolford, Barbara

    1993-01-01

    Previous experience in manned space flight programs can be used to compile a data base of human factors lessons learned for the purpose of developing aids in the future design of inhabited spacecraft. The objectives are to gather information available from relevant sources, to develop a taxonomy of human factors data, and to produce a data base that can be used in the future for those people involved in the design of manned spacecraft operations. A study is currently underway at the Johnson Space Center with the objective of compiling, classifying, and summarizing relevant human factors data bearing on the lessons learned from previous manned space flights. The research reported defines sources of data, methods for collection, and proposes a classification for human factors data that may be a model for other human factors disciplines.

  6. An agent-based computational model for tuberculosis spreading on age-structured populations

    Science.gov (United States)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

    In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease on age-structured populations. The model proposed is a merge of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with the population aging, reproduces the coexistence of health states, as seen in real populations. In addition, the universal exponential behavior of mortalities curves is still preserved. Finally, the population distribution as function of age shows the prevalence of TB mostly in elders, for high efficacy treatments.
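
    A compressed sketch of such a merged model, combining a contact-driven infection rule with Penna-style bit-string aging; all rates, thresholds, the genome length and the birth process are invented for illustration:

      import random

      random.seed(42)
      BITS, LIMIT = 64, 6

      def genome():
          # Sparse bit-string: each set bit marks a deleterious mutation that
          # becomes active in the year matching its position (Penna model).
          return (random.getrandbits(BITS) & random.getrandbits(BITS)
                  & random.getrandbits(BITS))

      pop = [{"age": random.randint(0, 40), "genome": genome(),
              "tb": random.random() < 0.01} for _ in range(1000)]

      for year in range(200):
          if not pop:
              break
          prevalence = sum(a["tb"] for a in pop) / len(pop)
          nxt = []
          for a in pop:
              a["age"] += 1
              if not a["tb"] and random.random() < 0.1 * prevalence:
                  a["tb"] = True                    # transmission by contact
              elif a["tb"] and random.random() < 0.05:
                  a["tb"] = False                   # recovery under treatment
              # Penna aging: die once expressed mutations exceed the limit.
              mask = (1 << min(a["age"], BITS)) - 1
              if bin(a["genome"] & mask).count("1") < LIMIT:
                  nxt.append(a)
                  if random.random() < 0.025:       # simple birth process
                      nxt.append({"age": 0, "genome": genome(), "tb": False})
          pop = nxt

      print(len(pop), "alive; TB prevalence:",
            round(sum(a["tb"] for a in pop) / len(pop), 3))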

  7. SLS Model Based Design: A Navigation Perspective

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  8. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  9. Impacts of previous crops on Fusarium foot and root rot, and on yields of durum wheat in North West Tunisia

    Directory of Open Access Journals (Sweden)

    Samia CHEKALI

    2016-07-01

    The impacts of ten previous crop rotations (cereals, legumes and fallow) on Fusarium foot and root rot of durum wheat were investigated for three cropping seasons in a trial established in 2004 in Northwest Tunisia. Fungi isolated from the roots and stem bases were identified using morphological and molecular methods, and were primarily Fusarium culmorum and F. pseudograminearum. Under low rainfall conditions, the previous crop affected F. pseudograminearum incidence on durum wheat roots but not F. culmorum. Compared to continuous cropping of durum wheat, barley as a previous crop increased disease incidence more than fivefold, while legumes and fallow tended to reduce incidence. Barley as a previous crop increased wheat disease severity by 47%, compared to other rotations. Grain yield was negatively correlated with the incidence of F. culmorum infection, both in roots and stem bases, and fitted an exponential model (R2 = -0.61 for roots and -0.77 for stem bases, P < 0.0001). Fusarium pseudograminearum was also negatively correlated with yield and fitted an exponential model (R2 = -0.53 on roots and -0.71 on stem bases, P < 0.0001) but was not correlated with severity.
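
    The reported yield-incidence relationship can be reproduced in outline by fitting an exponential decay; the data points below are invented placeholders, not the trial's measurements:

      import numpy as np
      from scipy.optimize import curve_fit

      incidence = np.array([5.0, 12, 20, 35, 50, 70])       # % infected stem bases
      yield_tha = np.array([4.1, 3.7, 3.2, 2.5, 2.0, 1.4])  # grain yield, t/ha

      def model(x, a, b):
          return a * np.exp(-b * x)  # yield decays exponentially with infection

      (a, b), _ = curve_fit(model, incidence, yield_tha, p0=(4.0, 0.01))
      pred = model(incidence, a, b)
      r2 = 1 - ((yield_tha - pred)**2).sum() / ((yield_tha - yield_tha.mean())**2).sum()
      print(f"yield = {a:.2f} * exp(-{b:.4f} * incidence), R^2 = {r2:.2f}")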

  10. An Outcrop-based Detailed Geological Model to Test Automated Interpretation of Seismic Inversion Results

    NARCIS (Netherlands)

    Feng, R.; Sharma, S.; Luthi, S.M.; Gisolf, A.

    2015-01-01

    Previously, Tetyukhina et al. (2014) developed a geological and petrophysical model based on the Book Cliffs outcrops that contained eight lithotypes. For reservoir modelling purposes, this model is judged to be too coarse because in the same lithotype it contains reservoir and non-reservoir

  11. Model-Based Battery Management Systems: From Theory to Practice

    Science.gov (United States)

    Pathak, Manan

    Lithium-ion batteries are now extensively being used as the primary storage source. Capacity and power fade, and slow recharging times are key issues that restrict its use in many applications. Battery management systems are critical to address these issues, along with ensuring its safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which gives an improved order of convergence for optimal control problems. Lastly, this dissertation also explores a physics-based model for predicting the linear impedance of a battery, and develops a freeware that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic and material properties of the battery based on the linear impedance spectra.

  12. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since an RNN has a fixed-size memory, it cannot store all the information about the words it has seen earlier in the sentence, and useful long-term information may therefore be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...

  13. Brief introductory guide to agent-based modeling and an illustration from urban health research

    Directory of Open Access Journals (Sweden)

    Amy H. Auchincloss

    2015-11-01

    There is growing interest among urban health researchers in addressing complex problems using conceptual and computation models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, hindering the understanding and application of it. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation.

  14. Brief introductory guide to agent-based modeling and an illustration from urban health research.

    Science.gov (United States)

    Auchincloss, Amy H; Garcia, Leandro Martin Totaro

    2015-11-01

    There is growing interest among urban health researchers in addressing complex problems using conceptual and computation models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, hindering the understanding and application of it. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation.
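
    In the spirit of that introductory guide, a bare-bones ABM can be written in a few lines; the grid, the two groups and the social-influence rule below are illustrative inventions, not the published segregation/diet model:

      import random

      random.seed(0)
      GRID, STEPS = 20, 50
      agents = {(x, y): {"group": random.choice("AB"), "diet": random.random()}
                for x in range(GRID) for y in range(GRID)}

      def neighbours(x, y):
          return [agents[(x + dx) % GRID, (y + dy) % GRID]
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

      for _ in range(STEPS):
          for (x, y), a in agents.items():
              same = [n for n in neighbours(x, y) if n["group"] == a["group"]]
              if same:  # diet drifts toward the same-group neighbourhood mean
                  target = sum(n["diet"] for n in same) / len(same)
                  a["diet"] += 0.1 * (target - a["diet"])

      for g in "AB":
          vals = [a["diet"] for a in agents.values() if a["group"] == g]
          print(g, round(sum(vals) / len(vals), 3))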

  15. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu, Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
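
    A simplified sketch of the core prediction step, replacing the full HMM with a plain Markov chain over the three breathing states; the transition probabilities and per-state velocities are invented, not patient data:

      import numpy as np

      # 0 = inhale, 1 = exhale, 2 = end-of-exhale; rows sum to 1.
      T = np.array([[0.90, 0.10, 0.00],
                    [0.00, 0.85, 0.15],
                    [0.30, 0.00, 0.70]])
      velocity = np.array([8.0, -6.0, 0.5])   # tumour velocity per state, mm/s

      def expected_velocity(state, steps):
          """Propagate the state distribution `steps` cycles ahead."""
          p = np.eye(3)[state]
          for _ in range(steps):
              p = p @ T
          return p @ velocity

      # Forecast 200 ms ahead with a 33 ms control cycle (6 steps).
      print(f"{expected_velocity(0, 6):+.2f} mm/s")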

  16. Segment-based acoustic models for continuous speech recognition

    Science.gov (United States)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.

  17. The Actualization of Literary Learning Model Based on Verbal-Linguistic Intelligence

    Directory of Open Access Journals (Sweden)

    Nur Ihsan Halil

    2017-10-01

    This article is inspired by Howard Gardner's concept of linguistic intelligence and by some authors' previous writings, all of which served as the authors' references in developing ideas for constructing a literary learning model based on linguistic intelligence. The article was written not by collecting data empirically but by developing an existing concept, linguistic intelligence, into a literature-based model of verbal-linguistic learning. The purpose of this paper is to answer the question of how to apply a literary learning model based on verbal-linguistic intelligence. Drawing on Gardner's concept, the author formulated such a model through a story-telling learning model with five steps, namely arguing, discussing, interpreting, speaking, and writing about literary works. In short, the writer draws the conclusion that learning models based on verbal-linguistic intelligence can be designed with attention to five components, namely (1) definition, (2) characteristics, (3) teaching strategy, (4) final learning outcomes, and (5) figures.

  18. Response to health insurance by previously uninsured rural children.

    Science.gov (United States)

    Tilford, J M; Robbins, J M; Shema, S J; Farmer, F L

    1999-08-01

    To examine the healthcare utilization and costs of previously uninsured rural children. Four years of claims data from a school-based health insurance program located in the Mississippi Delta. All children who were not Medicaid-eligible or were uninsured were eligible for limited benefits under the program. The 1987 National Medical Expenditure Survey (NMES) was used to compare utilization of services. The study represents a natural experiment in the provision of insurance benefits to a previously uninsured population. Premiums for the claims cost were set with little or no information on expected use of services. Claims from the insurer were used to form a panel data set. Mixed model logistic and linear regressions were estimated to determine the response to insurance for several categories of health services. The use of services increased over time and approached the level of utilization in the NMES. Conditional medical expenditures also increased over time. Actuarial estimates of claims cost greatly exceeded actual claims cost. The provision of a limited medical, dental, and optical benefit package cost approximately $20-$24 per member per month in claims paid. An important uncertainty in providing health insurance to previously uninsured populations is whether a pent-up demand exists for health services. Evidence of a pent-up demand for medical services was not supported in this study of rural school-age children. States considering partnerships with private insurers to implement the State Children's Health Insurance Program could lower premium costs by assembling basic data on previously uninsured children.

  19. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    Science.gov (United States)

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
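
    The kind of population model referenced here is typically the transit-compartment myelosuppression model of Friberg et al.; a sketch of its ODE system follows, with placeholder parameter values and a deliberately simplistic one-compartment drug exposure:

      import numpy as np
      from scipy.integrate import solve_ivp

      BASE, MTT, GAMMA, SLOPE = 5.0, 120.0, 0.17, 10.0  # ANC 1e9/L, h, -, L/mg
      KTR = 4.0 / MTT                                   # three transit compartments

      def conc(t):
          return 0.5 * np.exp(-0.1 * t)                 # illustrative drug PK, mg/L

      def rhs(t, y):
          prol, t1, t2, t3, circ = y
          edrug = SLOPE * conc(t)                       # linear drug effect
          feedback = (BASE / circ) ** GAMMA             # rebound via feedback
          return [KTR * prol * ((1 - edrug) * feedback - 1),
                  KTR * (prol - t1),
                  KTR * (t1 - t2),
                  KTR * (t2 - t3),
                  KTR * (t3 - circ)]

      sol = solve_ivp(rhs, (0.0, 480.0), [BASE] * 5, dense_output=True)
      anc = sol.sol(np.arange(0.0, 480.0, 24.0))[4]     # daily circulating ANC
      print(np.round(anc, 2))

    Given daily ANC observations, individual parameters would be re-estimated and the fitted system integrated forward to forecast the nadir and the day of recovery to baseline.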

  20. Cultivation-based multiplex phenotyping of human gut microbiota allows targeted recovery of previously uncultured bacteria

    DEFF Research Database (Denmark)

    Rettedal, Elizabeth; Gumpert, Heidi; Sommer, Morten

    2014-01-01

    The human gut microbiota is linked to a variety of human health issues and implicated in antibiotic resistance gene dissemination. Most of these associations rely on culture-independent methods, since it is commonly believed that gut microbiota cannot be easily or sufficiently cultured. Here, we...... microbiota. Based on the phenotypic mapping, we tailor antibiotic combinations to specifically select for previously uncultivated bacteria. Utilizing this method we cultivate and sequence the genomes of four isolates, one of which apparently belongs to the genus Oscillibacter; uncultivated Oscillibacter...

  1. A CBR-Based and MAHP-Based Customer Value Prediction Model for New Product Development

    Science.gov (United States)

    Zhao, Yu-Jie; Luo, Xin-xing; Deng, Li

    2014-01-01

    In the fierce market environment, an enterprise that wants to meet customer needs and boost its market profit and share must focus on new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV) for new product development. However, to better meet customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM). CBR (case-based reasoning) can reduce experts' workload and evaluation time, while MAHP (multiplicative analytic hierarchy process) can use the actual but averaged effectiveness of influencing factors in simulation; at the same time, C&M-CVPM uses dynamic customer transition probabilities, which are closer to reality. This study not only introduces the realization of CBR and MAHP but also elaborates C&M-CVPM's three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a simulation experiment. PMID:25162050

  2. Previous Experience a Model of Practice UNAE

    Directory of Open Access Journals (Sweden)

    Ormary Barberi Ruiz

    2017-02-01

    The statements presented in this article represent a preliminary version of the proposed model of pre-professional practices (PPP) of the National University of Education (UNAE) of Ecuador. An urgent institutional necessity is revealed in the descriptive analyses conducted from technical-administrative support (reports, interviews, testimonials), the pedagogical foundations of UNAE (curricular directionality, transverse axes in practice, career plan, approach and diagnostic examination) as the subject nature of the pre-professional practice, and the demands of the socio-educational contexts where the practices have been emerging, in order to resize them. Relating these elements allowed conceiving a model of the pre-professional practice processes for developing the professional skills of future teachers through four components: contextual-projective, implementation (tutoring), accompaniment (teaching couple) and monitoring (meetings at the beginning, during and end of practice). The initial training of teachers is inherent to teaching (academic and professional training), research and links with the community; these are fundamental pillars of Ecuadorian higher education.

  3. Using a symbolic process model as input for model-based fMRI analysis : Locating the neural correlates of problem state replacements

    NARCIS (Netherlands)

    Borst, J.P.; Taatgen, N.A.; Van Rijn, D.H.

    2011-01-01

    In this paper, a model-based analysis method for fMRI is used with a high-level symbolic process model. Participants performed a triple-task in which intermediate task information needs to be updated frequently. Previous work has shown that the associated resource - the problem state resource - acts

  4. A trace-based model for multiparty contracts

    DEFF Research Database (Denmark)

    Hvitved, Tom; Klaedtke, Felix; Zălinescu, Eugen

    2012-01-01

    In this article we present a model for multiparty contracts in which contract conformance is defined abstractly as a property on traces. A key feature of our model is blame assignment, which means that for a given contract, every breach is attributed to a set of parties. We show that blame assignment is compositional by defining contract conjunction and contract disjunction. Moreover, to specify real-world contracts, we introduce the contract specification language CSL with an operational semantics. We show that each CSL contract has a counterpart in our trace-based model and from the operational semantics we derive a run-time monitor. CSL overcomes limitations of previously proposed formalisms for specifying contracts by supporting: (history-sensitive and conditional) commitments, parametrised contract templates, relative and absolute temporal constraints, potentially infinite contracts...

  5. In vivo dentate nucleus MRI relaxometry correlates with previous administration of Gadolinium-based contrast agents

    Energy Technology Data Exchange (ETDEWEB)

    Tedeschi, Enrico; Canna, Antonietta; Cocozza, Sirio; Russo, Carmela; Angelini, Valentina; Brunetti, Arturo [University 'Federico II', Neuroradiology, Department of Advanced Biomedical Sciences, Naples (Italy); Palma, Giuseppe; Quarantelli, Mario [National Research Council, Institute of Biostructure and Bioimaging, Naples (Italy); Borrelli, Pasquale; Salvatore, Marco [IRCCS SDN, Naples (Italy); Lanzillo, Roberta; Postiglione, Emanuela; Morra, Vincenzo Brescia [University 'Federico II', Department of Neurosciences, Reproductive and Odontostomatological Sciences, Naples (Italy)

    2016-12-15

    To evaluate changes in T1 and T2* relaxometry of dentate nuclei (DN) with respect to the number of previous administrations of Gadolinium-based contrast agents (GBCA). In 74 relapsing-remitting multiple sclerosis (RR-MS) patients with variable disease duration (9.8±6.8 years) and severity (Expanded Disability Status Scale scores: 3.1±0.9), the DN R1 (1/T1) and R2* (1/T2*) relaxation rates were measured using two unenhanced 3D Dual-Echo spoiled Gradient-Echo sequences with different flip angles. Correlations of the number of previous GBCA administrations with DN R1 and R2* relaxation rates were tested, including gender and age effect, in a multivariate regression analysis. The DN R1 (normalized by brainstem) significantly correlated with the number of GBCA administrations (p<0.001), maintaining the same significance even when including MS-related factors. Instead, the DN R2* values correlated only with age (p=0.003), and not with GBCA administrations (p=0.67). In a subgroup of 35 patients for whom the administered GBCA subtype was known, the effect of GBCA on DN R1 appeared mainly related to linear GBCA. In RR-MS patients, the number of previous GBCA administrations correlates with R1 relaxation rates of DN, while R2* values remain unaffected, suggesting that T1-shortening in these patients is related to the amount of Gadolinium given. (orig.)
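
    The two-flip-angle acquisition supports a standard variable-flip-angle (DESPOT1-type) R1 estimate; the sketch below uses this generic linearisation with invented signal values, and is not the specific relaxometry pipeline of the study:

      import numpy as np

      TR = 0.008                               # repetition time, s
      alphas = np.radians([4.0, 18.0])         # the two flip angles
      signal = np.array([120.0, 210.0])        # mean dentate-nucleus signals

      # The SPGR signal is linear in (S/tan a) vs (S/sin a); the slope is
      # E1 = exp(-TR * R1), so two flip angles suffice for a two-point fit.
      y = signal / np.sin(alphas)
      x = signal / np.tan(alphas)
      E1 = (y[1] - y[0]) / (x[1] - x[0])
      R1 = -np.log(E1) / TR
      print(f"R1 = {R1:.2f} 1/s (T1 = {1000 / R1:.0f} ms)")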

  6. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.

  7. Allometric Models to Predict Aboveground Woody Biomass of Black Locust (Robinia pseudoacacia L.) in Short Rotation Coppice in Previous Mining and Agricultural Areas in Germany

    Directory of Open Access Journals (Sweden)

    Christin Carl

    2017-09-01

    Black locust is a drought-resistant tree species with high biomass productivity during juvenility; it is able to thrive on wastelands, such as former brown coal fields and dry agricultural areas. However, research conducted on this species in such areas is limited. This paper aims to provide a basis for predicting tree woody biomass for black locust based on tree, competition, and site variables at 14 sites in northeast Germany that were previously utilized for mining or agriculture. The study areas, which are located in an area covering 320 km × 280 km, are characterized by a variety of climatic and soil conditions. Influential variables, including tree parameters, competition, and climatic parameters were considered. Allometric biomass models were employed. The findings show that the most important parameters are tree and competition variables. Different former land utilizations, such as mining or agriculture, as well as growth by cores or stumps, significantly influenced aboveground woody biomass production. The new biomass models developed as part of this study can be applied to calculate woody biomass production and carbon sequestration of Robinia pseudoacacia L. in short rotation coppices in previous mining and agricultural areas.
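
    Allometric biomass models of this kind are commonly fitted as straight lines in log-log space; the sketch below does exactly that for the basic B = a * DBH^b form, with invented sample data rather than the paper's measurements:

      import numpy as np

      dbh = np.array([2.1, 3.4, 4.8, 6.2, 7.9, 9.5])         # cm
      biomass = np.array([0.9, 2.8, 6.1, 11.4, 20.2, 31.7])  # kg dry mass

      # ln(B) = ln(a) + b * ln(DBH): an ordinary least-squares line.
      b, ln_a = np.polyfit(np.log(dbh), np.log(biomass), 1)
      a = np.exp(ln_a)
      print(f"B = {a:.3f} * DBH^{b:.2f}")
      print(f"predicted biomass at DBH 5 cm: {a * 5.0**b:.2f} kg")

    Extending the model with competition and site covariates amounts to adding further terms to the same log-linear regression.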

  8. Development of a CANDU Moderator Analysis Model; Based on Coupled Solver

    International Nuclear Information System (INIS)

    Yoon, Churl; Park, Joo Hwan

    2006-01-01

    A CFD model for predicting the CANDU-6 moderator temperature, based on CFX-4, has been under development for several years at KAERI. This analytic model (CFX4-CAMO) has some strengths in the modeling of hydraulic resistance in the core region and in the treatment of the heat source term in the energy equations. However, convergence difficulties and slow computing speed prove to be the limitations of this model, because the CFX-4 code adopts a segregated solver for governing equations with strong coupling effects. Compared to CFX-4 with its segregated solver, CFX-10 adopts a highly efficient and robust coupled solver. Before December 2005, when CFX-10 was distributed, the previous version (the CFX-5 series) also adopted a coupled solver but lacked the capability to apply porous media approaches correctly. In this study, the moderator analysis model based on CFX-4 (CFX4-CAMO) is transformed into a new moderator analysis model based on CFX-10. The new model is examined and the results are compared to the former

  9. Determining root correspondence between previously and newly detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, N Reginald

    2014-06-17

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  10. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    Science.gov (United States)

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the streets network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipes junctions, and has been modified here to fit X-shaped crossroads. The results were compared with the results of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreements can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.
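
    A minimal sketch of the kinematic-wave routing used for each street, with an upwind finite-difference update and a Manning flux law; the channel discretisation, slope and inflow are invented:

      import numpy as np

      N, DX, DT = 50, 10.0, 0.5        # cells, cell length (m), time step (s)
      SLOPE, MANNING = 0.01, 0.015
      h = np.zeros(N)                  # water depth per cell (m)
      inflow = 0.05                    # upstream unit discharge (m^2/s)

      for _ in range(2000):
          q = h ** (5.0 / 3.0) * np.sqrt(SLOPE) / MANNING  # Manning, per unit width
          h[0] += DT / DX * (inflow - q[0])                # upstream boundary
          h[1:] += DT / DX * (q[:-1] - q[1:])              # upwind continuity update
      print(f"outlet depth after {2000 * DT:.0f} s: {h[-1]:.3f} m")

    At crossroads, this single-channel update would hand over to the junction scheme described above, splitting the incoming discharge among the downstream branches.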

  11. Previously unknown species of Aspergillus.

    Science.gov (United States)

    Gautier, M; Normand, A-C; Ranque, S

    2016-08-01

    The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species that are frequently isolated from air samples are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhance the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and

  12. Reasoning with Previous Decisions: Beyond the Doctrine of Precedent

    DEFF Research Database (Denmark)

    Komárek, Jan

    2013-01-01

    ... law method’, but they are no less rational and intellectually sophisticated. The reason for the rather conceited attitude of some comparatists is in the dominance of the common law paradigm of precedent and the accompanying ‘case law method’. If we want to understand how courts and lawyers in different jurisdictions use previous judicial decisions in their argument, we need to move beyond the concept of precedent to a wider notion, which would embrace practices and theories in legal systems outside the Common law tradition. This article presents the concept of ‘reasoning with previous decisions’ as such an alternative and develops its basic models. The article first points out several shortcomings inherent in limiting the inquiry into reasoning with previous decisions by the common law paradigm (1). On the basis of numerous examples provided in section (1), I will present two basic models of reasoning...

  13. Turn-based evolution in a simplified model of artistic creative process

    DEFF Research Database (Denmark)

    Dahlstedt, Palle

    2015-01-01

    Evolutionary computation has often been presented as a possible model for creativity in computers. In this paper, evolution is discussed in the light of a theoretical model of human artistic process, recently presented by the author. Some crucial differences between human artistic creativity and natural evolution are observed and discussed, also in the light of other creative processes occurring in nature. As a tractable way to overcome these limitations, a new kind of evolutionary implementation of creativity is proposed, based on a simplified version of the previously presented model, and the results of initial experiments are presented and discussed. Artistic creativity is here modeled as an iterated turn-based process, alternating between a conceptual representation and a material representation of the work-to-be. Evolutionary computation is proposed as a heuristic solution to the principal...

  14. Daily Based Morgan–Morgan–Finney (DMMF) Model: A Spatially Distributed Conceptual Soil Erosion Model to Simulate Complex Soil Surface Configurations

    Directory of Open Access Journals (Sweden)

    Kwanghun Choi

    2017-04-01

    In this paper, we present the Daily based Morgan–Morgan–Finney model. The main processes in this model are based on the Morgan–Morgan–Finney soil erosion model, and it is suitable for estimating surface runoff and sediment redistribution patterns in seasonal climate regions with complex surface configurations. We achieved temporal flexibility by utilizing daily time steps, which is suitable for regions with concentrated seasonal rainfall. We introduce the proportion of impervious surface cover as a parameter to reflect its impacts on soil erosion through blocking water infiltration and protecting the soil from detachment. Also, several equations and sequences of sub-processes are modified from the previous model to better represent physical processes. In the sensitivity analysis using the Sobol’ method, the DMMF model shows a rational response to the input parameters, consistent with the results from the previous versions. To evaluate the model performance, we applied the model to two potato fields in South Korea that had complex surface configurations using plastic covered ridges at various temporal periods during the monsoon season. Our new model shows acceptable performance for runoff and sediment loss estimation (NSE ≥ 0.63, |PBIAS| ≤ 17.00, and RSR ≤ 0.57). Our findings demonstrate that the DMMF model is able to predict the surface runoff and sediment redistribution patterns for cropland with complex surface configurations.

  15. Estimating the effect of current, previous and never use of drugs in studies based on prescription registries

    DEFF Research Database (Denmark)

    Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms

    2009-01-01

    PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance of this misclassification for analysing the risk of breast cancer. MATERIALS AND METHODS: Prescription data were obtained from the Danish Registry of Medicinal Products Statistics and we applied various methods to approximate treatment episodes. We analysed the duration of HT episodes to study the ability to identify...

  16. Previous Experience a Model of Practice UNAE

    OpenAIRE

    Ormary Barberi Ruiz; María Dolores Pesántez Palacios

    2017-01-01

    The statements presented in this article represent a preliminary version of the proposed model of pre-professional practices (PPP) of the National University of Education (UNAE) of Ecuador. An urgent institutional necessity is revealed in the descriptive analyses conducted from technical-administrative support (reports, interviews, testimonials), pedagogical foundations of UNAE (curricular directionality, transverse axes in practice, career plan, approach and diagnostic examination as subj...

  17. Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy.

    Science.gov (United States)

    Morimoto, Satoshi; Remijn, Gerard B; Nakajima, Yoshitaka

    2016-01-01

    Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment, in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy. Following this, we compared the explanatory power of the models. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.

  18. Assessment of damage localization based on spatial filters using numerical crack propagation models

    International Nuclear Information System (INIS)

    Deraemaeker, Arnaud

    2011-01-01

    This paper is concerned with vibration-based structural health monitoring with a focus on non-model-based damage localization. The type of damage investigated is cracking of concrete structures due to the loss of prestress. In previous works, an automated method based on spatial filtering techniques applied to large dynamic strain sensor networks has been proposed and tested using data from numerical simulations. In the simulations, simplified representations of cracks (such as a reduced Young's modulus) have been used. While this gives the general trend for global properties such as eigenfrequencies, the change of more local features, such as strains, is not adequately represented. Instead, crack propagation models should be used. In this study, a first attempt is made in this direction for concrete structures (a quasi-brittle material with softening laws) using crack-band models implemented in the commercial software DIANA. The strategy consists of performing a non-linear computation which leads to cracking of the concrete, followed by a dynamic analysis. The dynamic response is then used as the input to the previously designed damage localization system in order to assess its performance.

  19. The Actualization of Literary Learning Model Based on Verbal-Linguistic Intelligence

    Science.gov (United States)

    Hali, Nur Ihsan

    2017-01-01

    This article is inspired by Howard Gardner's concept of linguistic intelligence and also from some authors' previous writings. All of them became the authors' reference in developing ideas on constructing a literary learning model based on linguistic intelligence. The writing of this article is not done by collecting data empirically, but by…

  20. From exemplar to grammar: a probabilistic analogy-based model of language learning.

    Science.gov (United States)

    Bod, Rens

    2009-07-01

    While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase-structure trees should be assigned to initial sentences, s/he allows (implicitly) for all possible trees and lets linguistic experience decide which is the "best" tree for each sentence. The best tree is obtained by maximizing "structural analogy" between a sentence and previous sentences, which is formalized by the most probable shortest combination of subtrees from all trees of previous sentences. Corpus-based experiments with this model on the Penn Treebank and the Childes database indicate that it can learn both exemplar-based and rule-based aspects of language, ranging from phrasal verbs to auxiliary fronting. By having learned the syntactic structures of sentences, we have also learned the grammar implicit in these structures, which can in turn be used to produce new sentences. We show that our model mimics children's language development from item-based constructions to abstract constructions, and that the model can simulate some of the errors made by children in producing complex questions. Copyright © 2009 Cognitive Science Society, Inc.

  1. 75 FR 28485 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2010-05-21

    ... Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft... Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.): Amendment 39... previously held by Israel Aircraft Industries, Ltd.) Model Gulfstream 100 airplanes; and Model Astra SPX and...

  2. Benchmarking in pathology: development of an activity-based costing model.

    Science.gov (United States)

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
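
    The tree-structured allocation can be pictured with a two-level toy example in which each level's avoidable overhead is spread over the test volumes beneath it; the departments, costs and volumes are invented, not BiP figures:

      lab_overhead = 50000.0
      departments = {
          "chemistry": {"overhead": 120000.0,
                        "tests": {"glucose": (15000.0, 60000),
                                  "creatinine": (10000.0, 40000)}},
          "haematology": {"overhead": 80000.0,
                          "tests": {"full_blood_count": (30000.0, 50000)}},
      }

      total_volume = sum(v for d in departments.values()
                         for _, v in d["tests"].values())
      for dname, dept in departments.items():
          dept_volume = sum(v for _, v in dept["tests"].values())
          for test, (direct, volume) in dept["tests"].items():
              # Each test carries its direct cost plus volume-weighted shares
              # of the departmental and laboratory-level overheads.
              allocated = (direct
                           + dept["overhead"] * volume / dept_volume
                           + lab_overhead * volume / total_volume)
              print(f"{dname}/{test}: cost per test = {allocated / volume:.2f}")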

  3. Improved ability of biological and previous caries multimarkers to predict caries disease as revealed by multivariate PLS modelling

    Directory of Open Access Journals (Sweden)

    Ericson Thorild

    2009-11-01

    Background: Dental caries is a chronic disease with plaque bacteria, diet and saliva modifying disease activity. Here we have used the PLS method to evaluate a multiplicity of such biological variables (n = 88) for ability to predict caries in a cross-sectional (baseline caries) and prospective (2-year caries development) setting. Methods: Multivariate PLS modelling was used to associate the many biological variables with caries recorded in thirty 14-year-old children by measuring the numbers of incipient and manifest caries lesions at all surfaces. Results: A wide but shallow gliding scale of one fifth caries promoting or protecting, and four fifths non-influential, variables occurred. The influential markers behaved in the order of plaque bacteria > diet > saliva, with previously known plaque bacteria/diet markers and a set of new protective diet markers. A differential variable patterning appeared for new versus progressing lesions. The influential biological multimarkers (n = 18) predicted baseline caries better (ROC area 0.96) than five markers (0.92) and a single lactobacilli marker (0.7), with sensitivity/specificity of 1.87, 1.78 and 1.13 at 1/3 of the subjects diagnosed sick, respectively. Moreover, biological multimarkers (n = 18) explained 2-year caries increment slightly better than reported before but predicted it poorly (ROC area 0.76). By contrast, multimarkers based on previous caries predicted increment well, alone (ROC area 0.88) or together with biological multimarkers (0.94), with a sensitivity/specificity of 1.74 at 1/3 of the subjects diagnosed sick. Conclusion: Multimarkers behave better than single-to-five markers but future multimarker strategies will require systematic searches for improved saliva and plaque bacteria markers.
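
    In outline, the multimarker analysis pairs a PLS model over all variables with an ROC evaluation at the one-third-diagnosed-sick cut-off; the sketch below mirrors those dimensions with simulated data, so the numbers it prints are not the study's:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      X = rng.normal(size=(30, 88))        # 30 children x 88 biological markers
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=30)  # caries counts

      pls = PLSRegression(n_components=2).fit(X, y)
      scores = pls.predict(X).ravel()

      sick = (y >= np.quantile(y, 2 / 3)).astype(int)  # top third labelled sick
      print(f"ROC area: {roc_auc_score(sick, scores):.2f}")

    A real replication would of course cross-validate the PLS components rather than scoring on the training children.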

  4. Cost-Effectiveness Model for Chemoimmunotherapy Options in Patients with Previously Untreated Chronic Lymphocytic Leukemia Unsuitable for Full-Dose Fludarabine-Based Therapy.

    Science.gov (United States)

    Becker, Ursula; Briggs, Andrew H; Moreno, Santiago G; Ray, Joshua A; Ngo, Phuong; Samanta, Kunal

    2016-06-01

    To evaluate the cost-effectiveness of treatment with anti-CD20 monoclonal antibody obinutuzumab plus chlorambucil (GClb) in untreated patients with chronic lymphocytic leukemia unsuitable for full-dose fludarabine-based therapy. A Markov model was used to assess the cost-effectiveness of GClb versus other chemoimmunotherapy options. The model comprised three mutually exclusive health states: "progression-free survival (with/without therapy)", "progression (refractory/relapsed lines)", and "death". Each state was assigned a health utility value representing patients' quality of life and a specific cost value. Comparisons between GClb and rituximab plus chlorambucil or only chlorambucil were performed using patient-level clinical trial data; other comparisons were performed via a network meta-analysis using information gathered in a systematic literature review. To support the model, a utility elicitation study was conducted from the perspective of the UK National Health Service. There was good agreement between the model-predicted progression-free and overall survival and that from the CLL11 trial. On incorporating data from the indirect treatment comparisons, it was found that GClb was cost-effective with a range of incremental cost-effectiveness ratios below a threshold of £30,000 per quality-adjusted life-year gained, and remained so during deterministic and probabilistic sensitivity analyses under various scenarios. GClb was estimated to increase both quality-adjusted life expectancy and treatment costs compared with several commonly used therapies, with incremental cost-effectiveness ratios below commonly referenced UK thresholds. This article offers a real example of how to combine direct and indirect evidence in a cost-effectiveness analysis of oncology drugs. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
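
    The three-state structure lends itself to a compact cohort simulation. The sketch below, in Python, evaluates discounted costs and QALYs for two illustrative transition matrices and reports an incremental cost-effectiveness ratio; all probabilities, costs and utilities are invented, not CLL11 trial values.

      # Minimal three-state Markov cohort sketch of the cost-effectiveness model:
      # states PFS, progression, death, with per-cycle costs and utilities.
      # All probabilities, costs and utilities are illustrative, not trial values.
      import numpy as np

      P = np.array([            # rows: from-state, cols: to-state (monthly cycle)
          [0.96, 0.03, 0.01],   # PFS -> PFS / progression / death
          [0.00, 0.93, 0.07],   # progression is absorbing except for death
          [0.00, 0.00, 1.00],
      ])
      cost = np.array([1200.0, 800.0, 0.0])    # cost per cycle in each state
      utility = np.array([0.80, 0.60, 0.0])    # QALY weight per state

      def evaluate(P, cycles=120, discount=0.035 / 12):
          state = np.array([1.0, 0.0, 0.0])    # whole cohort starts in PFS
          total_cost = total_qaly = 0.0
          for t in range(cycles):
              d = 1.0 / (1.0 + discount) ** t
              total_cost += d * state @ cost
              total_qaly += d * state @ utility / 12.0   # monthly cycles
              state = state @ P
          return total_cost, total_qaly

      c_new, q_new = evaluate(P)
      c_old, q_old = evaluate(np.array([[0.93, 0.05, 0.02],
                                        [0.00, 0.93, 0.07],
                                        [0.00, 0.00, 1.00]]))
      icer = (c_new - c_old) / (q_new - q_old)
      print(f"ICER: {icer:,.0f} per QALY gained")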

  5. Development and Analysis of Patient-Based Complete Conducting Airways Models.

    Directory of Open Access Journals (Sweden)

    Rafel Bordas

    Full Text Available The analysis of high-resolution computed tomography (CT) images of the lung is dependent on inter-subject differences in airway geometry. The application of computational models in understanding the significance of these differences has previously been shown to be a useful tool in biomedical research. Studies using image-based geometries alone are limited to the analysis of the central airways, down to generation 6-10, as other airways are not visible on high-resolution CT. However, airways distal to this, often termed the small airways, are known to play a crucial role in common airway diseases such as asthma and chronic obstructive pulmonary disease (COPD). Other studies have incorporated an algorithmic approach to extrapolate CT segmented airways in order to obtain a complete conducting airway tree down to the level of the acinus. These models have typically been used for mechanistic studies, but also have the potential to be used in a patient-specific setting. In the current study, an image analysis and modelling pipeline was developed and applied to a number of healthy (n = 11) and asthmatic (n = 24) CT patient scans to produce complete patient-based airway models to the acinar level (mean terminal generation 15.8 ± 0.47). The resulting models are analysed in terms of morphometric properties and seen to be consistent with previous work. A number of global clinical lung function measures are compared to resistance predictions in the models to assess their suitability for use in a patient-specific setting. We show a significant difference (p < 0.01) in airways resistance at all tested flow rates in complete airway trees built using CT data from severe asthmatics (GINA 3-5) versus healthy subjects. Further, model predictions of airways resistance at all flow rates are shown to correlate with patient forced expiratory volume in one second (FEV1) (Spearman ρ = -0.65, p < 0.001) and, at low flow rates (0.00017 L/s), FEV1 over forced vital capacity (FEV1

  6. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  7. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate in finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcraft features. The other JNQD model is based on a convolution neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  8. Map-Based Channel Model for Urban Macrocell Propagation Scenarios

    Directory of Open Access Journals (Sweden)

    Jose F. Monserrat

    2015-01-01

    Full Text Available The evolution of LTE towards 5G has started and different research projects and institutions are in the process of verifying new technology components through simulations. Coordination between groups is strongly recommended and, in this sense, a common definition of test cases and simulation models is needed. The scope of this paper is to present a realistic channel model for urban macrocell scenarios. This model is map-based and takes into account the layout of buildings situated in the area under study. A detailed description of the model is given together with a comparison with other widely used channel models. The benchmark includes a measurement campaign in which the proposed model is shown to be much closer to the actual behavior of a cellular system. Particular attention is given to the outdoor component of the model, since this is where the proposed approach differs most from previous models.

  9. Bayesian inference based modelling for gene transcriptional dynamics by integrating multiple source of knowledge

    Directory of Open Access Journals (Sweden)

    Wang Shu-Qiang

    2012-07-01

    Full Text Available Abstract Background A key challenge in the post genome era is to identify genome-wide transcriptional regulatory networks, which specify the interactions between transcription factors and their target genes. Numerous methods have been developed for reconstructing gene regulatory networks from expression data. However, most of them are based on coarse grained qualitative models, and cannot provide a quantitative view of regulatory systems. Results A binding affinity based regulatory model is proposed to quantify the transcriptional regulatory network. Multiple quantities, including binding affinity and the activity level of transcription factor (TF), are incorporated into a general learning model. The sequence features of the promoter and the possible occupancy of nucleosomes are exploited to estimate the binding probability of regulators. Compared with previous models that employ only microarray data, the proposed model can bridge the gap between the relative background frequency of the observed nucleotide and the gene's transcription rate. Conclusions We test the proposed approach on two real-world microarray datasets. Experimental results show that the proposed model can effectively identify the parameters and the activity level of TF. Moreover, the kinetic parameters introduced in the proposed model reveal more biological insight than previous models do.

  10. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
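
    The record applies Extended-FAST to ASM2d; the variance-decomposition idea itself can be illustrated with a plain Monte Carlo estimator of first-order Sobol indices on a toy non-additive function, as in the sketch below (the toy model and sample sizes are assumptions, not the MBR study's settings).

      # Variance-based sensitivity in miniature: first-order Sobol indices for a
      # toy non-additive model (the record uses Extended-FAST on ASM2d; this is
      # only meant to illustrate the variance-decomposition idea).
      import numpy as np

      def model(x):
          # non-linear, non-additive toy "plant": interaction between x0 and x1
          return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + x[:, 0] * x[:, 1]

      rng = np.random.default_rng(1)
      n, k = 100_000, 3
      A = rng.uniform(-np.pi, np.pi, size=(n, k))
      B = rng.uniform(-np.pi, np.pi, size=(n, k))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(k):
          ABi = A.copy()
          ABi[:, i] = B[:, i]           # replace column i with the second sample
          fABi = model(ABi)
          S1 = np.mean(fB * (fABi - fA)) / var   # Saltelli-style estimator
          print(f"S{i}: {S1:.2f}")      # x2 is inert, so S2 should be near 0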

  11. Deformation Measurements of Gabion Walls Using Image Based Modeling

    Directory of Open Access Journals (Sweden)

    Marek Fraštia

    2014-06-01

    Full Text Available Image-based modeling finds use in applications where it is necessary to reconstruct the 3D surface of the observed object with a high level of detail. Previous experiments show relatively high variability of the results depending on the camera type used, the processing software, or the evaluation process. The authors tested the method of SFM (Structure from Motion) to determine the stability of gabion walls. The results of photogrammetric measurements were compared to precise geodetic point measurements.

  12. Re-evaluation of model-based light-scattering spectroscopy for tissue spectroscopy

    Science.gov (United States)

    Lau, Condon; Šćepanović, Obrad; Mirkovic, Jelena; McGee, Sasha; Yu, Chung-Chieh; Fulghum, Stephen; Wallace, Michael; Tunnell, James; Bechtel, Kate; Feld, Michael

    2009-01-01

    Model-based light scattering spectroscopy (LSS) seemed a promising technique for in-vivo diagnosis of dysplasia in multiple organs. In the studies, the residual spectrum, the difference between the observed and modeled diffuse reflectance spectra, was attributed to single elastic light scattering from epithelial nuclei, and diagnostic information due to nuclear changes was extracted from it. We show that this picture is incorrect. The actual single scattering signal arising from epithelial nuclei is much smaller than the previously computed residual spectrum, and does not have the wavelength dependence characteristic of Mie scattering. Rather, the residual spectrum largely arises from assuming a uniform hemoglobin distribution. In fact, hemoglobin is packaged in blood vessels, which alters the reflectance. When we include vessel packaging, which accounts for an inhomogeneous hemoglobin distribution, in the diffuse reflectance model, the reflectance is modeled more accurately, greatly reducing the amplitude of the residual spectrum. These findings are verified via numerical estimates based on light propagation and Mie theory, tissue phantom experiments, and analysis of published data measured from Barrett’s esophagus. In future studies, vessel packaging should be included in the model of diffuse reflectance and use of model-based LSS should be discontinued. PMID:19405760

  13. Estimation model for evaporative emissions from gasoline vehicles based on thermodynamics.

    Science.gov (United States)

    Hata, Hiroo; Yamada, Hiroyuki; Kokuryo, Kazuo; Okada, Megumi; Funakubo, Chikage; Tonokura, Kenichi

    2018-03-15

    In this study, we conducted seven-day diurnal breathing loss (DBL) tests on gasoline vehicles. We propose a model based on the theory of thermodynamics that can represent the experimental results of the current and previous studies. The experiments were performed using 14 physical parameters to determine the dependence of total emissions on temperature, fuel tank fill, and fuel vapor pressure. In most cases, total emissions after an apparent breakthrough were proportional to the difference between minimum and maximum environmental temperatures during the day, fuel tank empty space, and fuel vapor pressure. Volatile organic compounds (VOCs) were measured using a Gas Chromatography Mass Spectrometer and Flame Ionization Detector (GC-MS/FID) to determine the Ozone Formation Potential (OFP) of after-breakthrough gas emitted to the atmosphere. Using the experimental results, we constructed a thermodynamic model for estimating the amount of evaporative emissions after a fully saturated canister breakthrough occurred, and a comparison between the thermodynamic model and previous models was made. Finally, the total annual evaporative emissions and OFP in Japan were determined and compared by each model. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Coastal aquifer management based on surrogate models and multi-objective optimization

    Science.gov (United States)

    Mantoglou, A.; Kourakos, G.

    2011-12-01

    The demand for fresh water in coastal areas and islands can be very high, especially in summer months, due to increased local needs and tourism. In order to satisfy demand, a combined management plan is proposed which involves: i) desalinization (if needed) of pumped water to a potable level using reverse osmosis and ii) injection of biologically treated waste water into the aquifer. The management plan is formulated into a multiobjective optimization framework, where simultaneous minimization of economic and environmental costs is desired; subject to a constraint to satisfy demand. The method requires modeling tools, which are able to predict the salinity levels of the aquifer in response to different alternative management scenarios. Variable density models can simulate the interaction between fresh and saltwater; however, they are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNN)]. The surrogate models are trained adaptively during optimization based on a Genetic Algorithm. In the crossover step of the genetic algorithm, each pair of parents generates a pool of offspring. All offspring are evaluated based on the fast surrogate model. Then only the most promising offspring are evaluated based on the exact numerical model. This eliminates errors in Pareto solution due to imprecise predictions of the surrogate model. Three new criteria for selecting the most promising offspring were proposed, which improve the Pareto set and maintain the diversity of the optimum solutions. The method has important advancements compared to previous methods, e.g. alleviation of propagation of errors due to surrogate model approximations. The method is applied to a real coastal aquifer in the island of Santorini which is a very touristy island with high water demands. The results show that the algorithm
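
    A schematic of the surrogate-assisted selection step described above: each parent pair generates an offspring pool, a cheap surrogate ranks it, and only the most promising candidates reach the expensive simulator. The nearest-neighbour surrogate and toy objective below are stand-ins for the modular neural networks and the variable-density model.

      # Sketch of surrogate-assisted offspring pre-screening in a GA: all
      # offspring are ranked with a cheap surrogate and only the best few are
      # evaluated with the expensive simulator (here a toy function).
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      def exact_model(x):               # stand-in for the variable-density model
          return np.sum((x - 0.3) ** 2)

      rng = np.random.default_rng(2)
      archive_X = rng.uniform(size=(50, 4))                 # past exact evaluations
      archive_y = np.array([exact_model(x) for x in archive_X])
      surrogate = KNeighborsRegressor(n_neighbors=5).fit(archive_X, archive_y)

      parents = rng.uniform(size=(2, 4))
      pool = np.clip(parents.mean(axis=0) + rng.normal(scale=0.1, size=(20, 4)), 0, 1)

      cheap = surrogate.predict(pool)               # fast screening of all offspring
      best = pool[np.argsort(cheap)[:3]]            # keep the 3 most promising
      expensive = [exact_model(x) for x in best]    # exact model only on survivors
      print("exact objective of survivors:", np.round(expensive, 3))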

  15. Hot news recommendation system from heterogeneous websites based on bayesian model.

    Science.gov (United States)

    Xia, Zhengyou; Xu, Shengwu; Liu, Ningzhong; Zhao, Zhengkang

    2014-01-01

    Most current news recommenders are designed for news that comes from a single news website, not for news from different heterogeneous news websites. Previous research on news recommender systems has proposed different strategies to provide news personalization services for online news readers. However, little work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staffs). In this paper, we propose a hot news recommendation model based on a Bayesian model, which draws on hundreds of different news websites. In the model, we determine whether the news is hot news by calculating the joint probability of the news. We evaluate and compare our proposed recommendation model with the results of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, which achieved very good results.
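
    The joint-probability step can be sketched as a naive Bayes hotness score; the features, likelihoods and prior below are invented for illustration and are not taken from the paper.

      # Sketch of the Bayesian hotness score: treat "hot" as a class and combine
      # per-feature likelihoods into a joint (naive Bayes) probability.
      from math import prod

      # P(feature value | hot) and P(feature value | not hot), from training counts
      likelihood = {
          "reported_by_many_sites": (0.8, 0.2),
          "on_front_page":          (0.7, 0.3),
          "burst_in_comments":      (0.6, 0.1),
      }
      p_hot = 0.1                        # prior probability that any story is hot

      def hotness(features):
          """Posterior P(hot | observed features) under independence."""
          l_hot = prod(likelihood[f][0] for f in features)
          l_not = prod(likelihood[f][1] for f in features)
          return p_hot * l_hot / (p_hot * l_hot + (1 - p_hot) * l_not)

      print(round(hotness(["reported_by_many_sites", "burst_in_comments"]), 3))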

  16. 6D F-theory models and elliptically fibered Calabi-Yau threefolds over semi-toric base surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Martini, Gabriella; Taylor, Washington [Center for Theoretical Physics, Department of Physics, Massachusetts Institute of Technology,77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2015-06-10

    We carry out a systematic study of a class of 6D F-theory models and associated Calabi-Yau threefolds that are constructed using base surfaces with a generalization of toric structure. In particular, we determine all smooth surfaces with a structure invariant under a single ℂ∗ action (sometimes called “T-varieties” in the mathematical literature) that can act as bases for an elliptic fibration with section of a Calabi-Yau threefold. We identify 162,404 distinct bases, which include as a subset the previously studied set of strictly toric bases. Calabi-Yau threefolds constructed in this fashion include examples with previously unknown Hodge numbers. There are also bases over which the generic elliptic fibration has a Mordell-Weil group of sections with nonzero rank, corresponding to non-Higgsable U(1) factors in the 6D supergravity model; this type of structure does not arise for generic elliptic fibrations in the purely toric context.

  17. 6D F-theory models and elliptically fibered Calabi-Yau threefolds over semi-toric base surfaces

    International Nuclear Information System (INIS)

    Martini, Gabriella; Taylor, Washington

    2015-01-01

    We carry out a systematic study of a class of 6D F-theory models and associated Calabi-Yau threefolds that are constructed using base surfaces with a generalization of toric structure. In particular, we determine all smooth surfaces with a structure invariant under a single ℂ∗ action (sometimes called “T-varieties” in the mathematical literature) that can act as bases for an elliptic fibration with section of a Calabi-Yau threefold. We identify 162,404 distinct bases, which include as a subset the previously studied set of strictly toric bases. Calabi-Yau threefolds constructed in this fashion include examples with previously unknown Hodge numbers. There are also bases over which the generic elliptic fibration has a Mordell-Weil group of sections with nonzero rank, corresponding to non-Higgsable U(1) factors in the 6D supergravity model; this type of structure does not arise for generic elliptic fibrations in the purely toric context.

  18. Refinement of protein termini in template-based modeling using conformational space annealing.

    Science.gov (United States)

    Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung

    2011-09-01

    The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable or superior to the best server methods tested in CASP8. The success of the current approach suggests that similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.

  19. Enabling full field physics based OPC via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-03-01

    As EUV lithography marches closer to reality for high volume production, its peculiar modeling challenges related to both inter- and intra-field effects have necessitated building OPC infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7nm and 5nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of EPE errors. The introduction of Dynamic Model Generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for EMF, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  20. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
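
    The model-comparison idea (a parsimonious self-report model versus the same model plus a performance measure, compared by AUC on held-out data) can be sketched as follows; the simulated variables only mimic NHATS items by name, and the coefficients are invented.

      # Sketch: compare AUC of a self-report-only logistic model against the
      # same model plus a physical performance measure, on simulated data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 6056
      age = rng.normal(75, 6, n)
      balance_problem = rng.binomial(1, 0.3, n)
      prior_fall = rng.binomial(1, 0.25, n)
      gait_speed = rng.normal(0.9, 0.2, n)          # the "performance" measure
      logit = -6 + 0.05 * age + 0.9 * balance_problem + 1.1 * prior_fall - 0.3 * gait_speed
      fell = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      simple = np.column_stack([age, balance_problem, prior_fall])
      full = np.column_stack([simple, gait_speed])
      for name, X in [("self-report only", simple), ("+ performance", full)]:
          Xtr, Xte, ytr, yte = train_test_split(X, fell, random_state=0)
          clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
          auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
          print(f"{name}: AUC = {auc:.2f}")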

  1. MODELING ATMOSPHERIC EMISSION FOR CMB GROUND-BASED OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Errard, J.; Borrill, J. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Ade, P. A. R. [School of Physics and Astronomy, Cardiff University, Cardiff CF10 3XQ (United Kingdom); Akiba, Y.; Chinone, Y. [High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801 (Japan); Arnold, K.; Atlas, M.; Barron, D.; Elleflot, T. [Department of Physics, University of California, San Diego, CA 92093-0424 (United States); Baccigalupi, C.; Fabbian, G. [International School for Advanced Studies (SISSA), Trieste I-34014 (Italy); Boettger, D. [Department of Astronomy, Pontifica Universidad Catolica de Chile (Chile); Chapman, S. [Department of Physics and Atmospheric Science, Dalhousie University, Halifax, NS, B3H 4R2 (Canada); Cukierman, A. [Department of Physics, University of California, Berkeley, CA 94720 (United States); Delabrouille, J. [AstroParticule et Cosmologie, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité (France); Dobbs, M.; Gilbert, A. [Physics Department, McGill University, Montreal, QC H3A 0G4 (Canada); Ducout, A.; Feeney, S. [Department of Physics, Imperial College London, London SW7 2AZ (United Kingdom); Feng, C. [Department of Physics and Astronomy, University of California, Irvine (United States); and others

    2015-08-10

    Atmosphere is one of the most important noise sources for ground-based cosmic microwave background (CMB) experiments. By increasing optical loading on the detectors, it amplifies their effective noise, while its fluctuations introduce spatial and temporal correlations between detected signals. We present a physically motivated 3D model of the atmosphere's total intensity emission at millimeter and sub-millimeter wavelengths. We derive a new analytical estimate for the correlation between detectors time-ordered data as a function of the instrument and survey design, as well as several atmospheric parameters such as wind, relative humidity, temperature and turbulence characteristics. Using an original numerical computation, we examine the effect of each physical parameter on the correlations in the time series of a given experiment. We then use a parametric-likelihood approach to validate the modeling and estimate atmosphere parameters from the polarbear-i project first season data set. We derive a new 1.0% upper limit on the linear polarization fraction of atmospheric emission. We also compare our results to previous studies and weather station measurements. The proposed model can be used for realistic simulations of future ground-based CMB observations.

  2. Towards Agent-Based Model Specification in Smart Grid: A Cognitive Agent-based Computing Approach

    OpenAIRE

    Akram, Waseem; Niazi, Muaz A.; Iantovics, Laszlo Barna

    2017-01-01

    A smart grid can be considered as a complex network where each node represents a generation unit or a consumer, whereas links can be used to represent transmission lines. One way to study complex systems is by using the agent-based modeling (ABM) paradigm. An ABM is a way of representing a complex system of autonomous agents interacting with each other. Previously, a number of studies have been presented in the smart grid domain making use of the ABM paradigm. However, to the best of our know...

  3. [Prevalence of previously diagnosed diabetes mellitus in Mexico].

    Science.gov (United States)

    Rojas-Martínez, Rosalba; Basto-Abreu, Ana; Aguilar-Salinas, Carlos A; Zárate-Rojas, Emiliano; Villalpando, Salvador; Barrientos-Gutiérrez, Tonatiuh

    2018-01-01

    To compare the prevalence of previously diagnosed diabetes in 2016 with previous national surveys and to describe treatment and its complications. Mexico's national surveys Ensa 2000, Ensanut 2006, 2012 and 2016 were used. For 2016, logistic regression models and measures of central tendency and dispersion were obtained. The prevalence of previously diagnosed diabetes in 2016 was 9.4%. The increase of 2.2% relative to 2012 was not significant and only observed in patients older than 60 years. While preventive measures have increased, the access to medical treatment and lifestyle has not changed. The treatment has been modified, with an increase in insulin and decrease in hypoglycaemic agents. Population aging, lack of screening actions and the increase in diabetes complications will lead to an increase on the burden of disease. Policy measures targeting primary and secondary prevention of diabetes are crucial.

  4. Acid-base chemistry of white wine: analytical characterisation and chemical modelling.

    Science.gov (United States)

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, with the total acidity, for the chemical modelling step of the study based on the contemporary treatment of overlapped protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the chemical model optimized is achieved by way of conductometric measurements and using a synthetic "wine" especially adapted for testing.
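
    The "contemporary treatment of overlapped protonation equilibria" amounts to solving a charge balance across all acid-base species. The Python sketch below does this for a mixture treated as monoprotic acids (first protonation step only); the pKa values and concentrations are illustrative, not the refined constants from the study.

      # Sketch of overlapped protonation equilibria: solve the charge balance
      # of a mixture of (simplified, monoprotic) acids for pH.
      from scipy.optimize import brentq

      acids = {"tartaric_1": (3.0, 2.0e-2),   # (pKa, total concentration mol/L)
               "malic_1":    (3.5, 3.0e-3),
               "lactic":     (3.9, 1.0e-3)}
      Kw = 1e-14

      def charge_balance(pH):
          h = 10 ** -pH
          oh = Kw / h
          anions = 0.0
          for pKa, c_tot in acids.values():
              Ka = 10 ** -pKa
              anions += c_tot * Ka / (Ka + h)   # fraction dissociated at this pH
          return h - oh - anions                # zero when charges balance

      pH = brentq(charge_balance, 1.0, 13.0)
      print(f"model pH: {pH:.2f}")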

  5. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    Directory of Open Access Journals (Sweden)

    Enrico Prenesti

    2012-01-01

    Full Text Available A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, with the total acidity, for the chemical modelling step of the study based on the contemporary treatment of overlapped protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the chemical model optimized is achieved by way of conductometric measurements and using a synthetic “wine” especially adapted for testing.

  6. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    Science.gov (United States)

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, with the total acidity, for the chemical modelling step of the study based on the contemporary treatment of overlapped protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the chemical model optimized is achieved by way of conductometric measurements and using a synthetic “wine” especially adapted for testing. PMID:22566762

  7. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    Science.gov (United States)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. In other words, each user profile is represented as a matrix of credibility to the other users' evaluations in each domain of interest. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domains of interest instead of the previous method, which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated
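
    The shared-importance criterion can be sketched with cosine similarities between personalized interest vectors and document concept vectors: a document counts as "shared" only when every user scores it as important. The concept names, vectors and threshold below are invented for illustration.

      # Sketch of shared importance: a document is shared between users only
      # when each user's personalized scoring rates it above a threshold.
      import numpy as np

      concepts = ["travel", "ml", "cooking"]
      docs = {"d1": np.array([0.1, 0.9, 0.0]),     # document -> concept vector
              "d2": np.array([0.8, 0.0, 0.2])}

      # each user's interest in the same concepts differs (personal concept-base)
      interests = {"alice": np.array([0.0, 1.0, 0.1]),
                   "bob":   np.array([0.2, 0.8, 0.0])}

      def importance(user, doc):
          u, d = interests[user], docs[doc]
          return float(u @ d / (np.linalg.norm(u) * np.linalg.norm(d)))

      for doc in docs:
          shared = all(importance(u, doc) > 0.5 for u in interests)
          print(doc, {u: round(importance(u, doc), 2) for u in interests},
                "shared" if shared else "not shared")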

  8. Development and evaluation of a physics-based windblown dust emission scheme implemented in the CMAQ modeling system

    Science.gov (United States)

    A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...

  9. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  10. Beyond accessibility? Toward an on-line and memory-based model of framing effects

    OpenAIRE

    Matthes, Jörg

    2007-01-01

    This theoretical article investigates the effects of media frames on individuals' judgments. In contrast to previous theorizing, we suggest that framing scholars should embrace both on-line and memory-based judgment formation processes. Based on that premise, we propose a model that distinguishes between two phases of framing effects. Along the first phase, the media's framing contributes to the formation of an on-line or a memory-based judgment. The second phase describes six hypothetical r...

  11. Multi-day activity scheduling reactions to planned activities and future events in a dynamic agent-based model of activity-travel behavior

    NARCIS (Netherlands)

    Nijland, E.W.L.; Arentze, T.A.; Timmermans, H.J.P.

    2009-01-01

    Modeling multi-day planning has received scant attention to date in activity-based transport demand modeling. Elaborating and combining previous work on event-driven activity generation, the aim of this paper is to develop and illustrate an extension of a need-based model of activity generation that

  12. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
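
    A minimal sketch of a parametric (synthetic) likelihood inside a Metropolis-Hastings loop: summary statistics of repeated stochastic model runs are approximated by a Gaussian whose density at the observed summaries serves as the likelihood. The toy stand model below is a stand-in for FORMIND, and all settings are assumptions.

      # Simulation-based (parametric) likelihood approximation inside MCMC.
      import numpy as np

      rng = np.random.default_rng(4)

      def simulate(growth, n_years=50):
          """Toy stochastic stand model: returns summary statistics (mean, sd)."""
          biomass = 1.0 + np.cumsum(rng.normal(growth, 0.5, n_years))
          return np.array([biomass.mean(), biomass.std()])

      obs = simulate(0.8)                      # pretend these are field summaries

      def log_synthetic_likelihood(theta, n_rep=40):
          sims = np.array([simulate(theta) for _ in range(n_rep)])
          mu, cov = sims.mean(axis=0), np.cov(sims.T) + 1e-6 * np.eye(2)
          diff = obs - mu
          return -0.5 * (diff @ np.linalg.solve(cov, diff) +
                         np.log(np.linalg.det(cov)))

      theta, ll = 0.3, log_synthetic_likelihood(0.3)
      chain = []
      for _ in range(500):                     # Metropolis-Hastings over 'growth'
          prop = theta + rng.normal(scale=0.1)
          ll_prop = log_synthetic_likelihood(prop)
          if np.log(rng.uniform()) < ll_prop - ll:
              theta, ll = prop, ll_prop
          chain.append(theta)
      print("posterior mean growth:", round(np.mean(chain[100:]), 2))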

  13. A Rough Set-Based Model of HIV-1 Reverse Transcriptase Resistome

    Directory of Open Access Journals (Sweden)

    Marcin Kierczak

    2009-10-01

    Full Text Available Reverse transcriptase (RT) is a viral enzyme crucial for HIV-1 replication. Currently, 12 drugs are targeted against the RT. The low fidelity of the RT-mediated transcription leads to the quick accumulation of drug-resistance mutations. The sequence-resistance relationship remains only partially understood. Using publicly available data collected from over 15 years of HIV proteome research, we have created a general and predictive rule-based model of HIV-1 resistance to eight RT inhibitors. Our rough set-based model considers changes in the physicochemical properties of a mutated sequence as compared to the wild-type strain. Thanks to the application of the Monte Carlo feature selection method, the model takes into account only the properties that significantly contribute to the resistance phenomenon. The obtained results show that drug-resistance is determined in a more complex way than previously believed. We confirmed the importance of many resistance-associated sites, found some sites to be less relevant than formerly postulated and, more importantly, identified several previously neglected sites as potentially relevant. By mapping some of the newly discovered sites on the 3D structure of the RT, we were able to suggest possible molecular mechanisms of drug-resistance. Importantly, our model has the ability to generalize predictions to previously unseen cases. The study is an example of how computational biology methods can increase our understanding of the HIV-1 resistome.

  14. A Parameter-based Model for Generating Culturally Adaptive Nonverbal Behaviors in Embodied Conversational Agents

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2009-01-01

    The goal of this paper is to integrate culture as a computational term in embodied conversational agents by employing an empirical data-driven approach as well as a theoretical model-driven approach. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific ... cultures. First, we introduce the Hofstede theory to describe socio-cultural characteristics of each country. Then, based on the previous studies in cultural differences of nonverbal behaviors, we propose expressive parameters to characterize nonverbal behaviors. Finally, by integrating socio-cultural...

  15. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)

  16. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is by a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine http://www.pkquest.com, was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that the blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant infusion phase for each individual subject. In order to fit the bolus injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%; similar to the WRE for just the constant infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a
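
    The flow-limited, partition-based structure can be sketched as a small ODE system: a lipid-rich fat compartment with a large partition coefficient, plus liver clearance as the single fitted parameter. The volumes, flows and coefficients below are illustrative, not PKQuest's "standard human" values.

      # Sketch of a flow-limited PBPK model: blood, lean tissue and fat,
      # with tissue:blood partitioning and liver clearance from blood.
      import numpy as np
      from scipy.integrate import solve_ivp

      V = {"blood": 5.0, "lean": 35.0, "fat": 10.0}      # litres
      Q = {"lean": 4.0, "fat": 0.4}                       # blood flow, L/min
      K = {"lean": 3.0, "fat": 50.0}                      # tissue:blood partition
      CL = 1.5                                            # liver clearance, L/min

      def rhs(t, y):
          cb, c_lean, c_fat = y
          dlean = Q["lean"] * (cb - c_lean / K["lean"]) / V["lean"]
          dfat = Q["fat"] * (cb - c_fat / K["fat"]) / V["fat"]
          dblood = (Q["lean"] * (c_lean / K["lean"] - cb)
                    + Q["fat"] * (c_fat / K["fat"] - cb)
                    - CL * cb) / V["blood"]
          return [dblood, dlean, dfat]

      bolus = 140.0                                       # mg, e.g. 2 mg/kg
      sol = solve_ivp(rhs, (0, 120), [bolus / V["blood"], 0, 0], dense_output=True)
      print("blood conc at 2, 30, 120 min:",
            np.round(sol.sol([2, 30, 120])[0], 2), "mg/L")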

  17. Function-based payment model for inpatient medical rehabilitation: an evaluation.

    Science.gov (United States)

    Sutton, J P; DeJong, G; Wilkerson, D

    1996-07-01

    To describe the components of a function-based prospective payment model for inpatient medical rehabilitation that parallels diagnosis-related groups (DRGs), to evaluate this model in relation to stakeholder objectives, and to detail the components of a quality of care incentive program that, when combined with this payment model, creates an incentive for providers to maximize functional outcomes. This article describes a conceptual model, involving no data collection or data synthesis. The basic payment model described parallels DRGs. Information on the potential impact of this model on medical rehabilitation is gleaned from the literature evaluating the impact of DRGs. The conceptual model described is evaluated against the results of a Delphi Survey of rehabilitation providers, consumers, policymakers, and researchers previously conducted by members of the research team. The major shortcoming of a function-based prospective payment model for inpatient medical rehabilitation is that it contains no inherent incentive to maximize functional outcomes. Linkage of reimbursement to outcomes, however, by withholding a fixed proportion of the standard FRG payment amount, placing that amount in a "quality of care" pool, and distributing that pool annually among providers whose predesignated, facility-level, case-mix-adjusted outcomes are attained, may be one strategy for maximizing outcome goals.
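
    A worked example of the quality-of-care withhold described above: a fixed fraction of each payment is pooled and redistributed among facilities that meet their case-mix-adjusted outcome targets. All dollar figures and facility names are invented.

      # Worked example of the "quality of care" withhold-and-redistribute pool.
      WITHHOLD = 0.05
      payments = {"A": 2_000_000, "B": 1_500_000, "C": 1_000_000}
      met_target = {"A": True, "B": False, "C": True}

      pool = sum(p * WITHHOLD for p in payments.values())
      qualifying = sum(payments[f] for f, ok in met_target.items() if ok)
      for f, p in payments.items():
          base = p * (1 - WITHHOLD)
          bonus = pool * payments[f] / qualifying if met_target[f] else 0.0
          print(f"facility {f}: base {base:,.0f}, quality payment {bonus:,.0f}")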

  18. Site selection model for new metro stations based on land use

    Science.gov (United States)

    Zhang, Nan; Chen, Xuewu

    2015-12-01

    Since the construction of a metro system generally lags behind the development of urban land use, sites of metro stations should adapt to their surrounding situations, an issue rarely discussed by previous research on station layout. This paper proposes a new site selection model to find the best location for a metro station, establishing an indicator system based on land use and combining AHP with the entropy weight method to obtain the schemes' ranking. The feasibility and efficiency of this model have been validated by evaluating Nanjing Shengtai Road station and other potential sites.
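
    The entropy-weight step can be sketched in a few lines: objective weights are derived from the dispersion of each indicator across candidate sites and blended with subjective AHP weights before scoring. The decision matrix, AHP weights and blending rule below are assumptions for illustration, not the paper's indicator system.

      # Sketch of AHP + entropy-weight site ranking on a toy decision matrix.
      import numpy as np

      # rows: candidate sites, cols: land-use indicators (benefit-oriented)
      X = np.array([[0.7, 0.4, 0.9],
                    [0.5, 0.8, 0.6],
                    [0.9, 0.6, 0.3]])
      P = X / X.sum(axis=0)                          # normalize each indicator
      n = X.shape[0]
      entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)
      w_entropy = (1 - entropy) / (1 - entropy).sum()

      w_ahp = np.array([0.5, 0.3, 0.2])              # pairwise-comparison weights
      w = 0.5 * w_ahp + 0.5 * w_entropy              # one simple blending rule
      scores = X @ w
      print("weights:", np.round(w, 3))
      print("ranking:", np.argsort(scores)[::-1])    # best site first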

  19. Modelling mass and heat transfer in nano-based cancer hyperthermia.

    Science.gov (United States)

    Nabil, M; Decuzzi, P; Zunino, P

    2015-10-01

    We derive a sophisticated mathematical model for coupled heat and mass transport in the tumour microenvironment and we apply it to study nanoparticle delivery and hyperthermic treatment of cancer. The model has the unique ability of combining the following features: (i) realistic vasculature; (ii) coupled capillary and interstitial flow; (iii) coupled capillary and interstitial mass transfer applied to nanoparticles; and (iv) coupled capillary and interstitial heat transfer, which are the fundamental mechanisms governing nano-based hyperthermic treatment. This is an improvement with respect to previous modelling approaches, where the effect of blood perfusion on heat transfer is modelled in a spatially averaged form. We analyse the time evolution and the spatial distribution of particles and temperature in a tumour mass treated with superparamagnetic nanoparticles excited by an alternating magnetic field. By means of numerical experiments, we synthesize scaling laws that illustrate how nano-based hyperthermia depends on tumour size and vascularity. In particular, we identify two distinct mechanisms that regulate the distribution of particle and temperature, which are characterized by perfusion and diffusion, respectively.

  20. Model-based quality assessment and base-calling for second-generation sequencing data.

    Science.gov (United States)

    Bravo, Héctor Corrada; Irizarry, Rafael A

    2010-09-01

    Second-generation sequencing (sec-gen) technology can sequence millions of short fragments of DNA in parallel, making it capable of assembling complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to fully sequence the genomes of approximately 1200 people. The prospect of comparative analysis at the sequence level of a large number of samples across multiple populations may be achieved within the next five years. These data present unprecedented challenges in statistical analysis. For instance, analysis operates on millions of short nucleotide sequences, or reads (strings of A, C, G, or T's between 30 and 100 characters long), which are the result of complex processing of noisy continuous fluorescence intensity measurements known as base-calling. The complexity of the base-calling discretization process results in reads of widely varying quality within and across sequence samples. This variation in processing quality results in infrequent but systematic errors that we have found to mislead downstream analysis of the discretized sequence read data. For instance, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, small error rates in sequencing prove significant, especially for rare variants. Sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Therefore, modeling and quantifying the uncertainty inherent in the generation of sequence reads is of utmost importance. In this article, we present a simple model to capture uncertainty arising in the base-calling procedure of the Illumina/Solexa GA platform. Model parameters have a straightforward interpretation in terms of the chemistry of base-calling allowing for informative and easily interpretable metrics that capture the variability in

  1. Polyglutamine Disease Modeling: Epitope Based Screen for Homologous Recombination using CRISPR/Cas9 System.

    Science.gov (United States)

    An, Mahru C; O'Brien, Robert N; Zhang, Ningzhe; Patra, Biranchi N; De La Cruz, Michael; Ray, Animesh; Ellerby, Lisa M

    2014-04-15

    We have previously reported the genetic correction of Huntington's disease (HD) patient-derived induced pluripotent stem cells using traditional homologous recombination (HR) approaches. To extend this work, we have adopted a CRISPR-based genome editing approach to improve the efficiency of recombination in order to generate allelic isogenic HD models in human cells. Incorporation of a rapid antibody-based screening approach to measure recombination provides a powerful method to determine relative efficiency of genome editing for modeling polyglutamine diseases or understanding factors that modulate CRISPR/Cas9 HR.

  2. Complex Behavior in an Integrate-and-Fire Neuron Model Based on Small World Networks

    International Nuclear Information System (INIS)

    Lin Min; Chen Tianlun

    2005-01-01

    Based on our previous pulse-coupled integrate-and-fire neuron model in small-world networks, we investigate the complex behavior of electroencephalographic (EEG)-like activities produced by such a model. We find that EEG-like activities have obvious chaotic characteristics. We also analyze the EEG-like signals with methods such as spectral analysis, reconstruction of the phase space, the correlation dimension, and so on.
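
    A minimal sketch of a pulse-coupled integrate-and-fire population on a Watts-Strogatz small-world graph, with the population-averaged potential playing the role of the EEG-like signal analysed in the record; all parameters are illustrative, not the paper's settings.

      # Pulse-coupled integrate-and-fire neurons on a small-world network; the
      # mean membrane potential serves as an EEG-like population signal.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(5)
      G = nx.watts_strogatz_graph(n=200, k=4, p=0.1, seed=5)
      neighbors = [list(G.neighbors(i)) for i in range(200)]

      v = rng.uniform(0, 1, 200)          # membrane potentials
      threshold, leak, drive, coupling = 1.0, 0.98, 0.03, 0.12
      eeg = []
      for t in range(2000):
          v = leak * v + drive + 0.01 * rng.normal(size=200)
          fired = v >= threshold
          for i in np.where(fired)[0]:    # pulse coupling to network neighbours
              v[neighbors[i]] += coupling / len(neighbors[i])
          v[fired] = 0.0                  # reset after firing
          eeg.append(v.mean())            # EEG-like population signal
      print("signal mean/std:", round(np.mean(eeg), 3), round(np.std(eeg), 3))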

  3. A physiologically based kinetic model for bacterial sulfide oxidation.

    Science.gov (United States)

    Klok, Johannes B M; de Graaff, Marco; van den Bosch, Pim L F; Boelee, Nadine C; Keesman, Karel J; Janssen, Albert J H

    2013-02-01

    In the biotechnological process for hydrogen sulfide removal from gas streams, a variety of oxidation products can be formed. Under natron-alkaline conditions, sulfide is oxidized by haloalkaliphilic sulfide oxidizing bacteria via flavocytochrome c oxidoreductase. From previous studies, it was concluded that the oxidation-reduction state of cytochrome c is a direct measure for the bacterial end-product formation. Given this physiological feature, incorporation of the oxidation state of cytochrome c in a mathematical model for the bacterial oxidation kinetics will yield a physiologically based model structure. This paper presents a physiologically based model, describing the dynamic formation of the various end-products in the biodesulfurization process. It consists of three elements: 1) Michaelis-Menten kinetics combined with 2) a cytochrome c driven mechanism describing 3) the rate determining enzymes of the respiratory system of haloalkaliphilic sulfide oxidizing bacteria. The proposed model is successfully validated against independent data obtained from biological respiration tests and bench scale gas-lift reactor experiments. The results demonstrate that the model is a powerful tool to describe product formation for haloalkaliphilic biomass under dynamic conditions. The model predicts a maximum S⁰ formation of about 98 mol%. A future challenge is the optimization of this bioprocess by improving the dissolved oxygen control strategy and reactor design. Copyright © 2012 Elsevier Ltd. All rights reserved.
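
    The physiologically based rate law can be sketched as Michaelis-Menten sulfide uptake whose product split between elemental sulfur and sulfate follows the reduction state of cytochrome c; the switching function and all parameter values below are assumptions for illustration, not the calibrated model.

      # Michaelis-Menten sulfide uptake with a cytochrome-c-driven product split.
      q_max, Km = 1.8, 0.05      # illustrative: mmol S/(mg N.h), mmol/L

      def rates(sulfide, oxygen):
          q = q_max * sulfide / (Km + sulfide)          # Michaelis-Menten uptake
          # assumed proxy: reduced cytochrome c fraction rises when sulfide
          # outpaces oxygen supply
          red = sulfide / (sulfide + oxygen + 1e-9)
          to_sulfur = q * red                            # S0 route
          to_sulfate = q * (1 - red)                     # SO4^2- route
          return to_sulfur, to_sulfate

      for s, o in [(0.2, 0.02), (0.05, 0.2)]:
          s0, so4 = rates(s, o)
          print(f"sulfide={s}, O2={o}: S0 {s0:.2f}, sulfate {so4:.2f}")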

  4. Agent-based modeling of the energy network for hybrid cars

    International Nuclear Information System (INIS)

    Gonzalez de Durana, José María; Barambones, Oscar; Kremers, Enrique; Varga, Liz

    2015-01-01

    Highlights:
    • An approach to represent and calculate multicarrier energy networks has been developed.
    • It provides an agent-based modeling method for multicarrier energy networks.
    • It allows the system to be represented on a single sheet.
    • Energy flows circulating in the system can be observed dynamically during simulation.
    • The method is technology independent.

    Abstract: Studies in complex energy networks devoted to the modeling of electrical power grids were extended in previous work, where a computational multi-layered ontology, implemented using agent-based methods, was adopted. This structure is compatible with the recently introduced Multiplex Networks, which, using multilinear algebra, generalize some classical results for single-layer networks to multilayer networks in steady state. Static results, however, are of limited help in understanding dynamic networks, in which the values of the variables in the nodes and edges can change suddenly, driven by events, and in which new nodes or edges may appear or disappear, also because of events. To address this gap, a computational agent-based model is developed to extend the multi-layer and multiplex approaches. In order to demonstrate the benefits of a dynamical extension, a model of the energy network in a hybrid car is presented as a case study.

  5. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    Full Text Available An unaware event referred to as kidnapping makes the estimation result of localization incorrect. In a previously unknown environment, an incorrect localization result causes an incorrect mapping result in Simultaneous Localization and Mapping (SLAM) under kidnapping. In this situation, the explored and unexplored areas are divided, making kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework to judge whether kidnapping has occurred and to identify the type of kidnapping with filter-based SLAM is proposed. The framework, called double kidnapping detection and recognition (DKDR), performs two checks before and after the "update" process with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the "update" process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied in existing filter-based SLAM algorithms. Furthermore, a technique to determine the adapted thresholds of the metrics in real time without prior data is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.

  6. Hot News Recommendation System from Heterogeneous Websites Based on Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhengyou Xia

    2014-01-01

    Full Text Available Most current news recommendation methods are suitable for news that comes from a single news website, not for news from different heterogeneous news websites. Previous research has proposed news recommender systems based on various strategies to provide personalized news services for online readers. However, little work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staff). In this paper, we propose a hot news recommendation model based on a Bayesian model, drawing on hundreds of different news websites. In the model, we determine whether a news item is hot news by calculating its joint probability. We evaluate and compare our proposed recommendation model with the judgments of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, where it achieved very good results.
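
    The paper's exact features are not specified in this abstract; as a generic illustration of scoring "hotness" by a joint probability under a naive Bayes assumption, a small sketch with a made-up toy corpus:

        import math
        from collections import Counter

        def train(docs, labels):
            counts = {1: Counter(), 0: Counter()}   # word counts per class (1=hot, 0=not)
            priors = Counter(labels)
            for words, y in zip(docs, labels):
                counts[y].update(words)
            return counts, priors

        def log_joint(words, y, counts, priors, vocab_size, alpha=1.0):
            """Laplace-smoothed log joint probability of (words, class y)."""
            total = sum(counts[y].values())
            lp = math.log(priors[y] / sum(priors.values()))
            for w in words:
                lp += math.log((counts[y][w] + alpha) / (total + alpha * vocab_size))
            return lp

        docs = [["earthquake", "rescue"], ["city", "festival"], ["earthquake", "damage"]]
        labels = [1, 0, 1]
        counts, priors = train(docs, labels)
        vocab = {w for d in docs for w in d}
        query = ["earthquake", "city"]
        print(log_joint(query, 1, counts, priors, len(vocab)) >
              log_joint(query, 0, counts, priors, len(vocab)))   # True -> flag as hot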

  7. A novel pH-responsive hydrogel-based on calcium alginate engineered by the previous formation of polyelectrolyte complexes (PECs) intended to vaginal administration.

    Science.gov (United States)

    Ferreira, Natália Noronha; Perez, Taciane Alvarenga; Pedreiro, Liliane Neves; Prezotti, Fabíola Garavello; Boni, Fernanda Isadora; Cardoso, Valéria Maria de Oliveira; Venâncio, Tiago; Gremião, Maria Palmira Daflon

    2017-10-01

    This work aimed to develop a calcium alginate hydrogel as a pH-responsive delivery system for polymyxin B (PMX) sustained release through the vaginal route. Two samples of sodium alginate from different suppliers were characterized. The molecular weight and M/G ratio determined were approximately 107 kDa and 1.93 for alginate_S, and 32 kDa and 1.36 for alginate_V. Polymer rheological investigations were further performed through the preparation of hydrogels. Alginate_V was selected for subsequent incorporation of PMX because it yielded a pseudoplastic viscous system able to acquire a differentiated structure in a simulated vaginal microenvironment (pH 4.5). The PMX-loaded hydrogel (hydrogel_PMX) was engineered based on polyelectrolyte complex (PEC) formation between alginate and PMX, followed by crosslinking with calcium chloride. This system exhibited a morphology with variable pore sizes, ranging from 100 to 200 μm, and adequate syringeability. The liquid uptake ability of the hydrogel in an acid environment was minimized by the previous PEC formation. In vitro tests evidenced the hydrogels' mucoadhesiveness. PMX release was pH-dependent, and the system was able to sustain the release for up to 6 days. A burst release was observed at pH 7.4, where drug release was driven by anomalous transport, as determined by the Korsmeyer-Peppas model. At pH 4.5, drug release correlated with the Weibull model, and drug transport was driven by Fickian diffusion. The calcium alginate hydrogels engineered by the previous formation of PECs showed promise as a platform for the sustained release of cationic drugs through vaginal administration.
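
    For reference, the Korsmeyer-Peppas model cited above takes the form Mt/Minf = k·t^n, with the exponent n characterizing the release mechanism (Fickian diffusion versus anomalous transport). A minimal fitting sketch; the data points below are invented solely to illustrate the fit and are not from the paper:

        import numpy as np
        from scipy.optimize import curve_fit

        def korsmeyer_peppas(t, k, n):
            return k * t**n

        t = np.array([1, 2, 4, 8, 12, 24])                       # time, h (hypothetical)
        frac_released = np.array([0.08, 0.12, 0.18, 0.27, 0.33, 0.48])
        (k, n), _ = curve_fit(korsmeyer_peppas, t, frac_released, p0=(0.1, 0.5))
        print(f"k={k:.3f}, n={n:.2f}")   # n is read against geometry-dependent thresholds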

  8. Clear-sky classification procedures and models using a world-wide data-base

    International Nuclear Information System (INIS)

    Younes, S.; Muneer, T.

    2007-01-01

    Clear-sky data need to be extracted from all-sky measured solar-irradiance datasets, often by using algorithms that rely on other measured meteorological parameters. Current procedures for clear-sky data extraction have been examined and compared with each other to determine their reliability and location dependency. New clear-sky determination algorithms are proposed that are based on a combination of clearness index, diffuse ratio, cloud cover and Linke turbidity limits. Various researchers have proposed clear-sky irradiance models that rely on synoptic parameters; four of these models, MRM, PRM, YRM and REST2, have been compared for six worldwide locations. Based on a previously developed comprehensive accuracy scoring method, the models MRM, REST2 and YRM were found to be of satisfactory performance, in decreasing order. The so-called Page radiation model (PRM) was found to underestimate solar radiation, even though local turbidity data were provided for its operation.

  9. Community Based Educational Model on Water Conservation Program

    Science.gov (United States)

    Sudiajeng, L.; Parwita, I. G. L.; Wiraga, I. W.; Mudhina, M.

    2018-01-01

    Previous research showed indicators of a water crisis in the northern and eastern parts of Denpasar city, and most of the coastal area experienced seawater intrusion. The recommended water conservation programs were rainwater harvesting and educating the community to develop a water-saving and environmentally conscious culture. This research was conducted to build a community-based educational model for the water conservation program through the ergonomics SHIP approach, which places the human aspect first, alongside the economic and technical aspects. Stakeholders were involved in the program from the problem analysis through implementation and maintenance. The model was built through three main steps: determination of an accepted design; building the recharge wells by involving local communities; and guidance and assistance in developing a water-saving and environmentally conscious culture for early childhood, elementary and junior high school students, the community, and industry. The program was implemented based on the "TRI HITA KARANA" concept, which refers to the relationships between human and God, human and human, and human and environment. Through the development of the model, the community is expected to grow a sense of belonging and awareness to maintain the sustainability of the program.

  10. Tolerance-based interaction: a new model targeting opinion formation and diffusion in social networks

    Directory of Open Access Journals (Sweden)

    Alexandru Topirceanu

    2016-01-01

    Full Text Available One of the main motivations behind social network analysis is the quest for understanding opinion formation and diffusion. Previous models have limitations, as they typically assume opinion interaction mechanisms based on thresholds which are either fixed or evolve according to a random process that is external to the social agent. Indeed, our empirical analysis on large real-world datasets such as Twitter, Meme Tracker, and Yelp uncovers previously unaccounted-for dynamic phenomena at the population level, namely the existence of distinct opinion formation phases and social balancing. We also reveal that a phase transition from an erratic behavior to social balancing can be triggered by network topology and by the ratio of opinion sources. Consequently, in order to build a model that properly accounts for these phenomena, we propose a new (individual-level) opinion interaction model based on tolerance. As opposed to the existing opinion interaction models, the new tolerance model assumes that an individual's inner willingness to accept new opinions evolves over time according to basic human traits. Finally, by employing discrete event simulation on diverse social network topologies, we validate our opinion interaction model and show that, although the network size and opinion source ratio are important, the phase transition to social balancing is mainly fostered by the democratic structure of the small-world topology.

  11. Deployment-based lifetime optimization model for homogeneous Wireless Sensor Network under retransmission.

    Science.gov (United States)

    Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning

    2014-12-10

    Sensor-deployment-based lifetime optimization is one of the most effective methods used to prolong the lifetime of a Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed under constraints on coverage, connectivity and transmission success rate. Based on the data transmission analysis in a data-gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in lifetime optimization. In particular, our investigations indicate that, with the same lifetime requirement, the number of sensors needed in a non-uniform topology is much less than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.
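
    A sketch of how retransmission enters such an energy budget: with per-hop success probability p, the expected number of transmissions of one packet is 1/p, which scales the distance-sensitive radio cost. The constants follow the commonly used first-order radio model and, like the figures below, are illustrative rather than the paper's values:

        E_ELEC = 50e-9       # J/bit, transceiver electronics (assumed)
        EPS_AMP = 100e-12    # J/bit/m^2, amplifier (assumed)

        def energy_per_packet(bits, d, p_success):
            expected_tx = 1.0 / p_success                # geometric number of retries
            e_tx = bits * (E_ELEC + EPS_AMP * d**2)      # one transmission over distance d
            e_rx = bits * E_ELEC                         # one reception
            return expected_tx * (e_tx + e_rx)

        print(energy_per_packet(bits=4096, d=80.0, p_success=0.9))  # joules per delivered packet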

  12. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches.

  13. A bootstrap based space-time surveillance model with an application to crime occurrences

    Science.gov (United States)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population-at-risk data to generate expected values, produce hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. This study, by contrast, generates expected values for local hotspots from past occurrences rather than population at risk, and bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restrictions, (2) enables more frequent surveillance for continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.
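
    A minimal sketch of the bootstrap idea described above: compare the current count in a local space-time window against counts resampled from that window's past occurrences. The window construction and test statistic of the actual model are richer; the counts below are hypothetical:

        import numpy as np

        def bootstrap_pvalue(current_count, past_counts, n_boot=9999, seed=0):
            """One-sided p-value: how often resampled past counts reach the current count."""
            rng = np.random.default_rng(seed)
            boot = rng.choice(past_counts, size=n_boot, replace=True)
            return (1 + np.sum(boot >= current_count)) / (n_boot + 1)

        past = [3, 5, 4, 2, 6, 4, 3, 5, 4, 3]    # hypothetical weekly counts in one cell
        print(bootstrap_pvalue(current_count=9, past_counts=past))  # small -> emerging hotspot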

  14. A model for fine mapping in family based association studies.

    Science.gov (United States)

    Boehringer, Stefan; Pfeiffer, Ruth M

    2009-01-01

    Genome-wide association studies for complex diseases are typically followed by more focused characterization of the identified genetic region. We propose a latent class model to evaluate a candidate region with several measured markers using observations on families. The main goal is to estimate linkage disequilibrium (LD) between the observed markers and the putative true but unobserved disease locus in the region. Based on this model, we estimate the joint distribution of alleles at the observed markers and the unobserved true disease locus, and a penetrance parameter measuring the impact of the disease allele on disease risk. A family-specific random effect allows for varying baseline disease prevalences across families. We present a likelihood framework for our model and assess its properties in simulations. We apply the model to an Alzheimer's disease data set and confirm previous findings in the ApoE region.

  15. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on (i) developing a model for the information processing of NPP operators and (ii) quantifying that model. To resolve the problems of previous information-theoretic approaches, i.e., the problems of single-channel approaches, we first develop a multi-stage information processing model that contains information flows. The uncertainty of the information is then quantified using Conant's model, an information-theoretic approach.
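
    The basic information-theoretic quantity underlying such quantification is the Shannon entropy, H = -Σ p·log2(p), in bits. A minimal sketch (a generic illustration, not Conant's full multi-stage model; the alarm-state example is hypothetical):

        import math

        def shannon_entropy(probs):
            """Entropy in bits of a discrete distribution."""
            return -sum(p * math.log2(p) for p in probs if p > 0)

        # e.g., an operator discriminating four equally likely alarm states
        print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # -> 2.0 bits per observation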

  16. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations of previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  17. Repulsion-based model for contact angle saturation in electrowetting.

    Science.gov (United States)

    Ali, Hassan Abdelmoumen Abdellah; Mohamed, Hany Ahmed; Abdelgawad, Mohamed

    2015-01-01

    We introduce a new model for the contact angle saturation phenomenon in electrowetting-on-dielectric systems. This new model attributes contact angle saturation to repulsion between trapped charges on the cap and base surfaces of the droplet in the vicinity of the three-phase contact line, which prevents these surfaces from converging during contact angle reduction. This repulsion-based saturation is similar to the repulsion between charges accumulated on the surfaces of conducting droplets, which causes the well-known Coulombic fission and Taylor cone formation phenomena. In our model, both the droplet and the dielectric coating were treated as lossy dielectric media (i.e., having finite electrical conductivities and permittivities), contrary to the more common assumption of a perfectly conducting droplet and a perfectly insulating dielectric. We used theoretical analysis and numerical simulations to find the actual charge distribution on the droplet surface, calculate the repulsion energy, and minimize the energy of the total system as a function of the droplet contact angle. The resulting saturation curves were in good agreement with previously reported experimental results. We used this proposed model to predict the effect of changing liquid properties, such as electrical conductivity, and system parameters, such as the thickness of the dielectric layer, on the saturation angle, which also matched experimental results.
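
    For context, the ideal (saturation-free) behavior that such models correct is the Young-Lippmann relation, cos(θV) = cos(θ0) + εrε0·V²/(2γd). A minimal sketch of that baseline (the repulsion model itself is not reproduced here; the geometry and material values are illustrative):

        import math

        EPS0 = 8.854e-12                 # vacuum permittivity, F/m

        def lippmann_angle(theta0_deg, V, eps_r, d, gamma):
            """Ideal electrowetting contact angle at voltage V; clamps at full wetting."""
            c = math.cos(math.radians(theta0_deg)) + eps_r * EPS0 * V**2 / (2 * gamma * d)
            return math.degrees(math.acos(min(c, 1.0)))

        # 2 um dielectric (eps_r = 2.65, assumed), water/air surface tension ~0.072 N/m
        for V in (0, 40, 80):
            print(V, round(lippmann_angle(117, V, 2.65, 2e-6, 0.072), 1))

    Experimentally, the measured angle stops following this curve beyond some voltage, which is the saturation the record's model attributes to trapped-charge repulsion.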

  18. Mathematical Analysis for Non-reciprocal-interaction-based Model of Collective Behavior

    Science.gov (United States)

    Kano, Takeshi; Osuka, Koichi; Kawakatsu, Toshihiro; Ishiguro, Akio

    2017-12-01

    In many natural and social systems, collective behaviors emerge as a consequence of non-reciprocal interactions between their constituents. As a first step towards understanding the core principle that underlies these phenomena, we previously proposed a minimal model of collective behavior based on non-reciprocal interactions, drawing inspiration from friendship formation in human society, and demonstrated via simulations that various non-trivial patterns emerge as parameters change. In this study, a mathematical analysis of the proposed model is performed for small system sizes. Through the analysis, the mechanism of the transition between several patterns is elucidated.

  19. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    Science.gov (United States)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. Now, this model has been extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports the results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.

  20. A turbulent time scale based k–ε model for probability density function modeling of turbulence/chemistry interactions: Application to HCCI combustion

    International Nuclear Information System (INIS)

    Maroteaux, Fadila; Pommier, Pierre-Lin

    2013-01-01

    Highlights:
    ► Turbulent time evolution is introduced in the stochastic modeling approach.
    ► The particle number is optimized through a restricted initial distribution.
    ► The initial distribution amplitude is modeled by the magnitude of the turbulence field.

    Abstract: Homogeneous Charge Compression Ignition (HCCI) engine technology is known as an alternative for reducing NOx and particulate matter (PM) emissions. As shown by several experimental studies published in the literature, the ideally homogeneous mixture charge becomes stratified in composition and temperature, and turbulent mixing is found to play an important role in controlling the combustion progress. In a previous study, an IEM model (Interaction by Exchange with the Mean) was used to describe the micromixing in a stochastic reactor model that simulates the HCCI process. The IEM model is a deterministic model based on the principle that each scalar value approaches the mean value over the entire volume with a characteristic mixing time. In that previous model, the turbulent time scale was treated as a fixed parameter. The present study focuses on the development of a micro-mixing time model that takes into account the physical phenomena it stands for. For that purpose, a (k–ε) model is used to express the micro-mixing time. The turbulence model used here is based on a zero-dimensional energy cascade applied during the compression and expansion strokes; mean kinetic energy is converted to turbulent kinetic energy, and turbulent kinetic energy is converted to heat through viscous dissipation. In addition, a relation for calculating the initial amplitude of the heterogeneities is proposed. The comparison of simulation results against experimental data shows overall satisfactory agreement with a variable turbulent time scale.
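
    A sketch of the coupling described above: IEM relaxation of each stochastic particle's scalar toward the ensemble mean, with the mixing time taken from the (k, ε) pair. The constant convention varies between formulations, so C_phi and all particle values below are illustrative, not the paper's calibrated model:

        import numpy as np

        def iem_step(phi, k, eps, dt, c_phi=2.0):
            """Relax each particle scalar toward the mean with tau_mix ~ k/eps."""
            tau_mix = 2.0 * k / (c_phi * eps)     # micro-mixing time; constant is model-dependent
            return phi - (phi - phi.mean()) * dt / tau_mix

        phi = np.array([750.0, 800.0, 850.0, 900.0])   # e.g., particle temperatures (K), assumed
        for _ in range(100):
            phi = iem_step(phi, k=5.0, eps=50.0, dt=1e-3)
        print(phi)   # the spread shrinks toward the mean as mixing proceeds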

  1. Analytical modeling of a sandwiched plate piezoelectric transformer-based acoustic-electric transmission channel.

    Science.gov (United States)

    Lawry, Tristan J; Wilt, Kyle R; Scarton, Henry A; Saulnier, Gary J

    2012-11-01

    The linear propagation of electromagnetic and dilatational waves through a sandwiched plate piezoelectric transformer (SPPT)-based acoustic-electric transmission channel is modeled using the transfer matrix method with mixed-domain two-port ABCD parameters. This SPPT structure is of great interest because it has been explored in recent years as a mechanism for wireless transmission of electrical signals through solid metallic barriers using ultrasound. The model we present is developed to allow for accurate channel performance prediction while greatly reducing the computational complexity associated with 2- and 3-dimensional finite element analysis. As a result, the model primarily considers 1-dimensional wave propagation; however, approximate solutions for higher-dimensional phenomena (e.g., diffraction in the SPPT's metallic core layer) are also incorporated. The model is then assessed by comparing it to the measured wideband frequency response of a physical SPPT-based channel from our previous work. Very strong agreement between the modeled and measured data is observed, confirming the accuracy and utility of the presented model.
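
    The transfer-matrix idea referenced above treats each layer of the channel as a two-port ABCD matrix, with the end-to-end response given by the ordered matrix product. A minimal sketch; the uniform-line segments below are placeholders, not the paper's piezoelectric and metal-core layer models:

        import numpy as np

        def cascade(*abcd_matrices):
            """Chain two-port ABCD matrices in propagation order."""
            total = np.eye(2, dtype=complex)
            for m in abcd_matrices:
                total = total @ m
            return total

        def line_segment(theta, z0):
            """ABCD matrix of a lossless transmission-line segment of
            electrical length theta (rad) and characteristic impedance z0."""
            return np.array([[np.cos(theta), 1j * z0 * np.sin(theta)],
                             [1j * np.sin(theta) / z0, np.cos(theta)]])

        # Hypothetical three-layer stack at one frequency point
        chain = cascade(line_segment(0.3, 30.0), line_segment(1.2, 17.0), line_segment(0.3, 30.0))
        print(chain)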

  2. Derivation of the mean annual water-energy balance model based on an Ohms-type law

    Science.gov (United States)

    Li, X.; Shan, X.; Yang, H.

    2017-12-01

    The Budyko hypothesis is used to describe the water and energy partition. Many empirical and analytical solutions have been proposed to evaluate the general solution, which can be described as E/P = F(E0/P, c), where c is a parameter. Previous studies have given derivations of the Mezentsev-Choudhury-Yang (MCY) model based on dimensional analysis and mathematical reasoning, but with little hydrological process representation. Thus, further hydrological meaning is confined to the boundary conditions, which are difficult to explore. Noting that the hydrologic cycle is always forced by energy conversion and atmospheric transport, and that there is a parallel between electric circuits and atmospheric motions, we try to give a new derivation of the MCY model from a conceptual model that considers hydrologic fluxes and atmospheric motions. Here an analogy between Ohm's law and the atmospheric cycle is used, aiming to describe the partition of water on a long-term timescale. The MCY model is then derived in a new form, which is based on more physical explanation than the mathematical reasoning proposed in previous studies. The implications of this derivation are also explored.
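
    For reference, the MCY form of the Budyko curve is E = P·E0 / (P^n + E0^n)^(1/n), i.e. E/P = F(E0/P, n), where n plays the role of the parameter c in the abstract's notation. A minimal sketch evaluating the evaporation ratio across aridity values (the n value is illustrative):

        def mcy_evaporation_ratio(aridity, n):
            """aridity = E0/P (dryness index); returns E/P on the MCY curve."""
            return aridity / (1.0 + aridity**n) ** (1.0 / n)

        for phi in (0.5, 1.0, 2.0):
            print(phi, round(mcy_evaporation_ratio(phi, n=1.8), 3))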

  3. Agent-Based and Macroscopic Modeling of the Complex Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Aleksejus Kononovičius

    2013-08-01

    policy making tools in order to cope with the social transformations in the contemporary society. Originality/Value – The relationship between the inter-individual and the collective behavior is an interesting topic considered to be coming from rather different fields by many scientists. Yet the topic has received due attention only in recent years. Consequently, truly systematic approaches directly bridging between these two concepts are somewhat rare. These approaches also differ among themselves – some of the research groups use questionnaires to understand the individual incentives of humans, some suggest varying applications of known physical models, and some have roots in behavioral economics and utility optimization. Our approach in this sense is unique, as we start from a simple agent-based herding model and use ideas from statistical physics to obtain its macroscopic treatments for different socio-economic scenarios. In this contribution we present our previous approaches, namely considering new product diffusion in the market and also a financial market model, as well as our most recent results, related to leadership in social communities and predator-prey type competition in socio-economic systems. To the best of the authors' knowledge, the correspondence between the considered simple agent-based herding model and the considered macroscopic models was not previously discussed by other research groups. Research type: research paper.

  4. 75 FR 57844 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2010-09-23

    ... Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft... Previously Held by Israel Aircraft Industries, Ltd.): Amendment 39-16438. Docket No. FAA-2010-0555... (Type Certificate previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and Gulfstream 200...

  5. 77 FR 64767 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2012-10-23

    ... Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Airplanes AGENCY... airworthiness directive (AD) for certain Gulfstream Aerospace LP (Type Certificate previously held by Israel... Certificate previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and Gulfstream 200 airplanes...

  6. 78 FR 11567 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2013-02-19

    ... Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft... Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Model Gulfstream G150... Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.): Amendment 39...

  7. 76 FR 70040 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2011-11-10

    ... Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft... Aerospace LP (type certificate previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and... new AD: 2011-23-07 Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft...

  8. Low-lying qq(qq)-bar states in a relativistic model based on the Bethe-Salpeter equation

    International Nuclear Information System (INIS)

    Ram, B.; Kriss, V.

    1985-01-01

    Low-lying qq(qq)-bar states are analysed in a previously given relativistic model based on the Bethe-Salpeter equation. We obtain no M-diquonia, P-mesonia, or meson molecules, but we do obtain T-diquonia.

  9. Nitrous Oxide Production in a Granule-based Partial Nitritation Reactor: A Model-based Evaluation.

    Science.gov (United States)

    Peng, Lai; Sun, Jing; Liu, Yiwen; Dai, Xiaohu; Ni, Bing-Jie

    2017-04-03

    Sustainable wastewater treatment has been attracting increasing attention over the past decades. However, the production of nitrous oxide (N₂O), a potent greenhouse gas, from energy-efficient granule-based autotrophic nitrogen removal is largely unknown. This study applied a previously established N₂O model, which incorporates two N₂O production pathways by ammonia-oxidizing bacteria (AOB): AOB denitrification and hydroxylamine (NH₂OH) oxidation. The two-pathway model was used to describe N₂O production from a granule-based partial nitritation (PN) reactor and to provide insights into the N₂O distribution inside granules. The model was evaluated by comparing simulation results with N₂O monitoring profiles as well as isotopic measurement data from the PN reactor. The model demonstrated good predictive ability against N₂O dynamics and provided useful information about the shift of N₂O production pathways inside granules for the first time. The simulation results indicated that increases in oxygen concentration and granule size would significantly enhance N₂O production. The results further revealed a linear relationship between N₂O production and ammonia oxidation rate (AOR) (R² = 0.99) under conditions of varying oxygen levels and granule diameters, suggesting that bulk oxygen and granule size may exert an indirect effect on N₂O production by causing a change in AOR.

  10. Five criteria for using a surrogate endpoint to predict treatment effect based on data from multiple previous trials.

    Science.gov (United States)

    Baker, Stuart G

    2018-02-20

    A surrogate endpoint in a randomized clinical trial is an endpoint that occurs after randomization and before the true, clinically meaningful, endpoint that yields conclusions about the effect of treatment on true endpoint. A surrogate endpoint can accelerate the evaluation of new treatments but at the risk of misleading conclusions. Therefore, criteria are needed for deciding whether to use a surrogate endpoint in a new trial. For the meta-analytic setting of multiple previous trials, each with the same pair of surrogate and true endpoints, this article formulates 5 criteria for using a surrogate endpoint in a new trial to predict the effect of treatment on the true endpoint in the new trial. The first 2 criteria, which are easily computed from a zero-intercept linear random effects model, involve statistical considerations: an acceptable sample size multiplier and an acceptable prediction separation score. The remaining 3 criteria involve clinical and biological considerations: similarity of biological mechanisms of treatments between the new trial and previous trials, similarity of secondary treatments following the surrogate endpoint between the new trial and previous trials, and a negligible risk of harmful side effects arising after the observation of the surrogate endpoint in the new trial. These 5 criteria constitute an appropriately high bar for using a surrogate endpoint to make a definitive treatment recommendation. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  11. Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method

    Directory of Open Access Journals (Sweden)

    W Ghie

    2016-04-01

    Full Text Available One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method's capability in handling three-dimensional geometry.

  12. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)

  13. Volume-based geometric modeling for radiation transport calculations

    International Nuclear Information System (INIS)

    Li, Z.; Williamson, J.F.

    1992-01-01

    Accurate theoretical characterization of radiation fields is a valuable tool in the design of complex systems, such as linac heads and intracavitary applicators, and for the generation of basic dose calculation data that is inaccessible to experimental measurement. Both Monte Carlo and deterministic solutions to such problems require a system for accurately modeling complex 3-D geometries that supports ray tracing, point and segment classification, and 2-D graphical representation. Previous combinatorial approaches to solid modeling, which involve describing complex structures as set-theoretic combinations of simple objects, are limited in their ease of use and place unrealistic constraints on the geometric relations between objects, such as excluding shared boundaries. A new approach to volume-based solid modeling has been developed which is based upon topologically consistent definitions of the boundary, interior, and exterior of a region. From these definitions, FORTRAN union, intersection, and difference routines have been developed that allow involuted and deeply nested structures to be described as set-theoretic combinations of ellipsoids, elliptic cylinders, prisms, cones, and planes that accommodate shared boundaries. Line segments between adjacent intersections on a trajectory are assigned to the appropriate region by a novel sorting algorithm that generalizes upon Siddon's approach. Two 2-D graphic display tools were developed to help debug a given geometric model. In this paper, the mathematical basis of our system is described, contrasted with other approaches, and illustrated with examples.
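
    The core set-theoretic idea, regions as membership predicates combined by union, intersection, and difference, can be sketched compactly (rendered here in Python for brevity rather than the system's FORTRAN, and omitting its boundary-consistent definitions and ray tracing):

        from typing import Callable

        Region = Callable[[float, float, float], bool]

        def sphere(cx, cy, cz, r) -> Region:
            return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

        def union(a: Region, b: Region) -> Region:
            return lambda x, y, z: a(x, y, z) or b(x, y, z)

        def intersection(a: Region, b: Region) -> Region:
            return lambda x, y, z: a(x, y, z) and b(x, y, z)

        def difference(a: Region, b: Region) -> Region:
            return lambda x, y, z: a(x, y, z) and not b(x, y, z)

        # A hollow shell: outer sphere minus inner sphere
        shell = difference(sphere(0, 0, 0, 2.0), sphere(0, 0, 0, 1.0))
        print(shell(1.5, 0, 0), shell(0.5, 0, 0))   # True, False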

  14. Goal-Based Domain Modeling as a Basis for Cross-Disciplinary Systems Engineering

    Science.gov (United States)

    Jarke, Matthias; Nissen, Hans W.; Rose, Thomas; Schmitz, Dominik

    Small and medium-sized enterprises (SMEs) are important drivers of innovation. In particular, project-driven SMEs that closely cooperate with their customers have specific needs with regard to the information engineering of their development process. They need fast requirements capture, since this is most often included in the (unpaid) offer development phase. At the same time, they need to maintain and extensively reuse the knowledge and experience they have gathered in previous projects, as it is their core asset. The situation is complicated further if the application field crosses disciplinary boundaries. To bridge the gaps and perspectives, we focus on shared goals and dependencies captured in models at a conceptual level. Such a model-based approach also offers a smarter connection to subsequent development stages, including a high share of automated code generation. In the approach presented here, the agent- and goal-oriented formalism i* is therefore extended by domain models to facilitate information organization. This extension permits a domain-model-based similarity search and a model-based transformation towards subsequent development stages. Our approach also addresses the evolution of domain models, reflecting the experiences from completed projects. The approach is illustrated with a case study on software-intensive control systems in an SME of the automotive domain.

  15. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    Science.gov (United States)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics is a lesson related to students' daily experience. Therefore, before studying it formally in class, students already have a visualization of, and prior knowledge about, natural phenomena, and they can widen it themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models agree with and build on the right concepts. A previous study held in MAN 1 Muna indicated that in the learning process the teacher did not pay attention to students' mental models. As a consequence, the learning process had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. This study uses a pre-experimental design with one group pretest-posttest, conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection uses a problem-solving test on the concept of the kinetic theory of gases, together with interviews, to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores x > 7, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The result shows that a problem-solving based learning model with a multiple representations approach can be an alternative to be applied in improving students' MMA.

  16. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    Science.gov (United States)

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameter that controls earthquake-activity rates, compared with what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. The models considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, to the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. The resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  17. Obstructive pulmonary disease in patients with previous tuberculosis ...

    African Journals Online (AJOL)

    Obstructive pulmonary disease in patients with previous tuberculosis: Pathophysiology of a community-based cohort. B.W. Allwood, R Gillespie, M Galperin-Aizenberg, M Bateman, H Olckers, L Taborda-Barata, G.L. Calligaro, Q Said-Hartley, R van Zyl-Smit, C.B. Cooper, E van Rikxoort, J Goldin, N Beyers, E.D. Bateman ...

  18. Study of the attractor structure of an agent-based sociological model

    Energy Technology Data Exchange (ETDEWEB)

    Timpanaro, Andre M; Prado, Carmen P C, E-mail: timpa@if.usp.br, E-mail: prado@if.usp.br [Instituto de Fisica da Universidade de Sao Paulo, Sao Paulo (Brazil)

    2011-03-01

    The Sznajd model is a sociophysics model that is based on the Potts model and is used to describe opinion propagation in a society. It employs an agent-based approach and interaction rules favouring pairs of agreeing agents. It has been successfully employed in modeling some properties and scale features of both proportional and majority elections (see, for instance, the works of A. T. Bernardes and R. N. Costa Filho), but its stationary states are always consensus states. In order to explain more complicated behaviours, we have modified the bounded confidence idea (introduced before in other opinion models, like the Deffuant model) with the introduction of prejudices and biases (we call this modification confidence rules), and have adapted it to the discrete Sznajd model. This generalized Sznajd model is able to reproduce almost all of the previous versions of the Sznajd model using appropriate choices of parameters. We solved the attractor structure of the resulting model in a mean-field approach and made Monte Carlo simulations on a Barabasi-Albert network. These simulations show great similarities with the mean-field results for the tested cases of 3 and 4 opinions. The dynamical systems approach that we devised allows for a deeper understanding of the potential of the Sznajd model as an opinion propagation model and can be easily extended to other models, like the voter model. Our modification of the bounded confidence rule can also be readily applied to other opinion propagation models.

  19. Dynamic model based on voltage transfer curve for pattern formation in dielectric barrier glow discharge

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ben; He, Feng; Ouyang, Jiting, E-mail: jtouyang@bit.edu.cn [School of Physics, Beijing Institute of Technology, Beijing 100081 (China); Duan, Xiaoxi [Research Center of Laser Fusion, CAEP, Mianyang 621900 (China)

    2015-12-15

    Simulation work is very important for understanding the formation of self-organized discharge patterns. Previous works have adopted various models derived from other systems for the simulation of discharge patterns, but most of these models are complicated and time-consuming. In this paper, we introduce a convenient phenomenological dynamic model based on the basic dynamic process of glow discharge and the voltage transfer curve (VTC) to study dielectric barrier glow discharge (DBGD) patterns. The VTC, an important characteristic of DBGD, plots the change of wall voltage after a discharge as a function of the initial total gap voltage. In the modeling, the combined effect of the discharge conditions is included in the VTC, and the activation-inhibition effect is expressed by a spatial interaction term. In addition, the model reduces the dimensionality of the system by considering only the integrated effect of current flow. All these greatly facilitate the construction of this model. Numerical simulations turn out to be in good accordance with our previous fluid modeling and experimental results.

  20. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

    (Only front-matter fragments of this report are preserved: a list-of-figures entry, "Figure 6. Published Web Page from Data Collection Software", and a passage introducing the terms Model-Based Engineering (MBE), Model-Driven Engineering (MDE), and Model-Based Systems Engineering.)

  1. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighting strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a weight, the next iteration has a bigger chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies with comparisons to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
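
    A sketch of the baseline FOCUSS-style re-weighted iteration that CMOSS builds on, W_k = diag(|x_k|), x_{k+1} = W_k·pinv(A·W_k)·b; the neighbor-weight modification itself is not shown, and the lead-field matrix below is a random toy stand-in:

        import numpy as np

        def focuss(A, b, n_iter=20, eps=1e-12):
            """Iteratively re-weighted minimum-norm solution that promotes sparsity."""
            x = np.linalg.pinv(A) @ b                 # minimum-norm start
            for _ in range(n_iter):
                W = np.diag(np.abs(x) + eps)          # re-weight by previous solution
                x = W @ np.linalg.pinv(A @ W) @ b
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((8, 40))              # toy "lead field": 8 sensors, 40 sources
        x_true = np.zeros(40); x_true[[5, 22]] = [1.0, -0.7]
        x_hat = focuss(A, A @ x_true)
        print(np.flatnonzero(np.abs(x_hat) > 0.1))    # should concentrate near indices 5, 22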

  2. Vibration Based Diagnosis for Planetary Gearboxes Using an Analytical Model

    Directory of Open Access Journals (Sweden)

    Liu Hong

    2016-01-01

    Full Text Available The application of conventional vibration-based diagnostic techniques to planetary gearboxes is a challenge because of the complexity of frequency components in the measured spectrum, which is the result of relative motions between the rotating planets and the fixed accelerometer. In practice, since the fault signatures are usually contaminated by noise and by vibrations from other mechanical components of the gearbox, the diagnostic efficacy may deteriorate further. Thus, it is essential to develop a novel vibration-based scheme to diagnose gear failures in planetary gearboxes. Following a brief literature review, the paper begins with the introduction of an analytical model of planetary gear-sets developed by the authors in previous works, which can predict the distinct behaviors of fault-introduced sidebands. This analytical model is easy to implement because the only prerequisite information is the basic geometry of the planetary gear-set. Afterwards, an automated diagnostic scheme is proposed to cope with the challenges associated with the characteristic configuration of planetary gearboxes. The proposed vibration-based scheme integrates the analytical model, a denoising algorithm, and frequency-domain indicators into one synergistic system for the detection and identification of damaged gear teeth in planetary gearboxes. Its performance is validated with dynamic simulations and experimental data from a planetary gearbox test rig.

  3. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has…

  4. Habitat-Based Density Models for Three Cetacean Species off Southern California Illustrate Pronounced Seasonal Differences

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2017-05-01

    Full Text Available Managing marine species effectively requires spatially and temporally explicit knowledge of their density and distribution. Habitat-based density models, a type of species distribution model (SDM) that uses habitat covariates to estimate species density and distribution patterns, are increasingly used for marine management and conservation because they provide a tool for assessing potential impacts (e.g., from fishery bycatch, ship strikes, anthropogenic sound) over a variety of spatial and temporal scales. The abundance and distribution of many pelagic species exhibit substantial seasonal variability, highlighting the importance of predicting density specific to the season of interest. This is particularly true in dynamic regions like the California Current, where significant seasonal shifts in cetacean distribution have been documented at coarse scales. Finer-scale (10 km) habitat-based density models were previously developed for many cetacean species occurring in this region, but most models were limited to summer/fall. The objectives of our study were two-fold: (1) develop spatially explicit density estimates for winter/spring to support management applications, and (2) compare model-predicted density and distribution patterns to previously developed summer/fall model results in the context of species ecology. We used a well-established Generalized Additive Modeling framework to develop cetacean SDMs based on 20 California Cooperative Oceanic Fisheries Investigations (CalCOFI) shipboard surveys conducted during winter and spring between 2005 and 2015. Models were fit for short-beaked common dolphin (Delphinus delphis delphis), Dall's porpoise (Phocoenoides dalli), and humpback whale (Megaptera novaeangliae). Model performance was evaluated based on a variety of established metrics, including the percentage of explained deviance, ratios of observed to predicted density, and visual inspection of predicted and observed distributions. Final models were

  5. Reverse engineering of logic-based differential equation models using a mixed-integer dynamic optimization approach.

    Science.gov (United States)

    Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R

    2015-09-15

    Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge, or of new experimental data that contradict a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links, subject to great uncertainty or no information about the kinetic parameters. In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk. © The Author 2015. Published by Oxford University Press.
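
    A toy illustration of the core idea, an ODE whose structure is switched by a binary parameter: here w selects whether a hypothetical activation of x2 by x1 is present, and MIDO would search over such binaries jointly with the real-valued rates. All names, rates, and the Hill-type gate below are illustrative, not the paper's models:

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, x, w, k_act, k_deg):
            x1, x2 = x
            act = w * (x1 / (1.0 + x1))          # candidate regulatory link, on/off via w
            return [1.0 - 0.5 * x1,              # x1: constant production, linear decay
                    k_act * act - k_deg * x2]    # x2: driven only if the link exists

        for w in (0, 1):
            sol = solve_ivp(rhs, (0, 20), [0.1, 0.1], args=(w, 2.0, 0.3))
            print(w, round(sol.y[1, -1], 2))     # level of x2 with and without the link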

  6. An Efficient Upscaling Procedure Based on Stokes-Brinkman Model and Discrete Fracture Network Method for Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Qin, Guan; Bi, Linfeng; Popov, Peter; Efendiev, Yalchin; Espedal, Magne

    2010-01-01

    , fractures and their interconnectivities in coarse-scale simulation models. In this paper, we present a procedure based on our previously proposed Stokes-Brinkman model (SPE 125593) and the discrete fracture network method for accurate and efficient upscaling

  7. Structured prediction models for RNN based sequence labeling in clinical text.

    Science.gov (United States)

    Jagannatha, Abhyuday N; Yu, Hong

    2016-11-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medications, indications, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models combined with recurrent neural networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.
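
    The pairwise potentials mentioned above enter CRF inference as label-to-label transition scores that are decoded jointly with the per-token (RNN) emission scores. A minimal NumPy Viterbi sketch, with synthetic scores and hypothetical clinical tags standing in for the paper's trained models:

```python
# Hedged sketch: Viterbi decoding over RNN emission scores plus pairwise
# transition potentials, the core inference step in an LSTM-CRF tagger.
import numpy as np

labels = ["O", "B-Drug", "I-Drug"]        # hypothetical clinical NER tags
T, L = 5, len(labels)
rng = np.random.default_rng(0)
emissions = rng.normal(size=(T, L))       # per-token scores, e.g. from an LSTM
transitions = np.zeros((L, L))            # pairwise potentials between labels
transitions[0, 2] = -1e4                  # forbid the transition O -> I-Drug

def viterbi(emissions, transitions):
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        # total[i, j]: best score ending in label i at t-1, then label j at t
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):          # backtrace the best sequence
        path.append(int(back[t][path[-1]]))
    return [labels[i] for i in reversed(path)]

print(viterbi(emissions, transitions))
```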

  8. Late preterm birth and previous cesarean section: a population-based cohort study.

    Science.gov (United States)

    Yasseen Iii, Abdool S; Bassil, Kate; Sprague, Ann; Urquia, Marcelo; Maguire, Jonathon L

    2018-02-21

    Late preterm birth (LPB) is increasingly common and associated with higher morbidity and mortality than term birth. Yet little is known about the influence of previous cesarean section (PCS) on the occurrence of LPB in subsequent pregnancies. We aimed to evaluate this association, along with the potential mediation by cesarean sections in the current pregnancy. We used population-based birth registry data (2005-2012) to establish a cohort of live-born singleton infants born between 34 and 41 gestational weeks to multiparous mothers. PCS was the primary exposure, LPB (34-36 weeks) was the primary outcome, and an unplanned or emergency cesarean section in the current pregnancy was the potential mediator. Associations were quantified using propensity-weighted multivariable Poisson regression, and mediating associations were explored using the Baron-Kenny approach. The cohort included 481,531 births, of which 21,893 (4.5%) were LPB and 119,983 (24.9%) were predated by at least one PCS. Among mothers with at least one PCS, 6307 (5.26%) gave birth late preterm. There was an increased risk of LPB among women with at least one PCS (adjusted relative risk (aRR): 1.20; 95% CI [1.16, 1.23]). An unplanned or emergency cesarean section in the current pregnancy was identified as a strong mediator of this relationship (mediation ratio = 97%). PCS was associated with a higher risk of LPB in subsequent pregnancies. This may be due to an increased risk of subsequent unplanned or emergency preterm cesarean sections. Efforts to minimize index cesarean sections may reduce the risk of LPB in subsequent pregnancies.
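
    A minimal sketch of the analysis pattern described above: propensity (inverse-probability) weights feeding a Poisson regression whose exponentiated coefficient estimates a relative risk. The data are synthetic and the covariate set and effect size are illustrative, not the registry data used in the study:

```python
# Hedged sketch: propensity-weighted Poisson regression for a relative risk,
# mirroring the study's design on invented data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20000
df = pd.DataFrame({"age": rng.normal(30, 5, n)})
# Exposure: previous cesarean section, here made to depend on maternal age
p_pcs = 1 / (1 + np.exp(-(-2.5 + 0.05 * df.age)))
df["pcs"] = rng.binomial(1, p_pcs)
# Outcome: late preterm birth, with a true RR of ~1.2 for PCS
p_lpb = 0.04 * np.where(df.pcs == 1, 1.2, 1.0)
df["lpb"] = rng.binomial(1, p_lpb)

# Propensity model and stabilized inverse-probability weights
ps = sm.Logit(df.pcs, sm.add_constant(df[["age"]])).fit(disp=0).predict()
w = np.where(df.pcs == 1, df.pcs.mean() / ps, (1 - df.pcs.mean()) / (1 - ps))

# Weighted Poisson regression; exp(beta) estimates the relative risk
X = sm.add_constant(df[["pcs"]])
fit = sm.GLM(df.lpb, X, family=sm.families.Poisson(), var_weights=w).fit()
print("aRR for PCS: %.2f" % np.exp(fit.params["pcs"]))
```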

  9. Satellite Based Downward Long Wave Radiation by Various Models in Northeast Asia

    Directory of Open Access Journals (Sweden)

    Chanyang Sur

    2014-01-01

    Satellite-based downward long wave radiation (Rld) measurement under clear sky conditions in Northeast Asia was conducted using five well-known physical models (Brunt 1932; Idso and Jackson 1969; Brutsaert 1975; Satterlund 1979; Prata 1996) together with a newly proposed global Rld model (Abramowitz et al. 2012). Data from two flux towers in South Korea were used to validate downward long wave radiation. Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric profile products were used to develop the Rld models. The overall root mean square error (RMSE) of MODIS Rld with respect to the two ecosystem-type flux towers was ≈20 W m⁻². Based on the statistical analyses, MODIS Rld estimates with the Brutsaert (1975) and Abramowitz et al. (2012) models were the most applicable for evaluating Rld under clear sky conditions in Northeast Asia. The Abramowitz Rld maps with MODIS Ta and ea showed reasonable seasonal patterns, which were well aligned with other biophysical variables reported in previous studies. The MODIS Rld map developed in this study will be very useful for identifying spatial patterns that are not detectable from ground-based Rld measurement sites.
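
    Two of the clear-sky emissivity formulations named above are simple enough to state directly; the sketch below implements Brunt (1932) and Brutsaert (1975) and converts emissivity to Rld via the Stefan-Boltzmann law. The coefficient values are the commonly cited ones and are assumptions to be checked against the paper before reuse:

```python
# Hedged sketch: two classical clear-sky emissivity models and the resulting
# downward long wave radiation Rld = eps * sigma * Ta^4.
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brunt_emissivity(ea_hpa, a=0.605, b=0.048):
    # Brunt (1932): eps = a + b * sqrt(ea); ea is vapor pressure in hPa.
    # Coefficients a, b are commonly cited values, assumed here.
    return a + b * np.sqrt(ea_hpa)

def brutsaert_emissivity(ea_hpa, ta_k):
    # Brutsaert (1975): eps = 1.24 * (ea / Ta)^(1/7), ea in hPa, Ta in K
    return 1.24 * (ea_hpa / ta_k) ** (1.0 / 7.0)

def rld(emissivity, ta_k):
    return emissivity * SIGMA * ta_k ** 4

ta, ea = 293.15, 12.0  # e.g., 20 deg C air temperature, 12 hPa vapor pressure
for name, eps in [("Brunt", brunt_emissivity(ea)),
                  ("Brutsaert", brutsaert_emissivity(ea, ta))]:
    print(f"{name}: eps={eps:.3f}, Rld={rld(eps, ta):.1f} W m^-2")
```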

  10. A Lattice-Based Identity-Based Proxy Blind Signature Scheme in the Standard Model

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    A proxy blind signature scheme is a special form of blind signature which allows a designated person, called the proxy signer, to sign on behalf of the original signer without knowing the content of the message. It combines the advantages of proxy signatures and blind signatures. To date, most proxy blind signature schemes rely on hard number-theoretic problems such as the discrete logarithm, or on bilinear pairings. Unfortunately, these underlying problems will be solvable in the post-quantum era. Lattice-based cryptography is enjoying great interest these days, due to its implementation simplicity and provable security reductions. Moreover, lattice-based cryptography is believed to be hard even for quantum computers. In this paper, we present a new identity-based proxy blind signature scheme from lattices without random oracles. The new scheme is proven to be strongly unforgeable under the standard hardness assumptions of the short integer solution problem (SIS) and the inhomogeneous small integer solution problem (ISIS). Furthermore, the secret key size and the signature length of our scheme are invariant and much shorter than those of previous lattice-based proxy blind signature schemes. To the best of our knowledge, our construction is the first short lattice-based identity-based proxy blind signature scheme in the standard model.
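
    To give a feel for the SIS assumption underpinning the security proof, the toy below builds an Ajtai-style hash h = Ax mod q with NumPy; finding a short nonzero x that hashes to zero is the SIS problem. The dimensions are far too small for real security and the construction is illustrative only, not the paper's scheme:

```python
# Hedged sketch: a toy Ajtai-style SIS hash. Demonstration only; parameters
# are not secure and this is not the signature scheme from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 32, 257                  # toy dimensions and modulus
A = rng.integers(0, q, size=(n, m))   # public random matrix over Z_q

def hash_short_vector(x):
    # h = A x mod q; a collision of two short inputs yields a short
    # nonzero solution to A z = 0 (mod q), i.e. an SIS solution
    return (A @ x) % q

x = rng.integers(-1, 2, size=m)       # a "short" vector with entries in {-1,0,1}
print("digest:", hash_short_vector(x))
```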

  11. Guide to APA-Based Models

    Science.gov (United States)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-based models were scored using data from three airports. This report is a guide to the APA-based models.

  12. Developing an agent-based model on how different individuals solve complex problems

    Directory of Open Access Journals (Sweden)

    Ipek Bozkurt

    2015-01-01

    Purpose: Research that focuses on the emotional, mental, behavioral and cognitive capabilities of individuals has been abundant within disciplines such as psychology, sociology, and anthropology, among others. However, when facing complex problems, a new perspective for understanding individuals is necessary. The main purpose of this paper is to develop an agent-based model and simulation to gain understanding of the decision-making and problem-solving abilities of individuals. Design/Methodology/Approach: The micro-level modeling and simulation paradigm of Agent-Based Modeling is used. Through Agent-Based Modeling, insight is gained into how different individuals with different profiles deal with complex problems. Using previous literature from different bodies of knowledge, established theories and certain assumptions as input parameters, a model is built and executed through a computer simulation. Findings: The results indicate that individuals with certain profiles have better capabilities to deal with complex problems. Moderate profiles could solve the entire complex problem, whereas profiles at extreme conditions could not. This indicates that having a strong predisposition is not ideal when approaching complex problems, and there should always be a component from the other perspective. The probability that an individual may use the capabilities provided by the opposite predisposition proves to be a useful option. Originality/value: The originality of the present research stems from how individuals are profiled, and from the model and simulation built to understand how they solve complex problems. The development of the agent-based model adds value to the existing body of knowledge within both the social sciences and modeling and simulation.

  13. A modification of a previous model for inflammatory tooth pain: Effects of different capsaicin and formalin concentrations and ibuprofen

    Directory of Open Access Journals (Sweden)

    Maryam Raoof DDS, MS

    2012-09-01

    BACKGROUND AND AIM: This study aimed to solve the problems faced with the previous model of inflammatory tooth pain in rats. METHODS: After cutting 2 mm of the distal extremities, polyethylene crowns were placed on the mandibular incisors. In contrast to the original model, we used flowable composite instead of wire in order to maximize the retention of the crowns. Different concentrations of capsaicin (10, 25 and 100 mg/ml) and formalin were administered into the cavities under the crowns. The algesic agent-induced behaviors were evaluated. RESULTS: The modified model had no liquid leakage. Furthermore, the composite allowed the crowns to remain in place for a longer period of time. Capsaicin 25 and 100 mg/ml and formalin applications induced significantly more painful stimulation compared with control groups (P < 0.001). These responses were significantly reduced by the administration of ibuprofen 20 minutes prior to the capsaicin 100 mg/ml injection. CONCLUSIONS: This model seems to be adequate for long-term pain-related experiments in which elimination of fluid leakage is important.

  14. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    Science.gov (United States)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low-level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1 MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).

  15. Interpreting "Personality" Taxonomies: Why Previous Models Cannot Capture Individual-Specific Experiencing, Behaviour, Functioning and Development. Major Taxonomic Tasks Still Lay Ahead.

    Science.gov (United States)

    Uher, Jana

    2015-12-01

    As science seeks to make generalisations, a science of individual peculiarities encounters intricate challenges. This article explores these challenges by applying the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) and by exploring taxonomic "personality" research as an example. Analyses of researchers' interpretations of the taxonomic "personality" models, constructs and data that have been generated in the field reveal widespread erroneous assumptions about the abilities of previous methodologies to appropriately represent individual-specificity in the targeted phenomena. These assumptions, rooted in everyday thinking, fail to consider that individual-specificity and others' minds cannot be directly perceived, that abstract descriptions cannot serve as causal explanations, that between-individual structures cannot be isomorphic to within-individual structures, and that knowledge of compositional structures cannot explain the process structures of their functioning and development. These erroneous assumptions and serious methodological deficiencies in widely used standardised questionnaires have effectively prevented psychologists from establishing taxonomies that can comprehensively model individual-specificity in most of the kinds of phenomena explored as "personality", especially in experiencing and behaviour and in individuals' functioning and development. Contrary to previous assumptions, it is not universal models but rather different kinds of taxonomic models that are required for each of the different kinds of phenomena, variations and structures that are commonly conceived of as "personality". Consequently, to comprehensively explore individual-specificity, researchers have to apply a portfolio of complementary methodologies and develop different kinds of taxonomies, most of which have yet to be developed. In closing, the article derives some meta-desiderata for future research on individuals' "personality".

  16. Agile Model Driven Development of Electronic Health Record-Based Specialty Population Registries

    Science.gov (United States)

    Kannan, Vaishnavi; Fish, Jason C.; Willett, DuWayne L.

    2018-01-01

    The transformation of the American healthcare payment system from fee-for-service to value-based care increasingly makes it valuable to develop patient registries for specialized populations, to better assess healthcare quality and costs. Recent widespread adoption of Electronic Health Records (EHRs) in the U.S. now makes possible construction of EHR-based specialty registry data collection tools and reports, previously unfeasible using manual chart abstraction. But the complexities of specialty registry EHR tools and measures, along with the variety of stakeholders involved, can result in misunderstood requirements and frequent product change requests, as users first experience the tools in their actual clinical workflows. Such requirements churn could easily stall progress in specialty registry rollout. Modeling a system’s requirements and solution design can be a powerful way to remove ambiguities, facilitate shared understanding, and help evolve a design to meet newly-discovered needs. “Agile Modeling” retains these values while avoiding excessive unused up-front modeling in favor of iterative incremental modeling. Using Agile Modeling principles and practices, in calendar year 2015 one institution developed 58 EHR-based specialty registries, with 111 new data collection tools, supporting 134 clinical process and outcome measures, and enrolling over 16,000 patients. The subset of UML and non-UML models found most consistently useful in designing, building, and iteratively evolving EHR-based specialty registries included User Stories, Domain Models, Use Case Diagrams, Decision Trees, Graphical User Interface Storyboards, Use Case text descriptions, and Solution Class Diagrams. PMID:29750222

  17. Fuzzy Case-Based Reasoning in Product Style Acquisition Incorporating Valence-Arousal-Based Emotional Cellular Model

    Directory of Open Access Journals (Sweden)

    Fuqian Shi

    2012-01-01

    Emotional cellular (EC), proposed in our previous works, is a kind of semantic cell that contains a kernel and a shell, where the kernel is formalized by a triple L = ⟨P, d, δ⟩: P denotes a typical set of positive examples relative to word L, d is a pseudo-distance measure on the two-dimensional emotional space valence-arousal, and δ is a probability density function on the positive real number field. The basic idea of the EC model is to assume that the neighborhood radius of each semantic concept is uncertain, and this uncertainty is measured by the one-dimensional density function δ. In this paper, product form features were evaluated using ECs to establish a product style database, and a fuzzy case-based reasoning (FCBR) model, under a defined similarity measurement based on fuzzy nearest neighbors (FNN) incorporating ECs, was applied to extract product styles. A mathematically formalized inference system for product style, which also incorporates the EC as an uncertainty measurement tool, was proposed. A case study of style acquisition for mobile phones illustrates the effectiveness of the proposed methodology.

  18. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. The modified algorithm improves on the previous study by Mokrane through its mathematically proven existence of a unique solution and its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed-luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  19. Ignition and Growth Modeling of Detonating LX-04 (85% HMX / 15% VITON) Using New and Previously Obtained Experimental Data

    Science.gov (United States)

    Tarver, Craig

    2017-06-01

    An Ignition and Growth reactive flow model for detonating LX-04 (85% HMX / 15% Viton) was developed using new and previously obtained experimental data on: cylinder test expansion; wave curvature; failure diameter; and laser interferometric copper and tantalum foil free surface velocities and LiF interface particle velocity histories. A reaction product JWL EOS generated by the CHEETAH code compared favorably with the existing, well-normalized LX-04 product JWL when both were used with the Ignition and Growth model. Good agreement with all existing experimental data was obtained. Keywords: LX-04, HMX, detonation, Ignition and Growth. PACS: 82.33.Vx, 82.40.Fp. This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  20. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    The growing complexity of software and the demand for shorter time to market are two important challenges that face today's IT industry. These challenges demand an increase in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.

  1. Machine Learning-based discovery of closures for reduced models of dynamical systems

    Science.gov (United States)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimension of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
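
    A minimal sketch of the trapezoidal-convolution closure idea described above: the closure term is a finite-memory convolution of the resolved state history with a kernel, discretized by the trapezoidal rule. The kernel shape, memory length, and toy dynamics are all assumptions for illustration, not the paper's learned closures:

```python
# Hedged sketch: a memory closure as a trapezoidal-rule convolution of the
# state history with an assumed kernel, driving a toy reduced model.
import numpy as np

dt, n_steps, n_mem = 0.01, 1000, 50            # time step, horizon, memory length
kernel = np.exp(-5.0 * dt * np.arange(n_mem))  # assumed memory kernel K(s)

def closure(history):
    # Trapezoidal approximation of integral_0^T K(s) x(t - s) ds,
    # where history[i] holds x at lag i*dt (history[0] is the newest value)
    w = np.full(n_mem, dt)
    w[0] = w[-1] = dt / 2
    return float(np.sum(w * kernel * history))

x, hist = 1.0, np.zeros(n_mem)
hist[0] = x
for _ in range(n_steps):
    dxdt = -x + closure(hist)                  # reduced model + memory closure
    x += dt * dxdt
    hist = np.roll(hist, 1)                    # shift all lags by one step
    hist[0] = x
print("final state:", x)
```

    The hyperparameters named in the abstract map directly onto this sketch: n_mem fixes the temporal length of the memory effect and the number of sampling points, and the kernel plays the role of the learned closure.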

  2. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner, based on the model-driven architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, is covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior.

  3. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space–time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.

  4. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A unified elasto-viscoplastic constitutive model

    International Nuclear Information System (INIS)

    Chen, Ming-Song; Lin, Y.C.; Li, Kuo-Kuo; Chen, Jian

    2016-01-01

    In the authors' previous work (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0371-6, 2016), the nonlinear unloading behavior of a typical Ni-based superalloy was investigated by hot compressive experiments with intermediate unloading-reloading cycles. The characteristics of the unloading curves were discussed in detail, and a new elasto-viscoplastic constitutive model was proposed to describe the nonlinear unloading behavior of the studied Ni-based superalloy. Still, the functional relationships between the deformation temperature, strain rate, pre-strain and the parameters of the proposed constitutive model needed to be established. In this study, the effects of deformation temperature, strain rate and pre-strain on the parameters of the new constitutive model proposed in the authors' previous work (Chen et al. 2016) are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate and pre-strain. (orig.)

  5. Consumer Decision-Making Styles Extension to Trust-Based Product Comparison Site Usage Model

    Directory of Open Access Journals (Sweden)

    Radoslaw Macik

    2016-09-01

    The paper describes an implementation of the extended consumer decision-making styles concept in explaining consumer choices made in a product comparison site environment, in the context of a trust-based information technology acceptance model. Previous research proved that the trust-based acceptance model is useful in explaining purchase intention and anticipated satisfaction in the product comparison site environment, as an example of online decision shopping aids. Trust in such aids is important in explaining their usage by consumers. The connections between consumer decision-making styles, usage of product and seller opinions, cognitive and affective trust toward the online product comparison site, and choice outcomes (purchase intention and brand choice) are explored through structural equation models using the PLS-SEM approach, on a sample of 461 young consumers. The research confirmed the validity of the model in explaining product comparison site usage, and some consumer decision-making styles influenced consumers' choices and purchase intention. Usage of product and seller reviews partially mediated these relationships.

  6. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculations would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
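
    A minimal sketch of the hybrid strategy described above: finite-difference Jacobians are computed from a cheap proxy, while parameter upgrades are accepted only if the expensive model confirms improvement. Both model functions below are toy stand-ins for a groundwater simulator and its fitted analytical proxy, not PEST itself:

```python
# Hedged sketch: proxy-assisted Gauss-Newton/Levenberg calibration. The proxy
# supplies derivatives; the "expensive" model vets each candidate upgrade.
import numpy as np

def expensive_model(p):      # stand-in for the long-running simulator
    return np.array([np.exp(-p[0]) + p[1] ** 2, p[0] * p[1]])

def proxy_model(p):          # analytical surrogate fitted to prior runs
    return np.array([1.0 - p[0] + p[1] ** 2, p[0] * p[1]])

obs = np.array([0.9, 0.2])
p = np.array([0.5, 0.5])

for it in range(10):
    r = expensive_model(p) - obs
    # Jacobian populated from the proxy only (cheap finite differences)
    J = np.zeros((2, 2))
    for j in range(2):
        dp = np.zeros(2); dp[j] = 1e-6
        J[:, j] = (proxy_model(p + dp) - proxy_model(p - dp)) / 2e-6
    # Levenberg-Marquardt style step, accepted only if the *expensive*
    # model confirms an improved fit
    step = np.linalg.solve(J.T @ J + 1e-3 * np.eye(2), -J.T @ r)
    if np.linalg.norm(expensive_model(p + step) - obs) < np.linalg.norm(r):
        p = p + step
print("calibrated parameters:", p)
```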

  7. Discrete event model-based simulation for train movement on a single-line railway

    International Nuclear Information System (INIS)

    Xu Xiao-Ming; Li Ke-Ping; Yang Li-Xing

    2014-01-01

    The aim of this paper is to present a discrete event model-based approach to simulate train movement with energy-saving factors considered. We conduct extensive case studies to show the dynamic characteristics of the traffic flow and demonstrate the effectiveness of the proposed approach. The simulation results indicate that the proposed discrete event model-based simulation approach is suitable for characterizing the movements of a group of trains on a single railway line with fewer iterations and less CPU time. Additionally, some other qualitative and quantitative characteristics are investigated. In particular, because of the cumulative influence from the previous trains, the following trains should be accelerated or braked frequently to control the headway distance, leading to more energy consumption. (general)

  8. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of 'decay' as an ad hoc approach to cope with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed-behaviour assumption which is considered the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of estimation precision.
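
    The core HMM machinery behind such a trust model is the forward algorithm, which scores an observed interaction history under a hidden behaviour model. A minimal NumPy sketch with illustrative probabilities (not the paper's), observations coded 1 = satisfactory and 0 = unsatisfactory:

```python
# Hedged sketch: forward-algorithm likelihood of an interaction sequence
# under a two-state hidden behaviour model. All probabilities are invented.
import numpy as np

pi = np.array([0.5, 0.5])            # initial state distribution
A = np.array([[0.9, 0.1],            # transitions between behaviour states
              [0.2, 0.8]])
B = np.array([[0.1, 0.9],            # P(observation | state), columns = obs 0/1
              [0.7, 0.3]])

def forward_likelihood(obs):
    alpha = pi * B[:, obs[0]]        # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

seq = [1, 1, 0, 1, 1, 1]             # mostly satisfactory interactions
print("sequence likelihood:", forward_likelihood(seq))
```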

  9. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    Science.gov (United States)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken under consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model’s computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. The introduction of the edema dynamics in Day30 dosimetry shows a significant global dose overestimation identified on the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy accounting for post-implant dose alterations during the planning procedure.

  10. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  11. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

    In this work, various lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters, offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
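
    As a concrete instance of the equivalent-circuit class recommended above for battery management, the sketch below simulates a first-order Thevenin model (OCV source, series resistance, one RC polarization pair) under constant-current discharge. All parameter values and the OCV curve are illustrative assumptions, not the paper's fitted 20 Ah cell:

```python
# Hedged sketch: first-order Thevenin equivalent circuit of a Li-ion cell,
# integrated with forward Euler. Parameters are illustrative only.
import numpy as np

R0, R1, C1 = 2e-3, 1e-3, 5e3     # ohmic resistance and RC pair (ohm, farad)
Q = 20.0 * 3600                  # 20 Ah capacity in coulombs

def ocv(soc):                    # simple hypothetical OCV-SOC curve
    return 3.0 + 1.2 * soc

dt, soc, v_rc = 1.0, 0.9, 0.0
current = 20.0                   # 1C discharge (A, positive = discharge)
for _ in range(600):             # simulate 10 minutes
    soc -= current * dt / Q                       # coulomb counting
    # RC branch: dv/dt = -v/(R1*C1) + i/C1 (polarization dynamics)
    v_rc += dt * (-v_rc / (R1 * C1) + current / C1)
    v_term = ocv(soc) - v_rc - R0 * current       # terminal voltage
print(f"SOC={soc:.3f}, terminal voltage={v_term:.3f} V")
```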

  12. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time, an approach that can be compared to a series of steps converging to a solution. A block diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real-world sensor data. The output from the simulated digital control system can then be compared to the old analog-based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress is measured in model based design by completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.

  13. The impact of design-based modeling instruction on seventh graders' spatial abilities and model-based argumentation

    Science.gov (United States)

    McConnell, William J.

    Due to the call of current science education reform for the integration of engineering practices within science classrooms, design-based instruction is receiving much attention in science education literature. Although some aspect of modeling is often included in well-known design-based instructional methods, it is not always a primary focus. The purpose of this study was to better understand how design-based instruction with an emphasis on scientific modeling might impact students' spatial abilities and their model-based argumentation abilities. In the following mixed-method multiple case study, seven seventh grade students attending a secular private school in the Mid-Atlantic region of the United States underwent an instructional intervention involving design-based instruction, modeling and argumentation. Through the course of a lesson involving students in exploring the interrelatedness of the environment and an animal's form and function, students created and used multiple forms of expressed models to assist them in model-based scientific argument. Pre/post data were collected through the use of The Purdue Spatial Visualization Test: Rotation, the Mental Rotation Test and interviews. Other data included a spatial activities survey, student artifacts in the form of models, notes, exit tickets, and video recordings of students throughout the intervention. Spatial abilities tests were analyzed using descriptive statistics while students' arguments were analyzed using the Instrument for the Analysis of Scientific Curricular Arguments and a behavior protocol. Models were analyzed using content analysis and interviews and all other data were coded and analyzed for emergent themes. Findings in the area of spatial abilities included increases in spatial reasoning for six out of seven participants, and an immense difference in the spatial challenges encountered by students when using CAD software instead of paper drawings to create models. Students perceived 3D printed

  14. EPR-based material modelling of soils

    Science.gov (United States)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced to modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models, and it is shown that EPR-based models can provide a better prediction of the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within the unified environment of an EPR model) and that they do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified, and as the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of an EPR-based constitutive model is that, as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, so the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.

  15. Issues in practical model-based diagnosis

    NARCIS (Netherlands)

    Bakker, R.R.; Bakker, R.R.; van den Bempt, P.C.A.; van den Bempt, P.C.A.; Mars, Nicolaas; Out, D.-J.; Out, D.J.; van Soest, D.C.; van Soes, D.C.

    1993-01-01

    The model-based diagnosis project at the University of Twente has been directed at improving the practical usefulness of model-based diagnosis. In cooperation with industrial partners, the research addressed the modeling problem and the efficiency problem in model-based reasoning. Main results of

  16. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that the modern network science, mainly driven by game-theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  17. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in a precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
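
    A minimal sketch of the hybrid idea on toy dynamics: a low-order "analytical" propagation leaves a structured residual, which an additive Holt-Winters model (via statsmodels) learns on past epochs and forecasts forward to correct future propagations. The signal, the missing trend, and the train/test split are invented for illustration:

```python
# Hedged sketch: analytical approximation + Holt-Winters residual forecast,
# the hybrid propagation pattern described above, on a toy signal.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

t = np.arange(200, dtype=float)
truth = np.sin(0.2 * t) + 0.002 * t          # "true" orbit coordinate
approx = np.sin(0.2 * t)                     # low-order analytical theory
resid = truth - approx                       # missing dynamics

# Fit Holt-Winters on the first 150 residuals, predict the next 50
fit = ExponentialSmoothing(resid[:150], trend="add").fit()
correction = fit.forecast(50)

hybrid = approx[150:] + correction           # corrected propagation
print("RMSE analytical only:", np.sqrt(np.mean((approx[150:] - truth[150:]) ** 2)))
print("RMSE hybrid:         ", np.sqrt(np.mean((hybrid - truth[150:]) ** 2)))
```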

  18. Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO

    Science.gov (United States)

    Hamad, I. J.; Manuel, L. O.; Aligia, A. A.

    2018-04-01

    Tetragonal CuO (T-CuO) has attracted attention because of its structure, which is similar to that of the cuprates. It has recently been proposed as a compound whose study can put an end to the long debate about the proper microscopic modeling of cuprates. In this work, we rigorously derive an effective one-band generalized t-J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and make an estimate of its parameters, based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped into antiferromagnetically ordered half-filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t-J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.

  19. Application of recently developed elliptic blending based models to separated flows

    International Nuclear Information System (INIS)

    Billard, F.; Revell, A.; Craft, T.

    2012-01-01

    Highlights: ► The study focuses on elliptic blending near-wall models. ► Models are compared on 2- and 3-dimensional separating flows. ► Conclusions are ambiguous on 2-d flows. ► Predictive superiority of Reynolds stress models over eddy viscosity model appear on 3-d flows. - Abstract: This paper considers the application of four Reynolds-Averaged Navier Stokes (RANS) models to a range of progressively complex test cases, exhibiting both 2-d and 3-d flow separation. Two Eddy Viscosity Models (EVM) and two Reynolds Stress Transport Models (RSM) are employed, of which two (one in each category) are based on elliptic blending formulations. By both reviewing the conclusions of previous studies, and from the present calculations, this study aims at gaining more insight into the importance of two modelling features for these flows: the usage of turbulence anisotropy resolving schemes, and the near-wall limiting behaviour. In general the anisotropy and near wall treatment offered by both elliptic blending models is observed to offer some improvement over other models tested, although this is not always the case for the 2-d flows, where (as ever) a single “best candidate” model does not emerge.

  20. Location-based Mobile Relay Selection and Impact of Inaccurate Path Loss Model Parameters

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2010-01-01

    In this paper we propose a relay selection scheme which uses collected location information together with a path loss model for relay selection, and analyze the performance impact of mobility and different error causes on this scheme. Performance is evaluated in terms of bit error rate by simulations. The SNR-measurement-based relay selection scheme proposed previously is unsuitable for use with fast moving users in e.g. vehicular scenarios due to a large signaling overhead. The proposed location-based scheme is shown to work well with fast moving users in these situations due to a lower signaling overhead. As the location-based scheme relies on a path loss model to estimate link qualities and select relays, the sensitivity with respect to inaccurate estimates of the unknown path loss model parameters is investigated. The parameter ranges that result in useful performance were found.
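
    A minimal sketch of location-based relay selection under a log-distance path loss model: link losses are estimated purely from node positions, and the relay minimizing the worse of its two hops is chosen. The loss exponent, reference loss, and geometry are illustrative assumptions, not the paper's parameters:

```python
# Hedged sketch: choose a relay from estimated path losses computed with a
# log-distance model over collected positions. Parameters are illustrative.
import numpy as np

def path_loss_db(p1, p2, pl0=40.0, n_exp=3.5, d0=1.0):
    # Log-distance model: PL(d) = PL(d0) + 10 * n * log10(d / d0)
    d = max(np.linalg.norm(np.asarray(p1) - np.asarray(p2)), d0)
    return pl0 + 10 * n_exp * np.log10(d / d0)

source, dest = (0.0, 0.0), (100.0, 0.0)
relays = {"r1": (50.0, 10.0), "r2": (30.0, 40.0), "r3": (70.0, -5.0)}

# Pick the relay whose worse hop (source-relay or relay-destination)
# has the lowest estimated path loss
best = min(
    relays,
    key=lambda r: max(path_loss_db(source, relays[r]),
                      path_loss_db(relays[r], dest)),
)
print("selected relay:", best)
```

    Because only positions are exchanged, the signaling cost of this scheme is independent of how fast the channel fades, which is the advantage over per-link SNR measurement that the abstract highlights; the trade-off is its sensitivity to errors in pl0 and the loss exponent.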

  1. A voxel-based finite element model for the prediction of bladder deformation

    Energy Technology Data Exchange (ETDEWEB)

    Xiangfei, Chai; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan [Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands); Radiation Oncology Department, Netherlands Cancer Institute, 1066 CX Amsterdam (Netherlands); Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands)

    2012-01-15

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  2. A voxel-based finite element model for the prediction of bladder deformation

    International Nuclear Information System (INIS)

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan

    2012-01-01

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  3. Improved high-frequency equivalent circuit model based on distributed effects for SiGe HBTs with CBE layout

    International Nuclear Information System (INIS)

    Sun Ya-Bin; Li Xiao-Jin; Zhang Jin-Zhong; Shi Yan-Ling

    2017-01-01

    In this paper, we present an improved high-frequency equivalent circuit for SiGe heterojunction bipolar transistors (HBTs) with a CBE layout, in which we consider the distributed effects along the base region. The actual device structure is divided into three parts: a link base region under the spacer oxide, an intrinsic transistor region under the emitter window, and an extrinsic base region. Each region is considered as a two-port network composed of a distributed resistance and capacitance. We obtain the admittance parameters by solving the transmission-line equation, and then derive the small-signal equivalent circuit under reasonable approximations. Unlike previous compact models, our proposed model introduces an additional internal base node, and the intrinsic base resistance is shifted to this internal base node, which can theoretically explain the anomalous change in the intrinsic bias-dependent collector resistance in the conventional compact model. (paper)

  4. The Model of Lake Operation in Water Transfer Projects Based on the Theory of Water-right

    Science.gov (United States)

    Bi-peng, Yan; Chao, Liu; Fang-ping, Tang

    Lake operation is a very important component of water transfer projects, yet previous studies have not related it to water rights and water prices. In this paper, water rights are divided into three parts: initial water rights, water rights acquired through investment, and water rights redistributed by the government. A water-right distribution model is built. After analyzing the costs in a water transfer project, a model and computation method for the capacity price as well as the quantity price are proposed. The model of lake operation in water transfer projects based on the theory of water rights is also built. Simulated regulation of the lake was carried out using historical data and genetic algorithms, and a water supply and impoundment control line for the lake is proposed. The results can be used by the South-to-North Water Transfer Projects.

  5. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, thereby regulating steady-state and feedback behaviour.

  6. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    Science.gov (United States)

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive values. The aim was to develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression, and significant variables were carried into the multivariable models. A demographic model which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and an NRI of 60.7%. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
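
    A minimal sketch of a laboratory-based risk model in this spirit: logistic regression on routine blood-count features with AUC as the performance metric, on synthetic data. Feature effects and magnitudes are invented and do not reflect the study's findings:

```python
# Hedged sketch: a lab-values risk model evaluated by AUC, mirroring the
# modeling pattern above on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hematocrit": rng.normal(42, 4, n),
    "mcv": rng.normal(90, 6, n),
    "lymphocytes": rng.normal(2.0, 0.6, n),
    "nlr": rng.lognormal(0.7, 0.4, n),   # neutrophil-lymphocyte ratio
})
# Synthetic outcome: lower hematocrit and higher NLR raise risk
logit = -4.0 - 0.10 * (df.hematocrit - 42) + 0.8 * np.log(df.nlr)
df["crc"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(
    df.drop(columns="crc"), df.crc, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```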

  7. Previous exercise training has a beneficial effect on renal and cardiovascular function in a model of diabetes.

    Directory of Open Access Journals (Sweden)

    Kleiton Augusto dos Santos Silva

    Exercise training (ET) is an important intervention for chronic diseases such as diabetes mellitus (DM). However, it is not known whether previous exercise training intervention alters the physiological and medical complications of these diseases. We investigated the effects of previous ET on the progression of renal disease and cardiovascular autonomic control in rats with streptozotocin (STZ)-induced DM. Male Wistar rats were divided into five groups. All groups were followed for 15 weeks. Trained control and trained diabetic rats underwent 10 weeks of exercise training, whereas previously trained diabetic rats underwent 14 weeks of exercise training. Renal function, proteinuria, renal sympathetic nerve activity (RSNA), echocardiographic parameters, autonomic modulation and baroreflex sensitivity (BRS) were evaluated. In the previously trained group, the urinary albumin/creatinine ratio was reduced compared with the sedentary diabetic and trained diabetic groups (p<0.05). Additionally, RSNA was normalized in the trained diabetic and previously trained diabetic animals (p<0.05). The ejection fraction was increased in the previously trained diabetic animals compared with the diabetic and trained diabetic groups (p<0.05), and the myocardial performance index was improved in the previously trained diabetic group compared with the diabetic and trained diabetic groups (p<0.05). In addition, the previously trained rats had improved heart rate variability and BRS in the tachycardic response and bradycardic response in relation to the diabetic group (p<0.05). This study demonstrates that previous ET attenuates the functional damage associated with DM. Additionally, our findings suggest that the development of renal and cardiac dysfunction can be minimized by 4 weeks of ET before the induction of DM by STZ.

  8. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  9. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography comes closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
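
    A minimal numpy sketch of the contrast between piecewise-constant model assignment and a near-continuum of interpolated models across the field; the anchor positions and parameter values are hypothetical, and real DMG interpolates full mask and optical models rather than a single scalar.

        import numpy as np

        # Hypothetical calibrated model parameter (e.g., an aberration term)
        # at five x-positions across the field (mm).
        anchors_x = np.array([0.0, 6.5, 13.0, 19.5, 26.0])
        anchors_p = np.array([0.00, 0.02, 0.05, 0.03, 0.01])

        def piecewise_constant(x):
            """Static-model assignment: the nearest calibrated model wins."""
            return float(anchors_p[np.argmin(np.abs(anchors_x - x))])

        def dynamic(x):
            """DMG-like near-continuum of models, approximated here by
            linear interpolation between the calibrated anchors."""
            return float(np.interp(x, anchors_x, anchors_p))

        # Two neighbouring simulation regions straddling a model boundary:
        # the static assignment jumps, the interpolated model stays smooth.
        for x in (3.2, 3.3):
            print(x, round(piecewise_constant(x), 4), round(dynamic(x), 4))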

  10. 78 FR 47546 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2013-08-06

    ... Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft... Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Model... Aviation Authority of Israel (CAAI), which is the aviation authority for Israel, has issued Israeli...

  11. Sentence‐Chain Based Seq2seq Model for Corpus Expansion

    Directory of Open Access Journals (Sweden)

    Euisok Chung

    2017-08-01

    This study focuses on a method for sequential data augmentation in order to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied for addressing language generation issues; it has the ability to generate new sentences from given input sentences. We present a method of corpus expansion using a sentence‐chain based seq2seq model. For training the seq2seq model, sentence chains are used as triples. The first two sentences in a triple are used for the encoder of the seq2seq model, while the last sentence becomes a target sequence for the decoder. Using only internal resources, evaluation results show an improvement of approximately 7.6% relative perplexity over a baseline language model of Korean text. Additionally, from a comparison with a previous study, the sentence chain approach reduces the size of the training data by 38.4% while generating 1.4‐times the number of n‐grams with superior performance for English text.
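
    The triple construction described above is easy to sketch; the following Python fragment builds (encoder-input, decoder-target) pairs from consecutive sentence chains. The toy corpus and function name are illustrative, not the authors' code.

        # Build (encoder, decoder) training pairs from consecutive sentence
        # chains: the first two sentences form the encoder input, the third
        # is the decoder target.
        def sentence_chain_pairs(sentences):
            pairs = []
            for i in range(len(sentences) - 2):
                source = sentences[i] + " " + sentences[i + 1]
                target = sentences[i + 2]
                pairs.append((source, target))
            return pairs

        corpus = [
            "the model reads two sentences .",
            "it learns their sequential relation .",
            "then it generates a third sentence .",
            "generated sentences expand the corpus .",
        ]
        for src, tgt in sentence_chain_pairs(corpus):
            print(f"encoder: {src!r} -> decoder target: {tgt!r}")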

  12. Discrete Event System Based Pyroprocessing Modeling and Simulation: Oxide Reduction

    International Nuclear Information System (INIS)

    Lee, H. J.; Ko, W. I.; Choi, S. Y.; Kim, S. K.; Hur, J. M.; Choi, E. Y.; Im, H. S.; Park, K. I.; Kim, I. T.

    2014-01-01

    Dynamic changes according to the batch operation cannot be predicted in an equilibrium material flow. This study began to build a dynamic material balance model based on the previously developed pyroprocessing flowsheet. As a mid- and long-term research effort, an integrated pyroprocessing simulator is being developed at the Korea Atomic Energy Research Institute (KAERI) to cope with a review of the technical feasibility, safeguards assessment, conceptual design of the facility, and economic feasibility evaluation. The most fundamental thing in such a simulator development is to establish the dynamic material flow framework. This study focused on the operation modeling of pyroprocessing to implement a dynamic material flow. As a case study, oxide reduction was investigated in terms of a dynamic material flow. DES-based modeling was applied to build a pyroprocessing operation model. A dynamic material flow as the basic framework for an integrated pyroprocessing simulator was successfully implemented through ExtendSim's internal database and item blocks. Complex operation logic behavior was verified, for example, for an oxide reduction process in terms of dynamic material flow. Compared to the equilibrium material flow, a model-based dynamic material flow provides such detailed information that a careful analysis of every batch is necessary to confirm the dynamic material balance results. With the default scenario of oxide reduction, the batch mass balance was verified in comparison with a one-year equilibrium mass balance. This study is still in progress, with a mid- and long-term goal: the development of a multi-purpose pyroprocessing simulator that is able to cope with safeguards assessment, economic feasibility, technical evaluation, conceptual design, and support of licensing for a future pyroprocessing facility.
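
    As a rough analogue of DES-based operation modeling (the study itself uses ExtendSim), the following standard-library Python sketch runs a discrete-event loop in which batches queue for a single oxide reduction unit and a cumulative batch-level mass balance is tracked; batch sizes and times are invented.

        import heapq

        # Toy event calendar: (time_h, kind, batch_id); batches of oxide feed
        # arrive every 12 h, and one reduction unit processes 20 h per batch.
        BATCH_KG, PROCESS_H, N_BATCHES = 50.0, 20.0, 8
        events = [(12.0 * i, "arrive", i) for i in range(N_BATCHES)]
        heapq.heapify(events)

        unit_free_at, reduced_kg = 0.0, 0.0
        while events:
            t, kind, batch = heapq.heappop(events)
            if kind == "arrive":
                start = max(t, unit_free_at)      # queue if the unit is busy
                unit_free_at = start + PROCESS_H
                heapq.heappush(events, (unit_free_at, "done", batch))
            else:
                reduced_kg += BATCH_KG            # batch-level mass balance
                print(f"t={t:6.1f} h  batch {batch} reduced, cumulative {reduced_kg:.0f} kg")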

  13. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  14. A density model based on the Modified Quasichemical Model and applied to the (NaCl + KCl + ZnCl2) liquid

    International Nuclear Information System (INIS)

    Ouzilleau, Philippe; Robelin, Christian; Chartrand, Patrice

    2012-01-01

    Highlights: A model for the density of multicomponent inorganic liquids. The density model is based on the Modified Quasichemical Model. Application to the (NaCl + KCl + ZnCl2) ternary liquid. A Kohler–Toop-like asymmetric interpolation method was used. Abstract: A theoretical model for the density of multicomponent inorganic liquids based on the Modified Quasichemical Model has been presented previously. By introducing into the Gibbs free energy of the liquid phase temperature-dependent molar volume expressions for the pure components and pressure-dependent excess parameters for the binary (and sometimes higher-order) interactions, it is possible to reproduce, and eventually predict, the molar volume and the density of the multicomponent liquid phase using standard interpolation methods. In the present article, this density model is applied to the (NaCl + KCl + ZnCl2) ternary liquid and a Kohler–Toop-like asymmetric interpolation method is used. All available density data for the (NaCl + KCl + ZnCl2) liquid were collected and critically evaluated, and optimized pressure-dependent model parameters have been found. This new volumetric model can be used with Gibbs free energy minimization software to calculate the molar volume and the density of (NaCl + KCl + ZnCl2) ternary melts.

  15. Using model-based screening to help discover unknown environmental contaminants.

    Science.gov (United States)

    McLachlan, Michael S; Kierkegaard, Amelie; Radke, Michael; Sobek, Anna; Malmvärn, Anna; Alsberg, Tomas; Arnot, Jon A; Brown, Trevor N; Wania, Frank; Breivik, Knut; Xu, Shihe

    2014-07-01

    Of the tens of thousands of chemicals in use, only a small fraction have been analyzed in environmental samples. To effectively identify environmental contaminants, methods to prioritize chemicals for analytical method development are required. We used a high-throughput model of chemical emissions, fate, and bioaccumulation to identify chemicals likely to have high concentrations in specific environmental media, and we prioritized these for target analysis. This model-based screening was applied to 215 organosilicon chemicals culled from industrial chemical production statistics. The model-based screening prioritized several recognized organosilicon contaminants and generated hypotheses leading to the selection of three chemicals that have not previously been identified as potential environmental contaminants for target analysis. Trace analytical methods were developed, and the chemicals were analyzed in air, sewage sludge, and sediment. All three substances were found to be environmental contaminants. Phenyl-tris(trimethylsiloxy)silane was present in all samples analyzed, with concentrations of ∼50 pg m(-3) in Stockholm air and ∼0.5 ng g(-1) dw in sediment from the Stockholm archipelago. Tris(trifluoropropyl)trimethyl-cyclotrisiloxane and tetrakis(trifluoropropyl)tetramethyl-cyclotetrasiloxane were found in sediments from Lake Mjøsa at ∼1 ng g(-1) dw. The discovery of three novel environmental contaminants shows that models can be useful for prioritizing chemicals for exploratory assessment.

  16. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    Science.gov (United States)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME for developing a coupled optimal moment-POME-copula framework to model hydrometeorological multivariate events. In this framework, margins (marginals, or marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined, based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and the corresponding copulas reflect a good statistical performance in correlation simulation. Also, the derived copulas, capturing more patterns which traditional correlation coefficients cannot reflect, provide an efficient way in other applied scenarios concerning hydrometeorological multivariate modelling.
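
    A minimal sketch of the margin-to-copula pipeline: under first- and second-moment constraints the POME margin is Gaussian, so the sketch transforms synthetic paired series to uniforms through moment-fitted normal margins and estimates a Gaussian-copula correlation. The data and the copula choice are illustrative; the study also considers higher-order moment constraints and other copula families. Assumes numpy and scipy.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Synthetic paired series standing in for streamflow and water level.
        flow = rng.gamma(3.0, 2.0, 500)
        level = 0.5 * flow + rng.normal(0.0, 1.0, 500)

        # POME margins under mean and variance constraints are Gaussian; adding
        # higher-order moment constraints would yield other maxent families.
        u = stats.norm.cdf(flow, flow.mean(), flow.std())
        v = stats.norm.cdf(level, level.mean(), level.std())

        # Fit a Gaussian copula: correlation of the normal scores of (u, v).
        z = stats.norm.ppf(np.column_stack([u, v]).clip(1e-6, 1 - 1e-6))
        rho = np.corrcoef(z.T)[0, 1]
        print("Gaussian-copula correlation:", round(rho, 3))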

  17. Model-based internal wave processing

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile, etc.) developed from the solution of the associated boundary value problem, as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  18. GIS-Based Planning and Modeling for Renewable Energy: Challenges and Future Research Avenues

    Directory of Open Access Journals (Sweden)

    Bernd Resch

    2014-05-01

    In the face of the broad political call for an “energy turnaround”, we are currently witnessing three essential trends with regard to energy infrastructure planning, energy generation and storage: from planned production towards fluctuating production on the basis of renewable energy sources, from centralized generation towards decentralized generation and from expensive energy carriers towards cost-free energy carriers. These changes necessitate considerable modifications of the energy infrastructure. Even though most of these modifications are inherently motivated by geospatial questions and challenges, the integration of energy system models and Geographic Information Systems (GIS) is still in its infancy. This paper analyzes the shortcomings of previous approaches in using GIS in renewable energy-related projects, extracts distinct challenges from these previous efforts and, finally, defines a set of core future research avenues for GIS-based energy infrastructure planning with a focus on the use of renewable energy. These future research avenues comprise the availability of base data and their “geospatial awareness”, the development of a generic and unified data model, the usage of volunteered geographic information (VGI) and crowdsourced data in analysis processes, the integration of 3D building models and 3D data analysis, the incorporation of network topologies into GIS, the harmonization of the heterogeneous views on aggregation issues in the fields of energy and GIS, fine-grained energy demand estimation from freely-available data sources, decentralized storage facility planning, the investigation of GIS-based public participation mechanisms, the transition from purely structural to operational planning, data privacy aspects and, finally, the development of a new dynamic power market design.

  19. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow; then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate, in wrapper files, the model back into Simulink S-functions and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S-function representation.

  20. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on neural networks and a fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using the complex numerical forecasting model that would occupy large computation resources, be time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  1. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  2. 77 FR 44113 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2012-07-27

    ... Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft... Aerospace LP (Type Certificate previously held by Israel Aircraft Industries, Ltd.) Model Gulfstream G150... to the manufacturer. This action was prompted by a report from the Civil Aviation Authority of Israel...

  3. 77 FR 58323 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2012-09-20

    ... Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Airplanes AGENCY... Previously Held by Israel Aircraft Industries, Ltd.) Model Gulfstream G150 airplanes. This proposed AD was.... Discussion The Civil Aviation Authority of Israel (CAAI), which is the aviation authority for Israel, has...

  4. 77 FR 32069 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2012-05-31

    ... Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Airplanes AGENCY... previously held by Israel Aircraft Industries, Ltd.) Model Galaxy and Gulfstream 200 airplanes. This proposed... receive about this proposed AD. Discussion The Civil Aviation Authority of Israel (CAAI), which is the...

  5. An improved data base for the description of dairy cows in the German agricultural emission model GAS-EM

    DEFF Research Database (Denmark)

    Dämmgen, Ulrich; Haenel, Hans-Dieter; Rösemann, Claus

    2010-01-01

    The application of the previously published detailed model describing dairy cow husbandry in the German agricultural emission model requires an extended and improved data base. This concerns animal weights, weight gains, regional feed regimes, feeding requirements and feed properties, as well as ... of animal performance. The knowledge of hitherto unpublished data allows for a recalculation and revaluation of nitrogen excretions and ammonia emission factors.

  6. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamical in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures.
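
    For readers unfamiliar with conceptual structures, a one-bucket rainfall-runoff sketch in Python is given below: a soil store produces saturation-excess overland flow and linear-reservoir base flow. All parameters and forcing data are invented, and none of the four model structures studied is reproduced here.

        # One-bucket conceptual rainfall-runoff model: a soil store with
        # saturation-excess overland flow and linear-reservoir base flow.
        def simulate(precip, pet, capacity=100.0, k_base=0.05):
            store, runoff = 40.0, []
            for p, e in zip(precip, pet):
                store = max(store + p - e, 0.0)
                overland = max(store - capacity, 0.0)   # saturation excess
                store = min(store, capacity)
                base = k_base * store                   # linear base flow
                store -= base
                runoff.append(overland + base)
            return runoff

        rain = [0, 12, 30, 5, 0, 0, 18, 60, 2, 0]
        pet = [2] * len(rain)
        print([round(q, 1) for q in simulate(rain, pet)])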

  7. Advanced model for expansion of natural gas distribution networks based on geographic information systems

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Rosado, I.J.; Fernandez-Jimenez, L.A.; Garcia-Garrido, E.; Zorzano-Santamaria, P.; Zorzano-Alba, E. [La Rioja Univ., La Rioja (Spain). Dept. of Electrical Engineering; Miranda, V.; Montneiro, C. [Porto Univ., Porto (Portugal). Faculty of Engineering]|[Inst. de Engenharia de Sistemas e Computadores do Porto, Porto (Portugal)

    2005-07-01

    An advanced geographic information system (GIS) model of natural gas distribution networks was presented. The raster-based model was developed to evaluate costs associated with the expansion of electrical networks due to increased demand in the La Rioja region of Spain. The model was also used to evaluate costs associated with maintenance and amortization of the already existing distribution network. Expansion costs of the distribution network were modelled in various demand scenarios. The model also considered a variety of technical factors associated with pipeline length and topography. Soil and slope data from previous pipeline projects were used to estimate real costs per unit length of pipeline. It was concluded that results obtained by the model will be used by planners to select zones where expansion is economically feasible. 4 refs., 5 figs.

  8. Comparison of sensorless dimming control based on building modeling and solar power generation

    International Nuclear Information System (INIS)

    Lee, Naeun; Kim, Jonghun; Jang, Cheolyong; Sung, Yoondong; Jeong, Hakgeun

    2015-01-01

    Artificial lighting in office buildings accounts for about 30% of total building energy consumption. Lighting energy is important to reduce building energy consumption since artificial lighting typically has a relatively large energy conversion factor. Therefore, previous studies have proposed dimming control using daylight. When dimming control is applied, a method based on building modeling does not need illuminance sensors. Thus, it can be applied to existing buildings that do not have illuminance sensors. However, this method does not accurately reflect real-time weather conditions. On the other hand, solar power generation from a PV (photovoltaic) panel reflects real-time weather conditions. Using the PV panel as the sensor improves the accuracy of dimming control by reflecting disturbances. Therefore, we compared and analyzed two types of sensorless dimming control: that based on building modeling and that based on solar power generation using PV panels. In terms of energy savings, we found that dimming control based on building modeling is more effective than that based on solar power generation, by about 6%. However, dimming control based on solar power generation minimizes the inconvenience to occupants and can also react to changes in solar radiation entering the building caused by dirty windows. - Highlights: • We conducted sensorless dimming control based on solar power generation. • Dimming controls using building modeling and solar power generation were compared. • The real-time weather conditions can be considered by using solar power generation. • Dimming control using solar power generation minimizes inconvenience to occupants
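
    A toy sketch of the two sensorless strategies: artificial light is dimmed toward a workplane setpoint using either a static building-model daylight estimate or the same estimate corrected by a PV-output proxy for actual sky conditions. The setpoint, lux values, and correction rule are invented for illustration.

        def dim_level(daylight_lux, setpoint_lux=500.0):
            """Fraction of full artificial light needed to top daylight up
            to the workplane setpoint (0 = off, 1 = full output)."""
            return max(0.0, min(1.0, 1.0 - daylight_lux / setpoint_lux))

        # Building-model estimate vs a PV-panel proxy for the same hour: the
        # PV signal reflects the actual (cloudy) sky, the static model does not.
        modelled_lux = 420.0                     # clear-sky building-model estimate
        pv_fraction = 0.55                       # measured PV output / clear-sky output
        sensed_lux = modelled_lux * pv_fraction  # daylight corrected by PV proxy

        print("model-based dimming:", round(dim_level(modelled_lux), 2))
        print("PV-corrected dimming:", round(dim_level(sensed_lux), 2))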

  9. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  10. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  11. An event-based model for contracts

    Directory of Open Access Journals (Sweden)

    Tiziana Cimoli

    2013-02-01

    We introduce a basic model for contracts. Our model extends event structures with a new relation, which faithfully captures the circular dependencies among contract clauses. We establish whether an agreement exists which respects all the contracts at hand (i.e., all the dependencies can be resolved), and we detect the obligations of each participant. The main technical contribution is a correspondence between our model and a fragment of the contract logic PCL. More precisely, we show that the reachable events are exactly those which correspond to provable atoms in the logic. Despite this strong correspondence, our model improves previous work on PCL by exhibiting a finer-grained notion of culpability, which takes into account the legitimate orderings of events.

  12. Metoprolol Dose Equivalence in Adult Men and Women Based on Gender Differences: Pharmacokinetic Modeling and Simulations

    Directory of Open Access Journals (Sweden)

    Andy R. Eugene

    2016-11-01

    Recent meta-analyses and publications over the past 15 years have provided evidence showing there are considerable gender differences in the pharmacokinetics of metoprolol. Throughout this time, there have not been any research articles proposing a gender-stratified dose adjustment resulting in an equivalent total drug exposure. Metoprolol pharmacokinetic data were obtained from a previous publication. Data were modeled using nonlinear mixed effect modeling with the MONOLIX software package to quantify metoprolol concentration–time data. Gender-stratified dosing simulations were conducted to identify equivalent total drug exposure based on a 100 mg dose in adults. Based on the pharmacokinetic modeling and simulations, a 50 mg dose in adult women provides an approximately similar metoprolol drug exposure to a 100 mg dose in adult men.
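
    A rough one-compartment oral PK sketch (not the MONOLIX mixed-effects model of the study) illustrating why halving the dose can equalize exposure: with female clearance assumed at roughly half of male clearance, AUC = F·D/CL matches. All parameter values below are placeholders, not fitted estimates.

        import numpy as np

        def conc(t, dose_mg, cl_l_h, v_l=300.0, ka=1.3):
            """One-compartment oral model (F assumed 1):
            C(t) = D*ka/(V*(ka-ke)) * (exp(-ke*t) - exp(-ka*t))."""
            ke = cl_l_h / v_l
            return dose_mg * ka / (v_l * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        t = np.linspace(0, 48, 481)
        men = conc(t, dose_mg=100, cl_l_h=60.0)    # assumed male clearance
        women = conc(t, dose_mg=50, cl_l_h=30.0)   # assumed ~half clearance in women

        def auc(y):
            # trapezoidal area under the concentration-time curve
            return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

        print("AUC men, 100 mg :", round(auc(men), 2))    # ~ D/CL = 1.67 mg*h/L
        print("AUC women, 50 mg:", round(auc(women), 2))  # approximately equal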

  13. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
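
    Since the abstract notes that histogram equalization is a special case of the proposed algorithm, a sketch of that special case is given below (the Heinemann-based mapping itself is not reproduced); assumes numpy, with an invented test image.

        import numpy as np

        def equalize(image, levels=256):
            """Classical histogram equalization: map each gray level through
            the normalized cumulative histogram (the special case noted above)."""
            hist = np.bincount(image.ravel(), minlength=levels)
            cdf = hist.cumsum() / image.size
            lut = np.round((levels - 1) * cdf).astype(np.uint8)
            return lut[image]

        rng = np.random.default_rng(0)
        img = rng.normal(100, 15, (64, 64)).clip(0, 255).astype(np.uint8)
        eq = equalize(img)
        print("input range:", img.min(), img.max(), "-> output range:", eq.min(), eq.max())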

  14. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following the abdominal surgery, extensive adhesions often occur and they can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered to be a contraindication for laparoscopy. The aim of this study is to present that an insertion of Veres needle in the region of umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients that previously underwent one or two laparotomies. Pathology of digestive system, genital organs, Cesarean Section or abdominal war injuries were the most common causes of previouslaparotomy. During those operations or during entering into abdominal cavity we have not experienced any complications, while in 7 patients we performed conversion to laparotomy following the diagnostic laparoscopy. In all patients an insertion of Veres needle and trocar insertion in the umbilical region was performed, namely a technique of closed laparoscopy. Not even in one patient adhesions in the region of umbilicus were found, and no abdominal organs were injured.

  15. Process-based interpretation of conceptual hydrological model performance using a multinational catchment set

    Science.gov (United States)

    Poncelet, Carine; Merz, Ralf; Merz, Bruno; Parajka, Juraj; Oudin, Ludovic; Andréassian, Vazken; Perrin, Charles

    2017-08-01

    Most previous assessments of hydrologic model performance are fragmented, based on a small number of catchments, different methods or time periods, and do not link the results to landscape or climate characteristics. This study uses large-sample hydrology to identify major catchment controls on daily runoff simulations. It is based on a conceptual lumped hydrological model (GR6J), a collection of 29 catchment characteristics, a multinational set of 1103 catchments located in Austria, France, and Germany and four runoff model efficiency criteria. Two analyses are conducted to assess how features and criteria are linked: (i) a one-dimensional analysis based on the Kruskal-Wallis test and (ii) a multidimensional analysis based on regression trees and investigating the interplay between features. The catchment features most affecting model performance are the flashiness of precipitation and streamflow (computed as the ratio of absolute day-to-day fluctuations to the total amount in a year), the seasonality of evaporation, the catchment area, and the catchment aridity. Nonflashy, nonseasonal, large, and nonarid catchments show the best performance for all the tested criteria. We argue that this higher performance is due to fewer nonlinear responses (higher correlation between precipitation and streamflow) and lower input and output variability for such catchments. Finally, we show that, compared to national sets, multinational sets increase results transferability because they explore a wider range of hydroclimatic conditions.
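
    The flashiness feature defined above is straightforward to compute; a short Python sketch follows, using invented daily series (this ratio corresponds to the Richards-Baker flashiness index).

        def flashiness(series):
            """Ratio of absolute day-to-day fluctuations to the total
            amount over the period, as defined in the abstract."""
            changes = sum(abs(b - a) for a, b in zip(series, series[1:]))
            return changes / sum(series)

        steady = [10, 11, 10, 12, 11, 10, 11]
        flashy = [1, 30, 2, 25, 1, 28, 3]
        print(round(flashiness(steady), 3), round(flashiness(flashy), 3))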

  16. Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.

    Science.gov (United States)

    Hu, Pingzhao; Janga, Sarath Chandra; Babu, Mohan; Díaz-Mejía, J Javier; Butland, Gareth; Yang, Wenhong; Pogoutse, Oxana; Guo, Xinghua; Phanse, Sadhna; Wong, Peter; Chandran, Shamanta; Christopoulos, Constantine; Nazarians-Armavil, Anaies; Nasseri, Negin Karimi; Musso, Gabriel; Ali, Mehrab; Nazemof, Nazila; Eroukova, Veronika; Golshani, Ashkan; Paccanaro, Alberto; Greenblatt, Jack F; Moreno-Hagelsieb, Gabriel; Emili, Andrew

    2009-04-28

    One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.

  17. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  18. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.

    Science.gov (United States)

    Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf

    2016-02-01

    Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction errors) and model-based reward-related input. Using the concept of the reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turn to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network, the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based expectations regarding reward into the value signal is further supported by the efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS).
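
    A minimal sketch of the model-free component described above: a temporal-difference value update driven by a scalar reward prediction error on a toy two-armed task. States, probabilities, and the learning rate are illustrative, not from the article.

        import random

        random.seed(3)
        # Model-free temporal-difference learning: values change only when a
        # reward prediction error signals that expectations were violated.
        values = {"left": 0.0, "right": 0.0}
        alpha = 0.1                            # learning rate
        reward_prob = {"left": 0.8, "right": 0.2}

        for trial in range(1000):
            action = random.choice(["left", "right"])
            reward = 1.0 if random.random() < reward_prob[action] else 0.0
            rpe = reward - values[action]      # reward prediction error
            values[action] += alpha * rpe      # no update when rpe == 0

        print({a: round(v, 2) for a, v in values.items()})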

  19. A unified model of time perception accounts for duration-based and beat-based timing mechanisms

    Directory of Open Access Journals (Sweden)

    Sundeep eTeki

    2012-01-01

    Accurate timing is an integral aspect of sensory and motor processes such as the perception of speech and music and the execution of skilled movement. Neuropsychological studies of time perception in patient groups and functional neuroimaging studies of timing in normal participants suggest common neural substrates for perceptual and motor timing. A timing system is implicated in core regions of the motor network such as the cerebellum, inferior olive, basal ganglia, pre-supplementary and supplementary motor area, pre-motor cortex, and higher regions such as the prefrontal cortex. In this article, we assess how distinct parts of the timing system subserve different aspects of perceptual timing. We previously established brain bases for absolute, duration-based timing and relative, beat-based timing in the olivocerebellar and striato-thalamo-cortical circuits, respectively (Teki et al., 2011). However, neurophysiological and neuroanatomical studies provide a basis to suggest that the timing functions of these circuits may not be independent. Here, we propose a unified model of time perception based on coordinated activity in the core striatal and olivocerebellar networks that are interconnected with each other and with the cerebral cortex.

  20. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images fast converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  1. Analysis of Food Hub Commerce and Participation Using Agent-Based Modeling: Integrating Financial and Social Drivers.

    Science.gov (United States)

    Krejci, Caroline C; Stone, Richard T; Dorneich, Michael C; Gilbert, Stephen B

    2016-02-01

    Factors influencing long-term viability of an intermediated regional food supply network (food hub) were modeled using agent-based modeling techniques informed by interview data gathered from food hub participants. Previous analyses of food hub dynamics focused primarily on financial drivers rather than social factors and have not used mathematical models. Based on qualitative and quantitative data gathered from 22 customers and 11 vendors at a midwestern food hub, an agent-based model (ABM) was created with distinct consumer personas characterizing the range of consumer priorities. A comparison study determined if the ABM behaved differently than a model based on traditional economic assumptions. Further simulation studies assessed the effect of changes in parameters, such as producer reliability and the consumer profiles, on long-term food hub sustainability. The persona-based ABM model produced different and more resilient results than the more traditional way of modeling consumers. Reduced producer reliability significantly reduced trade; in some instances, a modest reduction in reliability threatened the sustainability of the system. Finally, a modest increase in price-driven consumers at the outset of the simulation quickly resulted in those consumers becoming a majority of the overall customer base. Results suggest that social factors, such as desire to support the community, can be more important than financial factors. An ABM of food hub dynamics, based on human factors data gathered from the field, can be a useful tool for policy decisions. Similar approaches can be used for modeling customer dynamics with other sustainable organizations. © 2015, Human Factors and Ergonomics Society.

  2. Dynamic Model of Basic Oxygen Steelmaking Process Based on Multizone Reaction Kinetics: Modeling of Decarburization

    Science.gov (United States)

    Rout, Bapin Kumar; Brooks, Geoffrey; Akbar Rhamdhani, M.; Li, Zushu; Schrama, Frank N. H.; Overbosch, Aart

    2018-03-01

    In a previous study by the authors (Rout et al. in Metall Mater Trans B 49:537-557, 2018), a dynamic model for the BOF, employing the concept of multizone kinetics was developed. In the current study, the kinetics of decarburization reaction is investigated. The jet impact and slag-metal emulsion zones were identified to be primary zones for carbon oxidation. The dynamic parameters in the rate equation of decarburization such as residence time of metal drops in the emulsion, interfacial area evolution, initial size, and the effects of surface-active oxides have been included in the kinetic rate equation of the metal droplet. A modified mass-transfer coefficient based on the ideal Langmuir adsorption equilibrium has been proposed to take into account the surface blockage effects of SiO2 and P2O5 in slag on the decarburization kinetics of a metal droplet in the emulsion. Further, a size distribution function has been included in the rate equation to evaluate the effect of droplet size on reaction kinetics. The mathematical simulation indicates that decarburization of the droplet in the emulsion is a strong function of the initial size and residence time. A modified droplet generation rate proposed previously by the authors has been used to estimate the total decarburization rate by slag-metal emulsion. The model's prediction shows that about 76 pct of total carbon is removed by reactions in the emulsion, and the remaining is removed by reactions at the jet impact zone. The predicted bath carbon by the model has been found to be in good agreement with the industrially measured data.
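
    A sketch of the surface-blockage idea: the droplet mass-transfer coefficient is scaled by the uncovered surface fraction from an ideal Langmuir isotherm, k = k0(1 - theta). The functional form follows the abstract, but all constants and activities below are invented, not the study's fitted values.

        def blocked_k(k0, activities, adsorption_k):
            """k = k0 * (1 - theta), with ideal Langmuir coverage
            theta = sum(K_i * a_i) / (1 + sum(K_i * a_i))."""
            s = sum(adsorption_k[ox] * a for ox, a in activities.items())
            theta = s / (1.0 + s)      # surface fraction blocked by SiO2/P2O5
            return k0 * (1.0 - theta)

        k0 = 5.0e-4                    # m/s, unblocked coefficient (assumed)
        adsorption_k = {"SiO2": 40.0, "P2O5": 25.0}      # assumed constants
        early_blow = {"SiO2": 0.05, "P2O5": 0.02}        # high oxide activities
        late_blow = {"SiO2": 0.005, "P2O5": 0.002}
        print(blocked_k(k0, early_blow, adsorption_k))   # strongly blocked
        print(blocked_k(k0, late_blow, adsorption_k))    # nearly unblocked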

  4. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    Science.gov (United States)

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  5. Differences between previously married and never married 'gay' men: family background, childhood experiences and current attitudes.

    Science.gov (United States)

    Higgins, Daryl J

    2004-01-01

    Despite a large body of literature on the development of sexual orientation, little is known about why some gay men have been (or remain) married to a woman. In the current study, a self-selected sample of 43 never married gay men ('never married') and 26 gay men who were married to a woman ('previously married') completed a self-report questionnaire. Hypotheses were based on five possible explanations for gay men's marriages: (a) differences in sexual orientation (i.e., bisexuality); (b) internalized homophobia; (c) religious intolerance; (d) confusion created because of childhood/adolescent sexual experiences; and/or (e) poor psychological adjustment. Previously married men described their families' religious beliefs as more fundamentalist than never married men did. No differences were found between previously married and never married men's ratings of their sexual orientation and identity, or their levels of homophobia and self-depreciation. Family adaptability and family cohesion and the degree to which respondents reported having experienced child maltreatment did not distinguish between previously married and never married men. The results highlight how little is understood of the reasons why gay men marry, and the need to develop an adequate theoretical model.

  6. DISPLACE: a dynamic, individual-based model for spatial fishing planning and effort displacement: Integrating underlying fish population models

    DEFF Research Database (Denmark)

    Bastardie, Francois; Nielsen, J. Rasmus; Miethe, Tanja

    We previously developed a spatially explicit, individual-based model (IBM) evaluating the bio-economic efficiency of fishing vessel movements between regions according to the catching and targeting of different species, based on the most recent high-resolution spatial fishery data. The main purpose was to test the effects of alternative fishing effort allocation scenarios related to fuel consumption, energy efficiency (value per litre of fuel), sustainable fish stock harvesting, and profitability of the fisheries. The assumption here was constant underlying resource availability. Now, an advanced version integrates the underlying fish population models, so that local stock abundance responds to fishing pressure or to the alteration of individual fishing patterns. We demonstrate that integrating the spatial activity of vessels and local fish stock abundance dynamics allows for interactions and more realistic predictions of fishermen behaviour, revenues and stock abundance.

  7. Influence of Previous Knowledge, Language Skills and Domain-specific Interest on Observation Competency

    Science.gov (United States)

    Kohlhauf, Lucia; Rutke, Ulrike; Neuhaus, Birgit

    2011-10-01

    Many epoch-making biological discoveries (e.g. Darwinian Theory) were based upon observations. Nevertheless, observation is often regarded as 'just looking' rather than a basic scientific skill. As observation is one of the main research methods in biological sciences, it must be considered as an independent research method and systematic practice of this method is necessary. Because observation skills form the basis of further scientific methods (e.g. experiments or comparisons) and children from the age of 4 years are able to independently generate questions and hypotheses, it seems possible to foster observation competency at a preschool level. To be able to provide development-adequate individual fostering of this competency, it is first necessary to assess each child's competency. Therefore, drawing on the recent literature, we developed in this study a competency model that was empirically evaluated within learners (N = 110) from different age groups, from kindergarten to university. In addition, we collected data on language skills, domain-specific interest and previous knowledge to analyse coherence between these skills and observation competency. The study showed as expected that previous knowledge had a high impact on observation competency, whereas the influence of domain-specific interest was nonexistent. Language skills were shown to have a weak influence. By utilising the empirically validated model consisting of three dimensions ('Describing', 'Scientific reasoning' and 'Interpreting') and three skill levels, it was possible to assess each child's competency level and to develop and evaluate guided play activities to individually foster a child's observation competency.

  8. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted the observations of previous experiments well. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
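
    The abstract's central step is a revised Monod rate law that multiplies the classic kinetic term by a thermodynamic driving-force factor before handing the resulting uptake rate to FBA. The Python sketch below shows one common form of such a rate law; the function name and every parameter value are illustrative assumptions, not values from the study.

        import numpy as np

        # Sketch of a thermodynamically revised Monod rate law of the kind the
        # abstract describes; all parameter values are illustrative placeholders.
        R = 8.314e-3   # gas constant, kJ/(mol*K)

        def respiration_rate(conc, v_max=1e-13, K_s=5e-4, dG_rxn=-25.0,
                             dG_atp=45.0, m_atp=0.5, chi=2.0, T=310.0):
            """Catabolic rate as (kinetic Monod term) * (thermodynamic factor).

            conc   : substrate (e.g. acetate) concentration, mol/L
            dG_rxn : free energy of the catabolic reaction, kJ/mol
            dG_atp : energy conserved per ATP; m_atp ATP synthesized per reaction
            chi    : average stoichiometric number
            """
            kinetic = conc / (conc + K_s)                  # classic Monod term
            thermo = 1.0 - np.exp((dG_rxn + m_atp * dG_atp) / (chi * R * T))
            return v_max * kinetic * max(thermo, 0.0)      # no reverse flux

        # The resulting per-cell rate would then be imposed on an FBA model
        # (e.g. via the COBRA Toolbox) as a bound on the substrate exchange flux.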

  9. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

    To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count, sodium, potassium, and creatinine levels, prothrombin time, and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by the hospital reference range and done closest to, but before, the time of admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P less than 0.001 for the comparison of the frequency of clinically important abnormalities between patients with normal and those with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.

  10. Parameterizing road construction in route-based road weather models: can ground-penetrating radar provide any answers?

    International Nuclear Information System (INIS)

    Hammond, D S; Chapman, L; Thornes, J E

    2011-01-01

    A ground-penetrating radar (GPR) survey of a 32 km mixed urban and rural study route is undertaken to assess the usefulness of GPR as a tool for parameterizing road construction in a route-based road weather forecast model. It is shown that GPR can easily identify even the smallest of bridges along the route, which previous thermal mapping surveys have identified as thermal singularities with implications for winter road maintenance. Using individual GPR traces measured at each forecast point along the route, an inflexion point detection algorithm attempts to identify the depth of the uppermost subsurface layers at each forecast point for use in a road weather model instead of existing ordinal road-type classifications. This approach has the potential to allow high resolution modelling of road construction and bridge decks on a scale previously not possible within a road weather model, but initial results reveal that significant future research will be required to unlock the full potential that this technology can bring to the road weather industry. (technical design note)
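
    The inflexion-point detection step mentioned in the note can be illustrated with a few lines of signal processing: smooth each trace, then look for sign changes in its second difference. The sketch below is one minimal reading of that idea; the smoothing window, sampling interval and wave velocity are invented for illustration only.

        import numpy as np

        def layer_depths(trace, dt_ns, v_m_per_ns=0.1, window=5):
            """Approximate depths (m) of inflexion points in one GPR trace."""
            kernel = np.ones(window) / window
            smooth = np.convolve(trace, kernel, mode="same")   # suppress noise
            d2 = np.diff(smooth, 2)                            # second difference
            flips = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0] + 1
            two_way_time = flips * dt_ns                       # ns
            return two_way_time * v_m_per_ns / 2.0             # one-way depth

        rng = np.random.default_rng(0)
        demo = np.cumsum(rng.normal(size=512))                 # synthetic trace
        print(layer_depths(demo, dt_ns=0.1)[:5])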

  11. The choices, choosing model of quality of life: linkages to a science base.

    Science.gov (United States)

    Gurland, Barry J; Gurland, Roni V

    2009-01-01

    A previous paper began with a critical review of current models and measures of quality of life and then proposed criteria for judging the relative merits of alternative models: preference was given to finding a model with explicit mechanisms, linkages to a science base, a means of identifying deficits amenable to rational restorative interventions, and with embedded values of the whole person. A conjectured model, based on the processes of accessing choices and choosing among them, matched the proposed criteria. The choices and choosing (c-c) process is an evolved adaptive mechanism dedicated to the pursuit of quality of life, driven by specific biological and psychological systems, and influenced also by social and environmental forces. In this paper the c-c model is examined for its potential to strengthen the science base for the field of quality of life and thus to unify many approaches to concept and measurement. A third paper in this set will lay out a guide to applying the c-c model in evaluating impairments of quality of life and will tie this evaluation to corresponding interventions aimed at relieving restrictions or distortions of the c-c process; thus helping people to preserve and improve their quality of life. The fourth paper will demonstrate empirical analyses of the relationship between health imposed restrictions of options for living and conventional indicators of diminished quality of life. (c) 2008 John Wiley & Sons, Ltd.

  12. Knowledge-Based Environmental Context Modeling

    Science.gov (United States)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning, as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient ...

  13. Mathematical modeling of malaria infection with innate and adaptive immunity in individuals and agent-based communities.

    Science.gov (United States)

    Gurarie, David; Karl, Stephan; Zimmerman, Peter A; King, Charles H; St Pierre, Timothy G; Davis, Timothy M E

    2012-01-01

    Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation) patterns. We developed a new, agent-based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies, and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities.
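
    The in-host component described here (parasite replication checked by fast innate feedback and a slowly accumulating adaptive response) can be caricatured in a few lines. All rates and functional forms below are illustrative placeholders, not the calibrated malaria-therapy parameters of the study.

        import numpy as np

        def simulate_host(days=200, r=10.0, cycle=2, p0=0.01,
                          k_innate=5e-4, k_adapt=5e-6):
            density = p0            # parasites per microlitre
            adaptive = 0.0          # accumulated adaptive response
            out = []
            for day in range(days):
                if day % cycle == 0:                      # 48 h replication cycle
                    density *= r
                innate = 1.0 / (1.0 + k_innate * density)  # density feedback
                density *= innate * np.exp(-adaptive)      # immune killing
                adaptive += k_adapt * density              # antigen-driven buildup
                out.append(density)
            return np.array(out)

        # A community model would run many such hosts, each receiving stochastic
        # inoculations drawn from a field-reported entomological pattern.
        print(simulate_host()[:10])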

  14. Mathematical modeling of malaria infection with innate and adaptive immunity in individuals and agent-based communities.

    Directory of Open Access Journals (Sweden)

    David Gurarie

    Full Text Available BACKGROUND: Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation) patterns. METHODOLOGY/PRINCIPAL FINDINGS: We developed a new, agent-based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies, and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities. CONCLUSIONS/SIGNIFICANCE: Our approach represents a novel, convenient and versatile method to model Plasmodium falciparum infection.

  15. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • A finite Fourier-series model is introduced for evaluating the monthly movement of annual average solar insolation. • A forecast method is presented for predicting this movement, based on the Fourier-series model extended in the least-squares sense. • The movement is shown to be well described by a small number of harmonics, with an approximately 6-term Fourier series. • The movement is predicted best with fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation, and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporated with the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimum Fourier coefficients in the least-squares sense based on its previous monthly movements. The proposed method is applied to experiments and yields satisfying results for different cities (states). The results indicate that the monthly movement of annual average solar insolation is well described by a small number of harmonics, with an approximately 6-term Fourier series, and that the extended Fourier forecasting model predicts the movement best with fewer than 6 Fourier terms.
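
    The extended least-squares Fourier model amounts to fitting the coefficients of a truncated Fourier series by ordinary least squares and then evaluating the fitted series at future times. A minimal sketch follows, with synthetic monthly data and the 6-harmonic, 12-month setting suggested by the abstract; the data values are placeholders.

        import numpy as np

        def fourier_design(t, n_harmonics=6, period=12.0):
            cols = [np.ones_like(t)]
            for k in range(1, n_harmonics + 1):
                cols += [np.cos(2 * np.pi * k * t / period),
                         np.sin(2 * np.pi * k * t / period)]
            return np.column_stack(cols)

        t = np.arange(48.0)                       # 4 years of monthly values
        y = 5 + 2 * np.sin(2 * np.pi * t / 12) + 0.1 * np.random.randn(48)

        # optimum coefficients in the least-squares sense
        coef, *_ = np.linalg.lstsq(fourier_design(t), y, rcond=None)
        forecast = fourier_design(np.arange(48.0, 60.0)) @ coef   # next year
        print(np.round(forecast, 2))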

  16. An acceptance model for smart glasses based tourism augmented reality

    Science.gov (United States)

    Obeidy, Waqas Khalid; Arshad, Haslina; Huang, Jiung Yao

    2017-10-01

    Recent mobile technologies have revolutionized the way people experience their environment. Previous researchers have explored the opportunities of using augmented reality (AR) to enhance user experience, but there is only limited research on users' acceptance of AR in the cultural tourism context. Recent AR research lacks work that integrates dimensions specific to cultural tourism and to the smart-glass context. Hence, this work proposes an AR acceptance model in the context of cultural heritage tourism and smart glasses capable of performing augmented reality. In this paper we present an AR acceptance model to understand the AR usage behavior and visiting intention of tourists who use smart glass based AR at UNESCO cultural heritage destinations in Malaysia. Furthermore, this paper identifies information quality, technology readiness, visual appeal, and facilitating conditions as external variables and key factors influencing visitors' beliefs, attitudes and usage intention.

  17. Markovian Building Blocks for Individual-Based Modelling

    DEFF Research Database (Denmark)

    Nilsson, Lars Anders Fredrik

    2007-01-01

    ... previous exposure to Markov chains in continuous time (see e.g. Grimmett and Stirzaker, 2001). Markovian arrival processes are very general point processes that are relatively easy to analyse. They have, so far, been largely unknown to the ecological modelling community. The article C deals...
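
    For readers unfamiliar with the building block, a Markovian arrival process (MAP) is specified by two matrices: D0 for transitions of the hidden phase that produce no arrival, and D1 for transitions that generate an arrival, with D0 + D1 a valid generator. A minimal simulation sketch (the matrices are invented for illustration):

        import numpy as np

        D0 = np.array([[-3.0, 1.0],
                       [ 0.5, -2.0]])       # hidden-phase transitions
        D1 = np.array([[1.5, 0.5],
                       [0.5, 1.0]])         # arrival-generating transitions

        def simulate_map(D0, D1, n_arrivals, state=0, seed=1):
            rng = np.random.default_rng(seed)
            times, t = [], 0.0
            while len(times) < n_arrivals:
                rate = -D0[state, state]                 # total exit rate
                t += rng.exponential(1.0 / rate)
                # choose among off-diagonal D0 jumps and all D1 jumps
                weights = np.concatenate([D0[state], D1[state]])
                weights[state] = 0.0                     # drop the diagonal
                j = rng.choice(weights.size, p=weights / weights.sum())
                if j >= D0.shape[0]:                     # a D1 jump: arrival
                    times.append(t)
                    state = j - D0.shape[0]
                else:
                    state = j
            return np.array(times)

        print(simulate_map(D0, D1, 5))       # first five arrival times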

  18. AN ADABOOST OPTIMIZED CCFIS BASED CLASSIFICATION MODEL FOR BREAST CANCER DETECTION

    Directory of Open Access Journals (Sweden)

    CHANDRASEKAR RAVI

    2017-06-01

    Full Text Available Classification is a data mining technique used for building a prototype of the data behaviour, using which unseen data can be classified into one of the defined classes. Several researchers have proposed classification techniques, but most of them did not place much emphasis on misclassified instances and storage space. In this paper, a classification model is proposed that takes both into account. The classification model is efficiently developed using a tree structure to reduce the storage complexity, and uses a single scan of the dataset. During the training phase, Class-based Closed Frequent ItemSets (CCFIS) were mined from the training dataset in the form of a tree structure. The classification model has been developed using the CCFIS and a similarity measure based on the Longest Common Subsequence (LCS). Further, the Particle Swarm Optimization algorithm is applied to the generated CCFIS, which assigns weights to the itemsets and their associated classes. Most classifiers correctly classify the common instances but misclassify the rare ones. In view of that, the AdaBoost algorithm has been used to boost the weights of the instances misclassified in the previous round, so as to include them in the training phase and classify the rare instances. This improves the accuracy of the classification model. During the testing phase, the classification model is used to classify the instances of the test dataset. The Breast Cancer dataset from the UCI repository is used for the experiment. Experimental analysis shows that the accuracy of the proposed classification model outperforms the PSOAdaBoost-Sequence classifier by 7% and is superior to other approaches such as the Naïve Bayes, Support Vector Machine, Instance-Based, ID3 and J48 classifiers.
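
    The boosting step the abstract relies on is the standard AdaBoost reweighting: instances misclassified in one round receive larger weights in the next. A minimal sketch follows, with decision stumps standing in for the paper's CCFIS/LCS base classifier, which is not reproduced here.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def adaboost(X, y, rounds=10):          # labels y in {-1, +1}
            n = len(y)
            w = np.full(n, 1.0 / n)             # uniform initial weights
            learners, alphas = [], []
            for _ in range(rounds):
                stump = DecisionTreeClassifier(max_depth=1)
                stump.fit(X, y, sample_weight=w)
                pred = stump.predict(X)
                err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
                alpha = 0.5 * np.log((1 - err) / err)
                w *= np.exp(-alpha * y * pred)  # boost the misclassified
                w /= w.sum()
                learners.append(stump)
                alphas.append(alpha)
            return learners, np.array(alphas)

        def predict(learners, alphas, X):
            votes = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
            return np.sign(votes)               # weighted majority vote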

  19. Categorical QSAR models for skin sensitization based on local lymph node assay measures and both ground and excited state 4D-fingerprint descriptors

    Science.gov (United States)

    Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.

    2008-06-01

    In previous studies we have developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP derived from the ground state (GMAX) structures of the molecules were used to build the QSAR models. In this study we have generated 4D-FP descriptors from the first excited state (EMAX) structures of the molecules. The GMAX, EMAX and the combined ground and excited state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least square coupled logistic regression (PLS-CLR), found to be effective model-building methods for the LLNA skin-sensitization measures in our previous studies, were used again in this study. This also permitted comparison of the prior ground state models to those involving first excited state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state models constructed for each of the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GEMAX categorical models are also more significant and predictive than corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.

  20. Modelling of the acid base properties of two thermophilic bacteria at different growth times

    Science.gov (United States)

    Heinrich, Hannah T. M.; Bremer, Phil J.; McQuillan, A. James; Daughney, Christopher J.

    2008-09-01

    Acid-base titrations and electrophoretic mobility measurements were conducted on the thermophilic bacteria Anoxybacillus flavithermus and Geobacillus stearothermophilus at two different growth times corresponding to exponential and stationary/death phase. The data showed significant differences between the two investigated growth times for both bacterial species. In stationary/death phase samples, cells were disrupted and their buffering capacity was lower than that of exponential phase cells. For G. stearothermophilus the electrophoretic mobility profiles changed dramatically. Chemical equilibrium models were developed to simultaneously describe the data from the titrations and the electrophoretic mobility measurements. A simple approach was developed to determine confidence intervals for the overall variance between the model and the experimental data, in order to identify statistically significant changes in model fit and thereby select the simplest model that was able to adequately describe each data set. Exponential phase cells of the investigated thermophiles had a higher total site concentration than the average found for mesophilic bacteria (based on a previously published generalised model for the acid-base behaviour of mesophiles), whereas the opposite was true for cells in stationary/death phase. The results of this study indicate that growth phase is an important parameter that can affect ion binding by bacteria, that growth phase should be considered when developing or employing chemical models for bacteria-bearing systems.

  1. Evidence-based quantification of uncertainties induced via simulation-based modeling

    International Nuclear Information System (INIS)

    Riley, Matthew E.

    2015-01-01

    The quantification of uncertainties in simulation-based modeling traditionally focuses upon quantifying uncertainties in the parameters input into the model, referred to as parametric uncertainties. Often neglected in such an approach are the uncertainties induced by the modeling process itself. This deficiency is often due to a lack of information regarding the problem or the models considered, which could theoretically be reduced through the introduction of additional data. Because of the nature of this epistemic uncertainty, traditional probabilistic frameworks utilized for the quantification of uncertainties are not necessarily applicable to quantify the uncertainties induced in the modeling process itself. This work develops and utilizes a methodology – incorporating aspects of Dempster–Shafer Theory and Bayesian model averaging – to quantify uncertainties of all forms for simulation-based modeling problems. The approach expands upon classical parametric uncertainty approaches, allowing for the quantification of modeling-induced uncertainties as well, ultimately providing bounds on classical probability without the loss of epistemic generality. The approach is demonstrated on two different simulation-based modeling problems: the computation of the natural frequency of a simple two degree of freedom non-linear spring mass system and the calculation of the flutter velocity coefficient for the AGARD 445.6 wing given a subset of commercially available modeling choices. - Highlights: • Modeling-induced uncertainties are often mishandled or ignored in the literature. • Modeling-induced uncertainties are epistemic in nature. • Probabilistic representations of modeling-induced uncertainties are restrictive. • Evidence theory and Bayesian model averaging are integrated. • Developed approach is applicable for simulation-based modeling problems

  2. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

    Full Text Available The large-scale adoption of electric vehicles (EVs) and hybrid renewable energy systems (HRESs), together with increasing loads, brings significant challenges to the microgrid. A methodology to model microgrids with high EV and HRES penetration is the key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, any single previous modelling approach is insufficient. Therefore in this paper, the methodology named Hierarchical Agent-based Integrated Modelling Approach (HAIMA) is proposed. With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. The HAIMA further links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the assessment of the proposed model and the impact of EV adoption are achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment, and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  3. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe the convex-based void description and quality-based space subdivision. • The results showed improvements provided by CVF in both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems from CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all of the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.

  4. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    Science.gov (United States)

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  5. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    Science.gov (United States)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan areas, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  6. Natural Aggregation Approach based Home Energy Manage System with User Satisfaction Modelling

    Science.gov (United States)

    Luo, F. J.; Ranzi, G.; Dong, Z. Y.; Murata, J.

    2017-07-01

    With the prevalence of advanced sensing and two-way communication technologies, the Home Energy Management System (HEMS) has attracted considerable attention in recent years. This paper proposes a HEMS that optimally schedules controllable Residential Energy Resources (RERs) in a Time-of-Use (TOU) pricing and high solar power penetration environment. The HEMS aims to minimize the overall operational cost of the home, and the user's satisfaction with and requirements on the operation of different household appliances are modelled and considered in the HEMS. Further, a biological self-aggregation intelligence based optimization technique previously proposed by the authors, the Natural Aggregation Algorithm (NAA), is applied to solve the proposed HEMS optimization model. Simulations are conducted to validate the proposed method.

  7. Anatomically based lower limb nerve model for electrical stimulation

    Directory of Open Access Journals (Sweden)

    Soboleva Tanya K

    2007-12-01

    Full Text Available Abstract Background Functional Electrical Stimulation (FES) is a technique that aims to rehabilitate or restore functionality of skeletal muscles using external electrical stimulation. Despite the success achieved within the field of FES, there are still a number of questions that remain unanswered. One way of providing input to the answers is through the use of computational models. Methods This paper describes the development of an anatomically based computer model of the motor neurons in the lower limb of the human leg and shows how it can be used to simulate electrical signal propagation from the beginning of the sciatic nerve to a skeletal muscle. One-dimensional cubic Hermite finite elements were used to represent the major portions of the lower limb nerves. These elements were fit to data that had been digitised using images from the Visible Man project. Nerves smaller than approximately 1 mm could not be seen in the images, and thus a tree-branching algorithm was used to connect the ends of the fitted nerve model to the respective skeletal muscle. To simulate electrical propagation, a previously published mammalian nerve model was implemented and solved on the anatomically based nerve mesh using a finite difference method. The grid points for the finite difference method were derived from the fitted finite element mesh. By adjusting the tree-branching algorithm, it is possible to represent different levels of motor-unit recruitment. Results To illustrate the process of a propagating nerve stimulus to a muscle in detail, the above method was applied to the nerve tree that connects to the human semitendinosus muscle. A conduction velocity of 89.8 m/s was obtained for a 15 μm diameter nerve fibre. This signal was successfully propagated down the motor neurons to a selected group of motor units in the muscle. Conclusion An anatomically and physiologically based model of the posterior motor neurons in the human lower limb was developed ...
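
    The propagation step can be illustrated with a much-simplified stand-in: a passive one-dimensional cable equation solved by explicit finite differences. The study itself used an active mammalian axon model on an anatomically fitted mesh; the constants below are placeholders chosen only to show the numerical scheme.

        import numpy as np

        def cable_step(V, dt, dx, lam=0.1, tau=1.0, V_rest=-80.0):
            """One explicit Euler step of tau*dV/dt = lam^2*d2V/dx2 - (V - V_rest)."""
            d2V = np.zeros_like(V)
            d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
            return V + dt / tau * (lam**2 * d2V - (V - V_rest))

        V = np.full(200, -80.0)        # mV, resting potential along the fibre
        V[:5] = 20.0                   # stimulus at the proximal end
        for _ in range(500):           # march the depolarisation down the cable
            V = cable_step(V, dt=0.001, dx=0.05)
        print(V.max())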

  8. Learning-based stochastic object models for characterizing anatomical variations

    Science.gov (United States)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data from a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.

  9. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. Version management systems (VMS), a branch of software configuration management (SCM), aim to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection and merging. Traditional VMSs are file-based and consider software systems as a set of text files. File-based VMSs are not adequate for performing software configuration management activities, such as version control, on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic model-based VMS framework which can be used to overcome the problems of traditional file-based VMSs and provide model versioning services. (author)

  10. Dementia and well-being: A conceptual framework based on Tom Kitwood's model of needs.

    Science.gov (United States)

    Kaufmann, Elke G; Engel, Sabine A

    2016-07-01

    The topic of well-being is becoming increasingly significant as a key outcome measure in dementia care. Previous work on the personhood of individuals with dementia suggests that their subjective well-being can be described in terms of comfort, inclusion, identity, occupation and attachment. The study aimed to examine Tom Kitwood's model of psychological needs and well-being in dementia based on the self-report of individuals with moderate or severe dementia, and to differentiate and elaborate this model in the light of the empirical qualitative data. Nineteen inhabitants of a special long-term care unit were interviewed using a semi-structured interview. Data were analysed using content analysis. Thirty components within Kitwood's model were identified. A conceptual framework of subjective well-being in dementia was developed based on a theoretical background. The study found indications that Kitwood's model has empirical relevance. Nevertheless, it needs to be extended by the domain of agency. Furthermore, the study suggests that individuals with dementia are important informants about their subjective well-being. © The Author(s) 2014.

  11. Optimal portfolio model based on WVAR

    OpenAIRE

    Hao, Tianyu

    2012-01-01

    This article focuses on using a new measurement of risk -- Weighted Value at Risk (WVAR) -- to develop a new method of constructing an optimal portfolio, starting from the TVAR solving problem. Based on MATLAB software and the results of previous studies, it uses the historical simulation method (thereby avoiding the assumption that the return distribution is normal) to study the U.S. Nasdaq composite index, combining the Simpson formula for the solution of TVAR with a deeper study of it; then, through the representation of WVAR for...
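
    The historical simulation method referred to here replaces a parametric return distribution with the empirical one. A minimal sketch of empirical VaR plus a weighted tail average in the spirit of TVAR/WVAR follows; since the thesis's exact WVAR weighting is not given in the abstract, the linearly decreasing tail weights below are purely an illustrative assumption.

        import numpy as np

        def hist_var(returns, alpha=0.05):
            return -np.quantile(returns, alpha)      # loss at the 5% quantile

        def weighted_tail_var(returns, alpha=0.05):
            losses = -np.sort(returns)               # largest losses first
            k = max(1, int(alpha * len(returns)))
            tail = losses[:k]                        # the alpha-tail of losses
            w = np.arange(k, 0, -1, dtype=float)     # heavier on worse losses
            return float(np.average(tail, weights=w))

        rng = np.random.default_rng(42)
        r = rng.standard_t(df=4, size=2500) * 0.01   # fat-tailed daily returns
        print(hist_var(r), weighted_tail_var(r))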

  12. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model.

    Science.gov (United States)

    Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J

    2015-07-16

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure-function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image-based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age-dependent, and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to avoid misunderstanding of the message, save attention and improve communication among speakers of different native languages. However, despite the benefits that emoticons can provide, research regarding emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  14. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs

  15. Prototype-based Models for the Supervised Learning of Classification Schemes

    Science.gov (United States)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2017-06-01

    An introduction is given to the use of prototype-based models in supervised machine learning. The main concept of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable, discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, standard Euclidean distance is employed as a distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
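
    The basic LVQ scheme mentioned here is only a few lines: the prototype closest to a training sample is attracted to it if the classes match and repelled otherwise. A minimal LVQ1 sketch with plain Euclidean distance follows; the initialisation and the fixed learning rate are simplifications of the schemes the paper discusses.

        import numpy as np

        def train_lvq1(X, y, prototypes_per_class=1, eta=0.05, epochs=30, seed=0):
            rng = np.random.default_rng(seed)
            W, c = [], []
            for cl in np.unique(y):            # init on random class members
                idx = rng.choice(np.where(y == cl)[0], prototypes_per_class)
                W.extend(X[idx]); c.extend([cl] * prototypes_per_class)
            W, c = np.array(W, dtype=float), np.array(c)
            for _ in range(epochs):
                for i in rng.permutation(len(y)):
                    d = np.linalg.norm(W - X[i], axis=1)   # Euclidean distance
                    j = np.argmin(d)                       # winning prototype
                    sign = 1.0 if c[j] == y[i] else -1.0   # attract or repel
                    W[j] += sign * eta * (X[i] - W[j])
            return W, c

        def classify(W, c, X):
            d = np.linalg.norm(W[None] - X[:, None], axis=2)
            return c[np.argmin(d, axis=1)]     # label of nearest prototype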

  16. Model-Based Learning Environment Based on The Concept IPS School-Based Management

    Directory of Open Access Journals (Sweden)

    Hamid Darmadi

    2017-03-01

    Full Text Available The results showed: (1) the environment-oriented IPS learning model can grow and nurture love of the cultural values of the area as a basis for the development of national culture; (2) community participation and the role of government in implementing the environment-based IPS learning model provide a positive impact on the management of school resources; (3) the environment-based IPS learning model effectively creates a way of living together peacefully, increasing the intensity of togetherness and mutual respect; (4) the environment-based IPS learning model can improve student learning outcomes; (5) there are differences in the expression of attitudes and learning outcomes between students who are located in the area of conflict and students who are outside the area of conflict; (6) analysis of attitude scales among high school students shows high regard for the values of unity and nation, respect for diversity and peaceful coexistence. It is recommended that the Department of Education, as the institution responsible for the trusteeship and development of social and cultural values in the province, apply the environment-based IPS learning model.

  17. Model based process-product design and analysis

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    This paper gives a perspective on modelling and the important role it has within product-process design and analysis. Different modelling issues related to the development and application of systematic model-based solution approaches for product-process design are discussed, and the need for a hybrid...... model-based framework is highlighted. This framework should be able to manage knowledge-data, models, and associated methods and tools integrated with design work-flows and data-flows for specific product-process design problems. In particular, the framework needs to manage models of different types......, forms and complexity, together with their associated parameters. An example of a model-based system for the design of chemicals-based formulated products is also given....

  18. Analysis of direct contact membrane distillation based on a lumped-parameter dynamic predictive model

    KAUST Repository

    Karam, Ayman M.; Alsaadi, Ahmad Salem; Ghaffour, NorEddine; Laleg-Kirati, Taous-Meriem

    2016-01-01

    Membrane distillation (MD) is an emerging technology that has a great potential for sustainable water desalination. In order to pave the way for successful commercialization of MD-based water desalination techniques, adequate and accurate dynamical models of the process are essential. This paper presents the predictive capabilities of a lumped-parameter dynamic model for direct contact membrane distillation (DCMD) and discusses the results under a wide range of steady-state and dynamic conditions. Unlike previous studies, the proposed model captures the time response of the spatial temperature distribution along the flow direction. It also directly solves for the local temperatures at the membrane interfaces, which allows it to accurately model and calculate local flux values along with other intrinsic variables of great influence on the process, like the temperature polarization coefficient (TPC). The proposed model is based on energy and mass conservation principles and the analogy between thermal and electrical systems. Experimental data was collected to validate the steady-state and dynamic responses of the model. The obtained results show great agreement with the experimental data. The paper discusses the results of several simulations under various conditions to optimize the DCMD process efficiency and analyze its response. This demonstrates some potential applications of the proposed model to carry out scale-up and design studies. © 2016
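
    The thermal-electrical analogy underlying such lumped-parameter models maps temperatures to voltages, heat flows to currents, and heat-transfer coefficients to resistances. The toy two-node sketch below illustrates the idea for the feed and permeate sides of a membrane; all capacitances, resistances and temperatures are invented, and the published model additionally resolves many cells along the flow direction and couples mass transfer.

        import numpy as np
        from scipy.integrate import solve_ivp

        C_f, C_p = 500.0, 500.0      # thermal capacitance of each node, J/K
        R_m = 0.05                   # membrane thermal "resistance", K/W
        R_f, R_p = 0.02, 0.02        # convective resistances to the inlets, K/W
        T_f_in, T_p_in = 60.0, 20.0  # inlet temperatures, deg C

        def rhs(t, T):
            T_f, T_p = T
            q_m = (T_f - T_p) / R_m                   # heat crossing the membrane
            dT_f = ((T_f_in - T_f) / R_f - q_m) / C_f
            dT_p = ((T_p_in - T_p) / R_p + q_m) / C_p
            return [dT_f, dT_p]

        sol = solve_ivp(rhs, (0.0, 120.0), [20.0, 20.0])
        print(sol.y[:, -1])          # near-steady-state node temperatures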

  20. The impact of previous knee injury on force plate and field-based measures of balance.

    Science.gov (United States)

    Baltich, Jennifer; Whittaker, Jackie; Von Tscharner, Vinzenz; Nettel-Aguirre, Alberto; Nigg, Benno M; Emery, Carolyn

    2015-10-01

    Individuals with post-traumatic osteoarthritis demonstrate increased sway during quiet stance. The prospective association between balance and disease onset is unknown. Improved understanding of balance in the period between joint injury and disease onset could inform secondary prevention strategies to prevent or delay the disease. This study examines the association between youth sport-related knee injury and balance, 3-10 years post-injury. Participants included 50 individuals (ages 15-26 years) with a sport-related intra-articular knee injury sustained 3-10 years previously and 50 uninjured age-, sex- and sport-matched controls. Force-plate measures during single-limb stance (center-of-pressure 95% ellipse-area, path length, excursion, entropic half-life) and field-based balance scores (triple single-leg hop, star-excursion, unipedal dynamic balance) were collected. Descriptive statistics (mean within-pair difference; 95% confidence intervals) were used to compare groups. Linear regression (adjusted for injury history) was used to assess the relationship between ellipse area and field-based scores. Injured participants on average demonstrated greater medio-lateral excursion [mean within-pair difference (95% confidence interval); 2.8 mm (1.0, 4.5)], more regular medio-lateral position [10 ms (2, 18)], and shorter triple single-leg hop distances [-30.9% (-8.1, -53.7)] than controls, while no between-group differences existed for the remaining outcomes. After taking into consideration injury history, triple single-leg hop scores demonstrated a linear association with ellipse area (β=0.52, 95% confidence interval 0.01, 1.01). On average the injured participants adjusted their position less frequently and demonstrated a larger magnitude of movement during single-limb stance compared to controls. These findings support the evaluation of balance outcomes in the period between knee injury and post-traumatic osteoarthritis onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
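
    Of the force-plate measures listed, the 95% ellipse area is the most formula-like: fit a covariance matrix to the centre-of-pressure excursions and scale its eigenvalues by a chi-square quantile. A sketch of that standard computation follows; the study's exact formula may differ, and the sway data below are synthetic.

        import numpy as np
        from scipy.stats import chi2

        def ellipse_area_95(cop_ml, cop_ap):
            """95% confidence ellipse area of the centre-of-pressure cloud."""
            cov = np.cov(np.vstack([cop_ml, cop_ap]))   # 2x2 covariance, mm^2
            eigvals = np.linalg.eigvalsh(cov)           # ellipse axes^2
            scale = chi2.ppf(0.95, df=2)                # ~5.991
            return np.pi * scale * np.sqrt(eigvals[0] * eigvals[1])

        rng = np.random.default_rng(3)
        ml = rng.normal(0, 2.0, 3000)                   # medio-lateral sway, mm
        ap = rng.normal(0, 3.0, 3000)                   # antero-posterior sway, mm
        print(ellipse_area_95(ml, ap))                  # mm^2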

  1. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
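
    The structure-learning step rests on conditional-independence tests between the reduced random variables. One standard choice, a partial-correlation (Fisher z) test, is sketched below; the paper's exact test is not specified in the abstract, so this is an illustrative stand-in.

        import numpy as np
        from scipy.stats import norm

        def ci_test(data, i, j, cond, alpha=0.05):
            """Test X_i independent of X_j given X_cond from samples (n x d)."""
            idx = [i, j] + list(cond)
            P = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
            r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])    # partial correlation
            n = data.shape[0]
            z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
            p_value = 2 * (1 - norm.cdf(abs(z)))
            return p_value > alpha                       # True -> independent

        rng = np.random.default_rng(7)
        x = rng.normal(size=1000)
        y = x + rng.normal(size=1000)
        z = y + rng.normal(size=1000)                    # chain x -> y -> z
        data = np.column_stack([x, y, z])
        print(ci_test(data, 0, 2, cond=[1]))             # x independent of z | y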

  2. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  3. Physics Based Modeling of Compressible Turbulance

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345. Final report by Parviz Moin, Leland Stanford Junior University, CA, 09/13/2016, on the AFOSR project (FA9550-11-1-0111) entitled "Physics based modeling of compressible turbulence". The period of performance was June 15, 2011 ...

  4. An alternate metabolic hypothesis for a binary mixture of trichloroethylene and carbon tetrachloride: application of physiologically based pharmacokinetic (PBPK) modeling in rats.

    Science.gov (United States)

    Carbon tetrachloride (CCl4) and trichloroethylene (TCE) are hepatotoxic volatile organic compounds (VOCs) and environmental contaminants. Previous physiologically based pharmacokinetic (PBPK) models describe the kinetics of individual chemical disposition and metabolic clearance fo...

  5. Modeling Multioperator Multi-UAV Operator Attention Allocation Problem Based on Maximizing the Global Reward

    Directory of Open Access Journals (Sweden)

    Yuhang Wu

    2016-01-01

    Full Text Available This paper focuses on the attention allocation problem (AAP) in modeling multioperator multi-UAV (MOMU) operations, with the operator model and task properties taken into consideration. A model of MOMU operator AAP based on maximizing the global reward is established and used to allocate tasks to all operators and, simultaneously, to set the work time and rest time of each task for the operators. The proposed model is validated in the Matlab simulation environment, using the immune algorithm and a dynamic programming algorithm to evaluate the performance of the model in terms of the reward value with regard to work time, rest time, and task allocation. The results show that the total reward of the proposed model is larger than that obtained from previously published methods using local maximization, and that the total reward of our method has an exponent-like relation with the task arrival rate. The proposed model can improve the operators' task processing efficiency in MOMU command and control scenarios.

  6. A Labeling Model Based on the Region of Movability for Point-Feature Label Placement

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-09-01

    Full Text Available Automatic point-feature label placement (PFLP) is a fundamental task for map visualization. As the dominant solutions to the PFLP problem, fixed-position and slider models have been widely studied in previous research. However, the candidate labels generated with these models are restricted to certain fixed positions or to a specified track line for sliding. Thus, the whole space surrounding a point feature is not sufficiently used for labeling. Hence, this paper proposes a novel label model based on the region of movability, which comes from plane collision detection theory. The model defines a complete conflict-free search space for label placement. On the premise of no conflict with point, line, and area features, the proposed model utilizes the surrounding zone of the point feature to generate candidate label positions. By combining the model with a heuristic search method, high-quality label placement is achieved. In addition, the flexibility of the proposed model enables the placement of arbitrarily shaped labels.

  7. Shape models of asteroids based on lightcurve observations with BlueEye600 robotic observatory

    Science.gov (United States)

    Ďurech, Josef; Hanuš, Josef; Brož, Miroslav; Lehký, Martin; Behrend, Raoul; Antonini, Pierre; Charbonnel, Stephane; Crippa, Roberto; Dubreuil, Pierre; Farroni, Gino; Kober, Gilles; Lopez, Alain; Manzini, Federico; Oey, Julian; Poncy, Raymond; Rinner, Claudine; Roy, René

    2018-04-01

    We present physical models, i.e. convex shapes, directions of the rotation axis, and sidereal rotation periods, of 18 asteroids, of which 10 are new models and 8 are refined models based on much larger data sets than in previous work. The models were reconstructed by the lightcurve inversion method from archived publicly available lightcurves and our new observations with the BlueEye600 robotic observatory. One of the new results is the shape model of asteroid (1663) van den Bos with a rotation period of 749 h, which makes it the slowest rotator with a known shape. We describe our strategy for target selection, which aims at fast production of new models using the enormous potential of the photometry already stored in public databases. We also briefly describe the control software and scheduler of the robotic observatory, and we discuss the importance of building a database of asteroid models for studying asteroid physical properties in collisional families.

  8. A Multi-Agent Traffic Control Model Based on Distributed System

    Directory of Open Access Journals (Sweden)

    Qian WU

    2014-06-01

    Full Text Available With the development of urbanization, urban travel has become a thorny and pressing problem. Previous research on large urban traffic systems easily turns into NP-complete (NPC) problems. We propose a multi-agent inductive control model based on a distributed approach. To describe the real traffic scene, this model designs four different types of intelligent agents, i.e., we regard each lane, route, intersection, and traffic region as a different type of intelligent agent. Each agent can obtain real-time traffic data from its neighbor agents, and decision-making agents establish real-time traffic signal plans through communication between local agents and their neighbor agents. To evaluate the traffic system, this paper takes the average delay, the stopped time, and the average speed as performance parameters. Finally, the distributed multi-agent system is simulated on the VISSIM simulation platform; the simulation results show that the multi-agent system is more effective than the adaptive control system in relieving traffic congestion.

  9. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in another journal paper by the first author, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their ad hoc applications. Formally, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications have, in the main, been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous/formal/comprehensive studies.

  10. Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.

    Directory of Open Access Journals (Sweden)

    Pingzhao Hu

    2009-04-01

    Full Text Available One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.

  11. Mathematical modeling of ethanol production in solid-state fermentation based on solid medium dry weight variation.

    Science.gov (United States)

    Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad

    2018-04-21

    In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) has been done based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model had to be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used to model the growth of the microorganism. In both cases, the model predictions matched well with the experimental results during the exponential growth phase, indicating the suitability of the solid-medium weight-variation method for modeling volatile product formation in solid-state fermentation. In addition, the logistic model gave better predictions.
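
    A minimal sketch of the logistic kinetic component described above, fitted to synthetic dry-weight-derived biomass data with SciPy; all numbers and the function name are illustrative, not taken from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, x0, mu_max, x_max):
        # Closed-form solution of dx/dt = mu_max * x * (1 - x / x_max), x(0) = x0
        return x_max / (1.0 + (x_max / x0 - 1.0) * np.exp(-mu_max * t))

    t = np.linspace(0, 72, 10)                     # fermentation time, h
    x_obs = logistic(t, 0.5, 0.15, 12.0)           # synthetic "observed" biomass, g
    x_obs += np.random.default_rng(0).normal(0.0, 0.2, t.size)

    p_opt, _ = curve_fit(logistic, t, x_obs, p0=[0.3, 0.1, 10.0])
    print("x0 = %.2f g, mu_max = %.3f 1/h, x_max = %.2f g" % tuple(p_opt))
    ```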

  12. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha; Kalogerakis, Evangelos; Guibas, Leonidas; Koltun, Vladlen

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling

  13. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  14. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi

    Science.gov (United States)

    Giguere, Andrew T.; Murthy, Ganti S.; Bottomley, Peter J.; Sayavedra-Soto, Luis A.

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification. PMID:29577088

  15. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi.

    Science.gov (United States)

    Mellbye, Brett L; Giguere, Andrew T; Murthy, Ganti S; Bottomley, Peter J; Sayavedra-Soto, Luis A; Chaplen, Frank W R

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3-), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification.
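
    At its core, constraint-based modeling of the kind used for iFC578 and iFC579 reduces to linear programming over a stoichiometric matrix. A toy flux-balance sketch with SciPy (the three-reaction network below is invented for illustration; the real models are genome-scale):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Rows: metabolites A, B; columns: uptake, A->B, B->biomass
    S = np.array([[ 1, -1,  0],
                  [ 0,  1, -1]])
    bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units
    c = np.array([0, 0, -1])                   # linprog minimizes, so negate biomass

    # Maximize biomass flux subject to steady-state mass balance S·v = 0
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal biomass flux:", -res.fun)   # -> 10.0
    ```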

  16. Image-Based Models for Specularity Propagation in Diminished Reality.

    Science.gov (United States)

    Said, Souheil Hadj; Tamaazousti, Mohamed; Bartoli, Adrien

    2018-07-01

    The aim of Diminished Reality (DR) is to remove a target object in a live video stream seamlessly. In our approach, the area of the target object is replaced with new texture that blends with the rest of the image. The result is then propagated to the next frames of the video. One of the important stages of this technique is updating the target region with respect to illumination changes. This is a complex and recurrent problem when the viewpoint changes. We show that the state of the art in DR fails to solve this problem, even under simple scenarios. We then use local illumination models to address it. According to these models, variation in illumination only affects the specular component of the image. In the context of DR, the problem is therefore solved by propagating the specularities in the target area. We list a set of structural properties of specularities which we incorporate in two new models for specularity propagation. Our first model includes the same property as previous approaches, namely the smoothness of illumination variation, but uses a different estimation method based on the Thin-Plate Spline. Our second model incorporates more properties of the specularity's shape on planar surfaces. Experimental results on synthetic and real data show that our strategy substantially improves rendering quality compared to the state of the art in DR.
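
    The smoothness-of-illumination idea lends itself to a compact sketch: fit a thin-plate spline to sparse specular-intensity samples and evaluate it over the target region. This shows only the interpolation step, not the authors' full propagation pipeline, and all sample data are synthetic:

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(1)
    x, y = rng.uniform(0, 100, 30), rng.uniform(0, 100, 30)   # sample positions
    s = np.exp(-((x - 50)**2 + (y - 50)**2) / 500.0)          # synthetic specular lobe

    tps = Rbf(x, y, s, function="thin_plate")                 # fit the spline
    gx, gy = np.meshgrid(np.arange(100), np.arange(100))
    s_filled = tps(gx.ravel(), gy.ravel()).reshape(100, 100)  # intensity over the region
    ```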

  17. A digital waveguide-based approach for Clavinet modeling and synthesis

    Science.gov (United States)

    Gabrielli, Leonardo; Välimäki, Vesa; Penttinen, Henri; Squartini, Stefano; Bilbao, Stefan

    2013-12-01

    The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and, especially, memory requirements (compared, e.g., to sample playback systems). Pickup and amplifier models have been implemented which enhance the natural character of the sound with respect to previous work. The model has been implemented on a real-time software platform, Pure Data, and is capable of 10-voice polyphony with low latency on an embedded device. Finally, subjective listening tests conducted using the current model are compared to previous tests, showing slightly improved results.
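
    For a flavor of waveguide-style string synthesis (not the paper's Clavinet model, which adds a physically informed excitation plus pickup and amplifier stages), a bare-bones Karplus-Strong loop:

    ```python
    import numpy as np

    def pluck(f0, dur, fs=44100, damping=0.996):
        n = int(fs / f0)                       # delay-line length sets the pitch
        line = np.random.uniform(-1, 1, n)     # noise burst as the excitation
        out = np.empty(int(fs * dur))
        for i in range(out.size):
            out[i] = line[i % n]
            # averaging two adjacent samples acts as the loop's loss filter
            line[i % n] = damping * 0.5 * (line[i % n] + line[(i + 1) % n])
        return out

    tone = pluck(220.0, 1.0)                   # one second of A3
    ```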

  18. ProvenCare perinatal: a model for delivering evidence/ guideline-based care for perinatal populations.

    Science.gov (United States)

    Berry, Scott A; Laam, Leslie A; Wary, Andrea A; Mateer, Harry O; Cassagnol, Hans P; McKinley, Karen E; Nolan, Ruth A

    2011-05-01

    Geisinger Health System (GHS) has applied its ProvenCare model to demonstrate that a large integrated health care delivery system, enabled by an electronic health record (EHR), could reengineer a complicated clinical process, reduce unwarranted variation, and provide evidence-based care for patients with a specified clinical condition. In 2007 GHS began to apply the model to a more complicated, longer-term condition of "wellness"--perinatal care. ADAPTING PROVENCARE TO PERINATAL CARE: The ProvenCare Perinatal initiative was more complex than the five previous ProvenCare endeavors in terms of breadth, scope, and duration. Each of the 22 sites created a process flow map to depict the current, real-time process at each location. The local practice site providers--physicians and mid-level practitioners--reached consensus on 103 unique best practice measures (BPMs), which would be tracked for every patient. These maps were then used to create a single standardized pathway that included the BPMs but also preserved some unique care offerings that reflected the needs of the local context. A nine-phase methodology, expanded from the previous six-phase model, was implemented on schedule. Pre- to postimplementation improvement occurred for all seven BPMs or BPM bundles that were considered the most clinically relevant, five of them statistically significant. In addition, the rate of primary cesarean sections decreased by 32%, and birth trauma remained unchanged as the number of vaginal births increased. Preliminary experience suggests that integrating evidence/guideline-based best practices into work flows in inpatient and outpatient settings can achieve improvements in daily patient care processes and outcomes.

  19. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  20. A model-based framework for design of intensified enzyme-based processes

    DEFF Research Database (Denmark)

    Román-Martinez, Alicia

    This thesis presents a generic and systematic model-based framework to design intensified enzyme-based processes. The development of the presented methodology was motivated by the needs of the bio-based industry for a more systematic approach to achieve intensification in its production plants ... in enzyme-based processes which have found significant application in the pharmaceutical, food, and renewable fuels sectors. The framework uses model-based strategies for (bio)-chemical process design and optimization, including the use of a superstructure to generate all potential reaction(s)-separation(s) options according to desired performance criteria, and a generic mathematical model represented by the superstructure to derive the specific models corresponding to a specific process option. In principle, three methods of bioprocess intensification are considered in this thesis: 1. enzymatic one...

  1. tPC-PSAFT modeling of gas solubility in imidazolium-based ionic liquids

    DEFF Research Database (Denmark)

    Karakatsani, Eirini; Economou, Ioannis; Kroon, M. C.

    2007-01-01

    The truncated perturbed chain-polar statistical associating fluid theory (tPC-PSAFT) is re-parametrized for imidazolium-based ionic liquids (ILs) by fitting IL density data over a wide temperature range and restricting the model to predict very low vapor pressure values, in agreement with recent experimental evidence. The new set of parameters is used for the correlation of carbon dioxide solubility in various ILs using a binary interaction parameter, k(ij). The correlated k(ij) values are much lower than the values used previously for the same mixtures (Kroon et al., J. Phys. Chem. B 2006, 110, 9262...

  2. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Full Text Available Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural network (RBFNN) based model, and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.
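
    The "nonlinear exponential" analytical baseline referred to above is typically the single-diode model, solved iteratively for current. A hedged sketch with invented parameters (roughly a 36-cell module; none of the values come from the paper):

    ```python
    import numpy as np

    def pv_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, nVt=1.21):
        # Solve I = Iph - I0*(exp((V + I*Rs)/nVt) - 1) - (V + I*Rs)/Rsh
        # by Newton-Raphson; nVt lumps ideality factor, thermal voltage,
        # and cell count (illustrative values only).
        I = Iph                                # initial guess
        for _ in range(50):
            e = np.exp((V + I * Rs) / nVt)
            f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
            df = -I0 * e * Rs / nVt - Rs / Rsh - 1.0
            I -= f / df
        return I

    print(pv_current(V=17.0))   # operating point near maximum power
    ```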

  3. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system that was originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, such exercises help accumulate best practices and patterns of model replication and contribute to the agenda of developing a standard methodological protocol for agent-based social simulation.

  4. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients, and it may be a solution for achieving competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via passages that did not influence the routes used for the surgical approach to resect the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through a standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if removal was planned via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on the 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible score).

  5. The contribution of a central pattern generator in a reflex-based neuromuscular model

    Science.gov (United States)

    Dzeladini, Florin; van den Kieboom, Jesse; Ijspeert, Auke

    2014-01-01

    Although the concept of central pattern generators (CPGs) controlling locomotion in vertebrates is widely accepted, the presence of specialized CPGs in human locomotion is still a matter of debate. An interesting numerical model developed in the 1990s demonstrated the important role CPGs could play in human locomotion, both in terms of stability against perturbations and in terms of speed control. Recently, a reflex-based neuro-musculo-skeletal model has been proposed, showing a level of stability to perturbations similar to the previous model, without any CPG components. Although exhibiting striking similarities with human gaits, the lack of a CPG makes the control of speed/step length in the model difficult. In this paper, we hypothesize that a CPG component will offer a meaningful way of controlling the locomotion speed. After introducing the CPG component in the reflex model, and taking advantage of the resulting properties, a simple model for gait modulation is presented. The results highlight the advantages of a CPG as a feedforward component in terms of gait modulation. PMID:25018712

  6. The Contribution of a Central Pattern Generator in a Reflex-Based Neuromuscular Model

    Directory of Open Access Journals (Sweden)

    Florin eDzeladini

    2014-06-01

    Full Text Available Although the concept of central pattern generators (CPGs) controlling locomotion in vertebrates is widely accepted, the presence of specialized CPGs in human locomotion is still a matter of debate. An interesting numerical model developed in the 1990s demonstrated the important role CPGs could play in human locomotion, both in terms of stability against perturbations and in terms of speed control. Recently, a reflex-based neuro-musculo-skeletal model has been proposed, showing a level of stability to perturbations similar to the previous model, without any CPG components. Although exhibiting striking similarities with human gaits, the lack of a CPG makes the control of speed/step length in the model difficult. In this paper, we hypothesize that a CPG component will offer a meaningful way of controlling the locomotion speed. After introducing the CPG component in the reflex model, and taking advantage of the resulting properties, a simple model for gait modulation is presented. The results highlight the advantages that a feedforward component can have in terms of gait modulation.
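
    To make the feedforward idea concrete, a toy CPG of two phase oscillators coupled in antiphase; raising the frequency parameter modulates the rhythm, which is the kind of speed handle the paper argues a CPG provides. Gains and rates are invented, not taken from the neuromuscular model:

    ```python
    import numpy as np

    def cpg(T=10.0, dt=0.001, f=1.0, k=5.0):
        n = int(T / dt)
        theta = np.array([0.0, 0.1])               # oscillator phases
        drive = np.zeros((n, 2))
        for i in range(n):
            # each oscillator is pulled toward antiphase with the other
            coupling = k * np.sin(theta[::-1] - theta - np.pi)
            theta += dt * (2 * np.pi * f + coupling)
            drive[i] = np.maximum(0.0, np.sin(theta))   # rectified motor drive
        return drive

    left, right = cpg().T   # alternating activation patterns for the two legs
    ```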

  7. Spatio-temporal Rich Model Based Video Steganalysis on Cross Sections of Motion Vector Planes.

    Science.gov (United States)

    Tasdemir, Kasim; Kurugollu, Fatih; Sezer, Sakir

    2016-05-11

    A rich model based motion vector steganalysis benefiting from both temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method has substantially higher detection accuracy than previous methods, even targeted ones. The improvement in detection accuracy lies in several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over longer distances. Therefore, temporal motion vector dependency alongside spatial dependency is utilized for rigorous motion vector steganalysis. Secondly, unlike the previously used filters, which were heuristically designed against a specific motion vector steganography, a diverse set of filters which can capture aberrations introduced by various motion vector steganography methods is used. The variety and the number of the filter kernels are substantially greater than in previous methods. Besides that, filters up to fifth order are employed, whereas previous methods use at most second order filters. As a result, the proposed system captures various decorrelations in a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best knowledge of the authors, the experiments section contains the most comprehensive tests in the motion vector steganalysis field, including five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% detection accuracy increase at low payloads and 5% at higher payloads.

  8. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    Science.gov (United States)

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
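
    A schematic of the model class described above: a five-predictor logistic regression scored by the area under the ROC curve. The data here are synthetic stand-ins; the actual predictors were report counts, disproportionality, reporter types, and the Naranjo score:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(25026, 5))                 # 5 candidate predictors
    y = (X @ [0.8, 0.6, 0.3, 0.2, 0.4] + rng.normal(size=25026)) > 1

    model = LogisticRegression().fit(X, y)
    print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
    ```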

  9. A Two-Factor Autoregressive Moving Average Model Based on Fuzzy Fluctuation Logical Relationships

    Directory of Open Access Journals (Sweden)

    Shuang Guan

    2017-10-01

    Full Text Available Many of the existing autoregressive moving average (ARMA) forecast models are based on one main factor. In this paper, we propose a new two-factor first-order ARMA forecast model based on the fuzzy fluctuation logical relationships of both a main factor and a secondary factor of a historical training time series. Firstly, we generate a fluctuation time series (FTS) for the two factors by calculating the difference of each data point from its previous day, then find the absolute means of the two FTSs. We then construct a fuzzy fluctuation time series (FFTS) according to the defined linguistic sets. The next step is establishing fuzzy fluctuation logical relation groups (FFLRGs) for a two-factor first-order autoregressive (AR(1)) model and forecasting the training data with the AR(1) model. Then we build FFLRGs for a two-factor first-order autoregressive moving average (ARMA(1,m)) model. Lastly, we forecast the test data with the ARMA(1,m) model. To illustrate the performance of our model, we use the real Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and Dow Jones datasets, with the latter as a secondary factor to forecast the TAIEX. The experimental results indicate that the proposed two-factor fluctuation ARMA method outperforms the one-factor method on real historical data. The secondary factor may have some effect on the main factor and thereby impact the forecasting results. Using fuzzified fluctuations rather than fuzzified real data avoids the influence of extreme values in historical data, which affect forecasting negatively. To verify the accuracy and effectiveness of the model, we also employ our method to forecast the Shanghai Stock Exchange Composite Index (SHSECI) from 2001 to 2015 and the international gold price from 2000 to 2010.
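
    The first step, generating and fuzzifying a fluctuation series, can be sketched as follows; the three linguistic labels (down/flat/up) and thresholds at half the absolute mean are one plausible reading of the paper's scheme, not its exact definition:

    ```python
    import numpy as np

    prices = np.array([100.0, 102.0, 101.5, 104.0, 103.0, 103.2])
    fluct = np.diff(prices)                    # day-over-day differences (FTS)
    m = np.mean(np.abs(fluct))                 # absolute mean of the FTS

    def fuzzify(g, m):
        if g <= -m / 2: return 1               # "down"
        if g >=  m / 2: return 3               # "up"
        return 2                               # "flat"

    ffts = [fuzzify(g, m) for g in fluct]      # fuzzy fluctuation time series
    print(ffts)                                # -> [3, 2, 3, 1, 2]
    ```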

  10. Image-based modelling of nutrient movement in and around the rhizosphere.

    Science.gov (United States)

    Daly, Keith R; Keyes, Samuel D; Masum, Shakil; Roose, Tiina

    2016-02-01

    In this study, we developed a spatially explicit model for nutrient uptake by root hairs based on X-ray computed tomography images of the rhizosphere soil structure. This work extends our previous work to larger domains and hence is valid for longer times. Unlike the model used previously, which considered only a small region of soil about the root, we considered an effectively infinite volume of bulk soil about the rhizosphere. We asked the question: At what distance away from root surfaces do the specific structural features of root-hair and soil aggregate morphology not matter because average properties start dominating the nutrient transport? The resulting model was used to capture bulk and rhizosphere soil properties by considering representative volumes of soil far from the root and adjacent to the root, respectively. By increasing the size of the volumes that we considered, the diffusive impedance of the bulk soil and root uptake were seen to converge. We did this for two different values of water content. We found that the size of region for which the nutrient uptake properties converged to a fixed value was dependent on the water saturation. In the fully saturated case, the region of soil we needed to consider was only of radius 1.1 mm for poorly soil-mobile species such as phosphate. However, in the case of a partially saturated medium (relative saturation 0.3), we found that a radius of 1.4 mm was necessary. This suggests that, in addition to the geometrical properties of the rhizosphere, there is an additional effect of soil moisture properties, which extends further from the root and may relate to other chemical changes in the rhizosphere. The latter were not explicitly included in our model. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  11. Firm Based Trade Models and Turkish Economy

    Directory of Open Access Journals (Sweden)

    Nilüfer ARGIN

    2015-12-01

    Full Text Available Among international trade models, only firm-based trade models explain firms' actions and behavior in world trade. Firm-based trade models focus on the trade behavior of the individual firms that actually conduct intra-industry trade, and they can genuinely explain the globalization process. These approaches also cover multinational corporations, supply chains, and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of firm-based trade models. We use UNCTAD data on exports under the SITC Rev. 3 classification to analyze total exports and 255 products, and to calculate the intensive and extensive margins of Turkish firms.

  12. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distributions for those phases were diverse. Given the best hazard-based model of each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration.

  13. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
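
    Analogy-based estimation in miniature: predict effort as the mean of the k most similar historical projects in a normalized feature space. The feature set and all numbers are invented for illustration; the NASA model's feature engineering and clustering are far richer:

    ```python
    import numpy as np

    past = np.array([[10, 3], [50, 5], [120, 8], [300, 9]])   # [KSLOC, complexity]
    effort = np.array([20, 90, 260, 700])                     # person-months

    def knn_estimate(project, k=2):
        scale = past.max(axis=0)                              # crude normalization
        d = np.linalg.norm((past - project) / scale, axis=1)  # distance to analogues
        return effort[np.argsort(d)[:k]].mean()

    print(knn_estimate(np.array([80, 6])))                    # -> 175.0
    ```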

  14. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. a forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Users can also add "tags" to their threads to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  15. Sunburn and sun-protective behaviors among adults with and without previous nonmelanoma skin cancer: a population-based study

    Science.gov (United States)

    Fischer, Alexander H.; Wang, Timothy S.; Yenokyan, Gayane; Kang, Sewon; Chien, Anna L.

    2016-01-01

    Background Individuals with previous nonmelanoma skin cancer (NMSC) are at increased risk for subsequent skin cancer, and should therefore limit UV exposure. Objective To determine whether individuals with previous NMSC engage in better sun protection than those with no skin cancer history. Methods We pooled self-reported data (2005 and 2010 National Health Interview Surveys) from US non-Hispanic white adults (758 with and 34,161 without previous NMSC). We calculated adjusted prevalence odds ratios (aPOR) and 95% confidence intervals (95% CI), taking into account the complex survey design. Results Individuals with previous NMSC versus no history of NMSC had higher rates of frequent use of shade (44.3% versus 27.0%; aPOR=1.41; 1.16–1.71), long sleeves (20.5% versus 7.7%; aPOR=1.55; 1.21–1.98), a wide-brimmed hat (26.1% versus 10.5%; aPOR=1.52; 1.24–1.87), and sunscreen (53.7% versus 33.1%; aPOR=2.11; 95% CI=1.73–2.59), but did not have significantly lower odds of recent sunburn (29.7% versus 40.7%; aPOR=0.95; 0.77–1.17). Among subjects with previous NMSC, recent sunburn was inversely associated with age, sun avoidance, and shade but not sunscreen. Limitations Self-reported cross-sectional data and unavailable information quantifying regular sun exposure. Conclusion Physicians should emphasize sunburn prevention when counseling patients with previous NMSC, especially younger adults, focusing on shade and sun avoidance over sunscreen. PMID:27198078

  16. Numerical simulation of the shot peening process under previous loading conditions

    International Nuclear Information System (INIS)

    Romero-Ángeles, B; Urriolagoitia-Sosa, G; Torres-San Miguel, C R; Molina-Ballinas, A; Benítez-García, H A; Vargas-Bustos, J A; Urriolagoitia-Calderón, G

    2015-01-01

    This research presents a numerical simulation of the shot peening process and determines the residual stress field induced in a component with a previous loading history. The importance of this analysis is based on the fact that mechanical elements subjected to shot peening have also undergone manufacturing processes, which convert raw material into finished product. However, the material is not provided in a virgin state; it has a previous loading history caused by the manner in which it is fabricated. This condition could alter some beneficial aspects of the residual stress induced by shot peening and could accelerate crack nucleation and propagation. Studies were performed on beams subjected to strain hardening in tension (5ε_y) before shot peening was applied. The results were then compared with a numerical assessment of the residual stress field induced by shot peening in a component (beam) without any previous loading history. This paper clearly shows the detrimental or beneficial effect that previous loading history can have on a mechanical component and how it can be controlled to improve the mechanical behavior of the material.

  17. Modeling of an ionic polymer metal composite actuator based on an extended Kalman filter trained neural network

    International Nuclear Information System (INIS)

    Truong, Dinh Quang; Ahn, Kyoung Kwan

    2014-01-01

    An ionic polymer metal composite (IPMC) is an electroactive polymer that bends in response to a small applied electric field as a result of the mobility of cations in the polymer network, and vice versa. This paper presents an innovative and accurate nonlinear black-box model (NBBM) for estimating the bending behavior of IPMC actuators. The model is constructed via a general multilayer perceptron neural network (GMLPNN) integrated with a smart learning mechanism (SLM) that is based on an extended Kalman filter with self-decoupling ability (SDEKF). Here the GMLPNN is built with the ability to auto-adjust its structure based on its characteristic vector. Furthermore, by using the SLM based on the SDEKF, the GMLPNN parameters are optimized with small computational effort, and the modeling accuracy is improved. An apparatus employing an IPMC actuator is first set up to investigate the IPMC characteristics and to generate the data for training and validating the model. The advanced NBBM model for the IPMC system is then created with the proper inputs to estimate IPMC tip displacement. Next, the model is optimized using the SLM mechanism with the training data. Finally, the optimized NBBM model is verified with the validating data. A comparison between this model and the previously developed model is also carried out to prove the effectiveness of the proposed modeling technique.

  18. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and closed-loop model-based reservoir management concept can play an important role here. In this concept measured data

  19. Stimulating Scientific Reasoning with Drawing-Based Modeling

    Science.gov (United States)

    Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank

    2018-01-01

    We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each…

  20. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
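
    The magnitude-frequency relationships at the heart of such source models commonly take the Gutenberg-Richter form, log10 N(M) = a - b*M. A least-squares fit on a synthetic cumulative catalog (all counts invented for illustration):

    ```python
    import numpy as np

    mags = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])
    counts = np.array([800, 310, 120, 45, 16, 6])      # events with magnitude >= M

    # Fit log10 N(M) = a - b*M; polyfit returns [slope, intercept]
    slope, a = np.polyfit(mags, np.log10(counts), 1)
    print("a = %.2f, b = %.2f" % (a, -slope))          # b near 1 is typical
    ```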

  1. Model-Based Requirements Management in Gear Systems Design Based On Graph-Based Design Languages

    Directory of Open Access Journals (Sweden)

    Kevin Holder

    2017-10-01

    Full Text Available For several decades, a wide-spread consensus concerning the enormous importance of an in-depth clarification of the specifications of a product has been observed. A weak clarification of specifications is repeatedly listed as a main cause for the failure of product development projects. Requirements, which can be defined as the purpose, goals, constraints, and criteria associated with a product development project, play a central role in the clarification of specifications. The collection of activities which ensure that requirements are identified, documented, maintained, communicated, and traced throughout the life cycle of a system, product, or service can be referred to as "requirements engineering". These activities can be supported by a collection and combination of strategies, methods, and tools which are appropriate for the clarification of specifications. Numerous publications describe the strategy and the components of requirements management. Furthermore, recent research investigates its industrial application. Simultaneously, promising developments of graph-based design languages for a holistic digital representation of the product life cycle are presented. Current developments realize graph-based languages using the diagrams of the Unified Modelling Language (UML), and allow the automatic generation and evaluation of multiple product variants. The research presented in this paper seeks to combine the advantages of a conscious requirements management process and graph-based design languages. Consequently, the main objective of this paper is the investigation of a model-based integration of requirements in a product development process by means of graph-based design languages. The research method is based on an in-depth analysis of an exemplary industrial product development, a gear system for so-called "Electrical Multiple Units" (EMU). Important requirements were abstracted from a gear system

  2. Is previous disaster experience a good predictor for disaster preparedness in extreme poverty households in remote Muslim minority based community in China?

    Science.gov (United States)

    Chan, Emily Y Y; Kim, Jean H; Lin, Cherry; Cheung, Eliza Y L; Lee, Polly P Y

    2014-06-01

    Disaster preparedness is an important preventive strategy for protecting health and mitigating the adverse health effects of unforeseen disasters. A multi-site ethnic minority project (2009-2015) was set up to examine health and disaster preparedness issues in remote, rural, disaster-prone communities in China. The primary objective of this study is to examine whether previous disaster experience significantly increases household disaster preparedness levels in remote villages in China. A cross-sectional household survey was conducted in January 2011 in Gansu Province, in a predominantly Hui minority village. Factors related to disaster preparedness were explored using quantitative methods. Two focus groups were also conducted to provide additional contextual explanations for the quantitative findings of this study. The village household response rate was 62.4% (n = 133). Although previous disaster exposure was significantly associated with the perception of living in a high disaster risk area (OR = 6.16), only 10.7% of households possessed a disaster emergency kit. Of note, among households with members who had non-communicable diseases, only 9.6% had prepared extra medications to sustain clinical management of their chronic conditions. This is the first study to examine disaster preparedness in an ethnic minority population in remote communities in rural China. Our results indicate the need for disaster mitigation education to promote preparedness in remote, resource-poor communities.

  3. Rapid acquisition and model-based analysis of cell-free transcription–translation reactions from nonmodel bacteria

    Science.gov (United States)

    Wienecke, Sarah; Ishwarbhai, Alka; Tsipa, Argyro; Aw, Rochelle; Kylilis, Nicolas; Bell, David J.; McClymont, David W.; Jensen, Kirsten; Biedendieck, Rebekka

    2018-01-01

    Native cell-free transcription–translation systems offer a rapid route to characterize the regulatory elements (promoters, transcription factors) for gene expression from nonmodel microbial hosts, which can be difficult to assess through traditional in vivo approaches. One such host, Bacillus megaterium, is a giant Gram-positive bacterium with potential biotechnology applications, although many of its regulatory elements remain uncharacterized. Here, we have developed a rapid automated platform for measuring and modeling in vitro cell-free reactions and have applied it to B. megaterium to quantify a range of ribosome binding site variants and previously uncharacterized endogenous constitutive and inducible promoters. To provide quantitative models for cell-free systems, we have also applied a Bayesian approach to infer ordinary differential equation model parameters by simultaneously using time-course data from multiple experimental conditions. Using this modeling framework, we were able to infer previously unknown transcription factor binding affinities and to quantify the sharing of cell-free transcription–translation resources (energy, ribosomes, RNA polymerases, nucleotides, and amino acids) using a promoter competition experiment. This allows insights into resource-limiting factors in batch cell-free synthesis mode. Our combined automated and modeling platform allows for the rapid acquisition and model-based analysis of cell-free transcription–translation data from uncharacterized microbial cell hosts, as well as of resource competition within cell-free systems, which can potentially be applied to a range of cell-free synthetic biology and biotechnology applications. PMID:29666238

  4. A new pattern associative memory model for image recognition based on Hebb rules and dot product

    Science.gov (United States)

    Gao, Mingyue; Deng, Limiao; Wang, Yanjiang

    2018-04-01

    A great number of associative memory models have been proposed in the last few years to realize information storage and retrieval inspired by the human brain. However, there is still much room for improvement in those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rules, and retrieval is implemented by a normalized dot product operation. Our proposed model can not only fulfill rapid memory storage and retrieval for visual information but also supports incremental learning without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
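
    The two ingredients named above, Hebbian outer-product storage and dot-product retrieval, fit in a few lines of NumPy. This toy bipolar version is far simpler than the paper's image-recognition model but shows the mechanism; note that storing a new pattern only adds its outer product to the weight matrix, which is the incremental-learning property the abstract highlights:

    ```python
    import numpy as np

    # Two bipolar patterns to memorize (rows).
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, -1, -1, 1, 1, -1, -1]], dtype=float)
    W = patterns.T @ patterns / patterns.shape[1]   # outer-product Hebb rule

    cue = patterns[0].copy()
    cue[-1] = 1                                     # corrupt one element
    recalled = np.sign(W @ cue)                     # dot-product retrieval
    print(np.array_equal(recalled, patterns[0]))    # -> True
    ```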

  5. Profiling Fast Healthcare Interoperability Resources (FHIR) of Family Health History based on the Clinical Element Models

    OpenAIRE

    Lee, Jaehoon; Hulse, Nathan C.; Wood, Grant M.; Oniki, Thomas A.; Huff, Stanley M.

    2017-01-01

    In this study we developed a Fast Healthcare Interoperability Resources (FHIR) profile to support the exchange of full-pedigree family health history (FHH) information across the multiple systems and applications used by clinicians, patients, and researchers. We used previously developed clinical element models (CEMs) that are capable of representing FHH information, and derived essential data elements including attributes, constraints, and value sets. We analyzed gaps between the FHH CEM ...

  6. Mesoscopic effects in an agent-based bargaining model in regular lattices.

    Science.gov (United States)

    Poza, David J; Santos, José I; Galán, José M; López-Paredes, Adolfo

    2011-03-09

    The effect of spatial structure has proven highly relevant in repeated games. In this work we propose an agent-based model where a fixed finite population of tagged agents iteratively plays the Nash demand game in a regular lattice. The model extends the multiagent bargaining model by Axtell, Epstein and Young by modifying the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the possible persistent regimes in the global interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause communities to reach different persistent regimes as a consequence of the hindering diffusion effect of fluctuating agents at their borders.
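
    The following toy sketch illustrates the core mechanics described above: agents on a lattice remember recent opponent demands and best-reply to the most frequent one. Agent tags, payoff bookkeeping, and the intratype networks of the full model are omitted, and all parameters are invented.

```python
import numpy as np

DEMANDS = np.array([0.3, 0.5, 0.7])       # low, medium, high shares of the pie
rng = np.random.default_rng(2)

n = 20                                    # agents on a one-dimensional lattice
mem = rng.integers(0, 3, size=(n, 5))     # each remembers 5 opponent demands

def best_reply(memory):
    # Best reply to the opponent's most frequent demand: claim the largest
    # share still compatible with it (the two demands must sum to <= 1).
    modal = np.bincount(memory, minlength=3).argmax()
    return 2 - modal

for _ in range(20000):
    i = rng.integers(n)
    j = (i + rng.choice([-1, 1])) % n     # local interaction with a neighbour
    di, dj = best_reply(mem[i]), best_reply(mem[j])
    mem[i] = np.append(dj, mem[i][:-1])   # remember the opponent's demand
    mem[j] = np.append(di, mem[j][:-1])

final = np.array([best_reply(m) for m in mem])
for d, c in zip(DEMANDS, np.bincount(final, minlength=3)):
    print(f"demand {d}: {c} agents")      # distribution of persistent demands
```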

  7. Mesoscopic effects in an agent-based bargaining model in regular lattices.

    Directory of Open Access Journals (Sweden)

    David J Poza

    Full Text Available The effect of spatial structure has proven highly relevant in repeated games. In this work we propose an agent-based model where a fixed finite population of tagged agents iteratively plays the Nash demand game in a regular lattice. The model extends the multiagent bargaining model by Axtell, Epstein and Young by modifying the assumption of global interaction. Each agent is endowed with a memory and plays the best reply against the opponent's most frequent demand. We focus our analysis on the transient dynamics of the system, studying by computer simulation the set of states in which the system spends a considerable fraction of the time. The results show that all the possible persistent regimes in the global interaction model can also be observed in this spatial version. We also find that the mesoscopic properties of the interaction networks that the spatial distribution induces in the model have a significant impact on the diffusion of strategies, and can lead to new persistent regimes different from those found in previous research. In particular, community structure in the intratype interaction networks may cause communities to reach different persistent regimes as a consequence of the hindering diffusion effect of fluctuating agents at their borders.

  8. Fire Risk Scoping Study: Investigation of nuclear power plant fire risk, including previously unaddressed issues

    International Nuclear Information System (INIS)

    Lambright, J.A.; Nowlen, S.P.; Nicolette, V.F.; Bohn, M.P.

    1989-01-01

    An investigation of nuclear power plant fire risk issues raised as a result of the USNRC-sponsored Fire Protection Research Program at Sandia National Laboratories has been performed. The specific objectives of this study were (1) to review and requantify fire risk scenarios from four fire probabilistic risk assessments (PRAs) in light of updated databases made available as a result of the USNRC-sponsored Fire Protection Research Program and updated computer fire modeling capabilities, (2) to identify potentially significant fire risk issues that had not previously been addressed in a fire risk context and to quantify the potential impact of those identified fire risk issues where possible, and (3) to review current fire regulations and plant implementation practices for relevance to the identified unaddressed fire risk issues. In performing the fire risk scenario requantifications several important insights were gained. It was found that utilization of a more extensive operational experience base resulted in both fire occurrence frequencies and fire duration times (i.e., time required for fire suppression) increasing significantly over those assumed in the original works. Additionally, some thermal damage threshold limits assumed in the original works were identified as being nonconservative based on more recent experimental data. Finally, application of the COMPBRN III fire growth model resulted in calculation of considerably longer fire damage times than those calculated in the original works using COMPBRN I. 14 refs., 2 figs., 16 tabs

  9. National Rates of Uterine Rupture are not Associated with Rates of Previous Caesarean Delivery

    DEFF Research Database (Denmark)

    Colmorn, Lotte B.; Langhoff-Roos, Jens; Jakobsson, Maija

    2017-01-01

    BACKGROUND: Previous caesarean delivery and intended mode of delivery after caesarean are well-known individual risk factors for uterine rupture. We examined if different national rates of uterine rupture are associated with differences in national rates of previous caesarean delivery and intended...... % of all Nordic deliveries. Information on the comparison population was retrieved from the national medical birth registers. Incidence rate ratios by previous caesarean delivery and intended mode of delivery after caesarean were modelled using Poisson regression. RESULTS: The incidence of uterine rupture was 7.8/10 000 in Finland and 4.6/10 000 in Denmark. Rates of caesarean (21.3%) and previous caesarean deliveries (11.5%) were highest in Denmark, while the rate of intended vaginal delivery after caesarean was highest in Finland (72%). National rates of uterine rupture were not associated......

  10. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  11. INFORMATION SYSTEM QUALITY INFLUENCE ON ORGANIZATION PERFORMANCE: A MODIFICATION OF TECHNOLOGY-BASED INFORMATION SYSTEM ACCEPTANCE AND SUCCESS MODEL

    Directory of Open Access Journals (Sweden)

    Trisnawati N.

    2017-12-01

    Full Text Available This study aims to examine the effect of information system quality on technology-based accounting information system usage and its impact on organizational performance in local government. The study is based on the Technology Acceptance Model (TAM), the IS Success Model, and work on the success of technology-based information systems, and combines previous studies by Seddon and Kiew (1997), Saeed and Helm (2008), and DeLone and McLean (1992). A survey method was used, with 101 respondents drawn from accounting staff working in the Malang and Mojokerto regencies. Partial Least Squares was used to examine the research data. The results show that information system quality affects perceived benefit and user satisfaction. Technology-based accounting information system usage in local government is influenced by perceived benefit and user satisfaction. The study concludes that technology-based accounting information system usage affects the performance of local government organizations.

  12. The Effects of the Previous Outcome on Probabilistic Choice in Rats

    Science.gov (United States)

    Marshall, Andrew T.; Kirkpatrick, Kimberly

    2014-01-01

    This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915
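
    The winning model, a value function that decays exponentially over trials, is equivalent to a per-trial delta-rule update. The sketch below is an illustrative reconstruction with assumed learning-rate and softmax parameters, not the authors' simulation code.

```python
import numpy as np

def simulate_choices(p_uncertain, n_trials=2000, alpha=0.1, beta=5.0, seed=0):
    # Trial-based value updating V <- V + alpha * (reward - V): past rewards
    # decay exponentially in influence as a function of trial number.
    rng = np.random.default_rng(seed)
    V = np.array([1.0, 1.0])              # [certain, uncertain] option values
    choices = np.zeros(n_trials, dtype=int)
    for t in range(n_trials):
        # Softmax choice rule over the two learned values.
        p_unc = 1.0 / (1.0 + np.exp(-beta * (V[1] - V[0])))
        c = int(rng.random() < p_unc)
        reward = 3.0 if c == 0 else (9.0 if rng.random() < p_uncertain else 0.0)
        V[c] += alpha * (reward - V[c])   # update only the chosen option
        choices[t] = c
    return choices.mean()                 # proportion of uncertain choices

for p in (0.1, 0.33, 0.67, 0.9):          # uncertain-food probabilities
    print(p, round(simulate_choices(p), 3))
```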

  13. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
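
    A multi-variable parametric cost model of this kind is typically fit as a log-linear regression. The sketch below shows the fitting step on invented telescope records; the data and resulting coefficients are placeholders, not the published model.

```python
import numpy as np

# Hypothetical telescope records: aperture D (m), diffraction-limited
# wavelength lam (um), and cost (M$). Real coefficients would come from
# the multi-variable fit described in the abstract.
D    = np.array([2.5, 3.5, 4.2, 6.5, 8.1, 10.0])
lam  = np.array([0.5, 0.5, 0.6, 1.0, 0.5, 2.2])
cost = np.array([12., 30., 45., 120., 250., 400.])

# Log-linear regression: log(cost) = b0 + b1*log(D) + b2*log(lam).
X = np.column_stack([np.ones_like(D), np.log(D), np.log(lam)])
b, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
print(f"cost ~ {np.exp(b[0]):.2f} * D^{b[1]:.2f} * lam^{b[2]:.2f}")
```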

  14. Multiagent-Based Model For ESCM

    Directory of Open Access Journals (Sweden)

    Delia MARINCAS

    2011-01-01

    Full Text Available Web-based applications for Supply Chain Management (SCM) are now a necessity for every company in order to meet increasing customer demands, to face global competition and to make a profit. A multiagent-based approach is appropriate for eSCM because it exhibits many of the characteristics an SCM system should have. For this reason, we have proposed a multiagent-based eSCM model which configures a virtual SC and automates the SC activities: selling, purchasing, manufacturing, planning, inventory, etc. This model will allow better coordination of the supply chain network and will increase the effectiveness of the Web and intelligent technologies employed in eSCM software.

  15. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated
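
    The heart of the adaptive scheme is ranking random dimensions by their first-order ANOVA variance contributions and keeping only the dominant ones in the PCE basis. The sketch below illustrates such a ranking on a toy forward model; the Monte Carlo estimator, the threshold, and the model are assumptions, and the full Kalman filter loop is not shown.

```python
import numpy as np

def first_order_variances(f, d, n=4096, seed=0):
    # Crude Monte Carlo estimate of first-order ANOVA (Sobol') terms:
    # V_i = Var_{x_i}( E[f | x_i] ), used to rank the random dimensions.
    rng = np.random.default_rng(seed)
    base = rng.random((n, d))
    V = np.zeros(d)
    for i in range(d):
        cond = []
        for g in np.linspace(0.01, 0.99, 32):
            pts = base.copy()
            pts[:, i] = g                 # condition on x_i = g
            cond.append(f(pts).mean())    # estimate E[f | x_i = g]
        V[i] = np.var(cond)
    return V

# Toy forward model: two influential inputs out of ten random dimensions.
f = lambda x: np.sin(2 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2:].sum(1)
V = first_order_variances(f, d=10)
keep = np.where(V > 0.05 * V.sum())[0]    # adaptive truncation criterion
print(keep)   # dimensions whose PCE basis functions would be retained
```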

  16. The export marketing-financing based on forfeiting model

    Directory of Open Access Journals (Sweden)

    Petrović Pero B.

    2004-01-01

    Full Text Available Contemporary business finance offers many modalities for entering the international market. For the past twenty years, the financing and insurance of export transactions, especially in international practice, have been handled successfully through the forfeiting model. Forfeiting is mainly a medium-term transaction whose basic subject is the right to buy claims with maturities from 6 to 60 months, mostly related to drafts. In essence, the transaction is the purchase of securities covering claims that mature in the future, mainly arising from exports of goods and services, without recourse to the previous owner. The right to claim is based either on a draft (the most frequent subject of forfeiting because of its simple form and long tradition) or on any other financial instrument. As a rule, the exporter is the owner and seller of the claim. He accepts the draft as cover for payment of exported goods or services in order to speed up collection, transferring the collection risk to the forfeiter (a party buying securities without recourse to the previous owner). In compensation, he receives the reduced value of the security, thereby obtaining the necessary liquid assets immediately. By buying securities without recourse to the previous owner (the exporter), the forfeiter accepts all of the exporter's risks related to collection in the transaction.
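
    A small worked example may clarify the "reduced value" the exporter receives: the forfaiter discounts each draft back from its maturity. The straight-discount arithmetic and all figures below are illustrative, not taken from the article.

```python
# Proceeds to the exporter when a series of semi-annual drafts is sold to a
# forfaiter at a straight discount; all figures are purely illustrative.
face_values = [100_000] * 6          # six drafts, maturing every 6 months
annual_discount_rate = 0.08
days_grace = 3                       # forfaiters often add days of grace

proceeds = 0.0
for k, fv in enumerate(face_values, start=1):
    days = k * 182.5 + days_grace                         # days to maturity
    discount = fv * annual_discount_rate * days / 360.0   # straight discount
    proceeds += fv - discount
print(round(proceeds, 2))            # immediate liquid assets to the exporter
```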

  17. New population-based exome data are questioning the pathogenicity of previously cardiomyopathy-associated genetic variants

    DEFF Research Database (Denmark)

    Andreasen, Charlotte Hartig; Nielsen, Jonas B; Refsgaard, Lena

    2013-01-01

    Cardiomyopathies are a heterogeneous group of diseases with various etiologies. We focused on three genetically determined cardiomyopathies: hypertrophic (HCM), dilated (DCM), and arrhythmogenic right ventricular cardiomyopathy (ARVC). Eighty-four genes have so far been associated with these cardiomyopathies, but the disease-causing effect of reported variants is often dubious. In order to identify possible false-positive variants, we investigated the prevalence of previously reported cardiomyopathy-associated variants in recently published exome data. We searched for reported missense and nonsense variants in the NHLBI-GO Exome Sequencing Project (ESP) containing exome data from 6500 individuals. In ESP, we identified 94 out of 687 (14%) variants previously associated with HCM, 58 out of 337 (17%) variants associated with DCM, and 38 out of 209 (18%) associated with ARVC...

  18. Validation of Diagnostic Imaging Based on Repeat Examinations. An Image Interpretation Model

    International Nuclear Information System (INIS)

    Isberg, B.; Jorulf, H.; Thorstensen, Oe.

    2004-01-01

    Purpose: To develop an interpretation model, based on repeatedly acquired images, aimed at improving assessments of technical efficacy and diagnostic accuracy in the detection of small lesions. Material and Methods: A theoretical model is proposed. The studied population consists of subjects that develop focal lesions which increase in size in organs of interest during the study period. The imaging modality produces images that can be re-interpreted with high precision, e.g. conventional radiography, computed tomography, and magnetic resonance imaging. At least four repeat examinations are carried out. Results: The interpretation is performed in four or five steps: 1. Independent readers interpret the examinations chronologically without access to previous or subsequent films. 2. Lesions found on images at the last examination are included in the analysis, with interpretation in consensus. 3. By concurrent back-reading in consensus, the lesions are identified on previous images until they are so small that even in retrospect they are undetectable. The earliest examination at which included lesions appear is recorded, and the lesions are verified by their growth (imaging reference standard). Lesion size and other characteristics may be recorded. 4. Records made at step 1 are corrected to those of steps 2 and 3. False positives are recorded. 5. (Optional) Lesion type is confirmed by another diagnostic test. Conclusion: Applied to subjects with progressive disease, the proposed image interpretation model may improve assessments of technical efficacy and diagnostic accuracy in the detection of small focal lesions. The model may provide an accurate imaging reference standard as well as repeated detection rates and false-positive rates for tested imaging modalities. However, potential review bias necessitates a strict protocol

  19. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the

  20. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey of MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  1. An agent-based model for emotion contagion and competition in online social media

    Science.gov (United States)

    Fan, Rui; Xu, Ke; Zhao, Jichang

    2018-04-01

    Recent studies suggest that human emotions diffuse not only in real-world communities but also in online social media. However, a comprehensive model that considers up-to-date findings and multiple online social media mechanisms has been missing. To bridge this vital gap, an agent-based model, which concurrently considers emotion influence and tie-strength preferences, is presented to simulate emotion contagion and competition. Our model reproduces patterns observed in the empirical data well, like anger's preference for weak ties, anger-dominated users' high vitality, angry tweets' short retweet intervals, and anger's competitiveness in negative events. The comparison with a previously presented baseline model further demonstrates its effectiveness in modeling online emotion contagion. Our model also reveals, surprisingly, that as the ratio of anger approaches that of joy with a gap of less than 12%, anger will eventually dominate the online social media and arrive at collective outrage in cyberspace. The critical gap disclosed here can indeed serve as an early warning signal for outrage control. Our model sheds light on the study of multiple issues regarding emotion contagion and competition through computer simulations.
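
    A heavily simplified caricature of the contagion-and-competition dynamics can be written as a biased voter model, as below. The bias value, population size, and well-mixed contact assumption are invented, and the tie-strength preferences of the full agent-based model are omitted.

```python
import numpy as np

def simulate(anger_ratio, n=2000, steps=60000, bias=0.55, seed=3):
    # Biased voter-model caricature: in each disagreeing encounter the pair
    # adopts anger with probability `bias`, joy otherwise. Tie strength and
    # retweet timing from the full model are not represented.
    rng = np.random.default_rng(seed)
    state = (rng.random(n) < anger_ratio).astype(int)   # 1 = anger, 0 = joy
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        if state[i] != state[j]:
            state[i] = state[j] = int(rng.random() < bias)
    return state.mean()

for r in (0.30, 0.40, 0.45):    # initial anger share of the population
    print(r, round(simulate(r), 2))   # final anger share after mixing
```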

  2. A Model-Based Bayesian Estimation of the Rate of Evolution of VNTR Loci in Mycobacterium tuberculosis

    Science.gov (United States)

    Aandahl, R. Zachariah; Reyes, Josephine F.; Sisson, Scott A.; Tanaka, Mark M.

    2012-01-01

    Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the rate of mutation of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around to per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with respect to repeat numbers (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model, the mean mutation rate per locus is (95% CI: , ) and under the linear model, the mean mutation rate per locus per repeat unit is (95% CI: , ). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models we use posterior predictive checks to ascertain which of the two models is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future. PMID:22761563
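
    The constant-rate variant of the stepwise mutation process, with the epidemiological transmission model omitted, can be paired with a rejection-ABC step as a minimal stand-in for the authors' Bayesian machinery. Every number below (generations, sample size, prior bounds, tolerance) is an assumption.

```python
import numpy as np

def simulate_locus(mu, n_isolates=200, t_gen=500, rng=None):
    # Constant-rate stepwise mutation at one VNTR locus: each generation a
    # repeat count moves +/-1 with probability mu (transmission model omitted).
    k = rng.binomial(t_gen, mu, n_isolates)       # mutation events per lineage
    net = 2 * rng.binomial(k, 0.5) - k            # each event is a random step
    return 10 + net

rng = np.random.default_rng(0)
observed = simulate_locus(1e-3, rng=rng)          # pseudo-data with known rate

kept = []
for _ in range(20000):                            # rejection ABC
    mu = 10 ** rng.uniform(-5.0, -1.0)            # log-uniform prior on the rate
    sim = simulate_locus(mu, rng=rng)
    # Accept rates whose simulated repeat-number variance matches the data.
    if abs(np.var(sim) - np.var(observed)) < 0.1 * np.var(observed):
        kept.append(mu)
print(len(kept), np.median(kept))                 # posterior sample and median
```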

  3. Model-based Abstraction of Data Provenance

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2014-01-01

    Identifying provenance of data provides insights into the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so. The model and corresponding analyses are based on a formal model of spatial and organisational......

  4. Data-based mathematical modeling of vectorial transport across double-transfected polarized cells.

    Science.gov (United States)

    Bartholomé, Kilian; Rius, Maria; Letschert, Katrin; Keller, Daniela; Timmer, Jens; Keppler, Dietrich

    2007-09-01

    Vectorial transport of endogenous small molecules, toxins, and drugs across polarized epithelial cells contributes to their half-life in the organism and to detoxification. To study vectorial transport in a quantitative manner, an in vitro model was used that includes polarized MDCKII cells stably expressing the recombinant human uptake transporter OATP1B3 in their basolateral membrane and the recombinant ATP-driven efflux pump ABCC2 in their apical membrane. These double-transfected cells enabled mathematical modeling of the vectorial transport of the anionic prototype substance bromosulfophthalein (BSP) that has frequently been used to examine hepatobiliary transport. Time-dependent analyses of (3)H-labeled BSP in the basolateral, intracellular, and apical compartments of cells cultured on filter membranes and efflux experiments in cells preloaded with BSP were performed. A mathematical model was fitted to the experimental data. Data-based modeling was optimized by including endogenous transport processes in addition to the recombinant transport proteins. The predominant contributions to the overall vectorial transport of BSP were mediated by OATP1B3 (44%) and ABCC2 (28%). Model comparison predicted a previously unrecognized endogenous basolateral efflux process as a negative contribution to total vectorial transport, amounting to 19%, which is in line with the detection of the basolateral efflux pump Abcc4 in MDCKII cells. Rate-determining steps in the vectorial transport were identified by calculating control coefficients. Data-based mathematical modeling of vectorial transport of BSP as a model substance resulted in a quantitative description of this process and its components. The same systems biology approach may be applied to other cellular systems and to different substances.
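
    The compartment structure described above maps naturally onto a small linear ODE system: basolateral uptake, apical efflux, and the endogenous basolateral back-flux identified by model comparison. The rate constants in the sketch below are illustrative, not the fitted values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal three-compartment sketch of vectorial BSP transport:
# basolateral (b) -> cell (c) via OATP1B3, cell -> apical (a) via ABCC2,
# plus the endogenous basolateral efflux the model comparison uncovered.
k_uptake, k_apical, k_back = 0.8, 0.5, 0.2    # 1/h, invented rate constants

def rhs(t, y):
    b, c, a = y
    uptake = k_uptake * b          # OATP1B3-mediated uptake
    efflux = k_apical * c          # ABCC2-mediated apical export
    back   = k_back * c            # endogenous basolateral efflux
    return [-uptake + back, uptake - efflux - back, efflux]

sol = solve_ivp(rhs, (0, 8), [1.0, 0.0, 0.0], t_eval=np.linspace(0, 8, 9))
for t, a in zip(sol.t, sol.y[2]):
    print(f"t={t:.0f} h  apical fraction={a:.3f}")
```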

  5. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation is derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling gives a syntactic structure to source and target models; however, semantic requirements also have to be imposed on them. A given transformation is sound when the source and target models fulfill both the syntactic and the semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how models can be transformed and validated. The properties to be validated range from structural and semantic requirements of the models (pre- and post-conditions) to properties of the transformation itself (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  6. Model-Based Knowing: How Do Students Ground Their Understanding About Climate Systems in Agent-Based Computer Models?

    Science.gov (United States)

    Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.

    2017-12-01

    This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.

  7. Predicting intraindividual changes in learning strategies: The effects of previous achievement

    OpenAIRE

    Buško, Vesna; Mujagić, Amela

    2013-01-01

    Socio-cognitive models of self-regulated learning (e.g., Pintrich, 2000) emphasize the contextualized nature of the learning process and within-person variation in learning processes, along with between-person variability in self-regulation. Previous studies about the contextual nature of learning strategies have mostly focused on the effects of different contextual factors on interindividual differences in learning strategies utilization. However, less attention was given to the question about contextual ef...

  8. Increased risk of default among previously treated tuberculosis cases in the Western Cape Province, South Africa.

    Science.gov (United States)

    Marx, F M; Dunbar, R; Hesseling, A C; Enarson, D A; Fielding, K; Beyers, N

    2012-08-01

    To investigate, in two urban communities with high tuberculosis (TB) incidence and high rates of TB recurrence, whether a history of previous TB treatment is associated with treatment default. Retrospective cohort study of TB cases with an episode of treatment recorded in the clinic-based treatment registers between 2002 and 2007. Probabilistic record linkage was used to ascertain the treatment history of TB cases back to 1996. Based on the outcome of their most recent previous treatment episode, previously treated cases were compared to new cases regarding their risk of treatment default. Previous treatment success (adjusted odds ratio [aOR] 1.79, 95%CI 1.17-2.73), previous default (aOR 6.18, 95%CI 3.68-10.36) and previous failure (aOR 9.72, 95%CI 3.07-30.78) were each independently associated with treatment default. Other factors associated with default were male sex (P = 0.003) and age 19-39 years. Previously treated cases were at increased risk of treatment default, even after previous successful treatment. This finding is of particular importance in a setting where recurrent TB is very common. Adherence to treatment should be ensured in new and retreatment cases to increase cure rates and reduce transmission of TB in the community.

  9. A model for the electronic support of practice-based research networks.

    Science.gov (United States)

    Peterson, Kevin A; Delaney, Brendan C; Arvanitis, Theodoros N; Taweel, Adel; Sandberg, Elisabeth A; Speedie, Stuart; Richard Hobbs, F D

    2012-01-01

    The principal goal of the electronic Primary Care Research Network (ePCRN) is to enable the development of an electronic infrastructure to support clinical research activities in primary care practice-based research networks (PBRNs). We describe the model that the ePCRN developed to enhance the growth and to expand the reach of PBRN research. Use cases and activity diagrams were developed from interviews with key informants from 11 PBRNs from the United States and United Kingdom. Discrete functions were identified and aggregated into logical components. Interaction diagrams were created, and an overall composite diagram was constructed describing the proposed software behavior. Software for each component was written and aggregated, and the resulting prototype application was pilot tested for feasibility. A practical model was then created by separating application activities into distinct software packages based on existing PBRN business rules, hardware requirements, network requirements, and security concerns. We present an information architecture that provides for essential interactions, activities, data flows, and structural elements necessary for providing support for PBRN translational research activities. The model describes research information exchange between investigators and clusters of independent data sites supported by a contracted research director. The model was designed to support recruitment for clinical trials, collection of aggregated anonymous data, and retrieval of identifiable data from previously consented patients across hundreds of practices. The proposed model advances our understanding of the fundamental roles and activities of PBRNs and defines the information exchange commonly used by PBRNs to successfully engage community health care clinicians in translational research activities. By describing the network architecture in a language familiar to that used by software developers, the model provides an important foundation for the

  10. Data-based Non-Markovian Model Inference

    Science.gov (United States)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close
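
    The layered regression idea behind EMR-MSM can be caricatured in a few lines: fit the resolved tendency first, then fit the residual with a further level. The synthetic data and the two-level truncation below are assumptions for illustration only.

```python
import numpy as np

# Toy empirical model reduction (EMR)-style fit: regress the observed
# tendency on the state, then model the residual with an extra layer.
# A real MSM would add stochastic layers until the residuals decorrelate.
rng = np.random.default_rng(4)
x = np.zeros(5000)
for t in range(1, x.size):                     # synthetic slow variable
    x[t] = 0.95 * x[t-1] - 0.02 * x[t-1]**3 + 0.1 * rng.normal()

dx = np.diff(x)
X1 = np.column_stack([x[:-1], x[:-1]**3])
b1, *_ = np.linalg.lstsq(X1, dx, rcond=None)   # main level: dx = f(x) + r0
r0 = dx - X1 @ b1

X2 = np.column_stack([x[:-2], r0[:-1]])        # hidden layer: dr0 = g(x, r0) + r1
b2, *_ = np.linalg.lstsq(X2, np.diff(r0), rcond=None)
print("level-1 coeffs:", b1, "level-2 coeffs:", b2)
```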

  11. Internet-based system for simulation-based medical planning for cardiovascular disease.

    Science.gov (United States)

    Steele, Brooke N; Draney, Mary T; Ku, Joy P; Taylor, Charles A

    2003-06-01

    Current practice in vascular surgery utilizes only diagnostic and empirical data to plan treatments, which does not enable quantitative a priori prediction of the outcomes of interventions. We have previously described simulation-based medical planning methods to model blood flow in arteries and plan medical treatments based on physiologic models. An important consideration for the design of these patient-specific modeling systems is the accessibility to physicians with modest computational resources. We describe a simulation-based medical planning environment developed for the World Wide Web (WWW) using the Virtual Reality Modeling Language (VRML) and the Java programming language.

  12. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for achieving intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospital is realized; experimental results show that it handles simple emotions efficiently.

  13. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some...... more interesting and effective features for email authorship identification (e.g. the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included Info Gain feature selection based...... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...
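
    Features of the kind mentioned in the abstract are straightforward to extract; the sketch below shows a few illustrative extractors. The paper's exact feature set, the Info-Gain selection, and the CCM clustering step are not reproduced here.

```python
import re
from collections import Counter

def stylometric_features(email: str) -> dict:
    # Illustrative extractors of the kind described above, including the
    # last punctuation mark used and capitalization at the start.
    lines = email.strip().splitlines()
    words = re.findall(r"[A-Za-z']+", email)
    punct = [c for c in email if c in ".!?,;:"]
    return {
        "last_punctuation": punct[-1] if punct else "",
        "starts_capitalized": email.strip()[:1].isupper(),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "greeting_comma": bool(re.match(r"(hi|hello|dear)\b.*,", lines[0], re.I)),
        "most_common_word": Counter(w.lower() for w in words).most_common(1)[0][0],
    }

print(stylometric_features("Hi Bob,\nPlease see the attached draft.\nThanks!"))
```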

  14. A general model for membrane-based separation processes

    DEFF Research Database (Denmark)

    Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil

    2009-01-01

    behaviour will play an important role. In this paper, modelling of membrane-based processes for the separation of gas and liquid mixtures is considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation, are presented.... The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding...

  15. An automated patient recognition method based on an image-matching technique using previous chest radiographs in the picture archiving and communication system environment

    International Nuclear Information System (INIS)

    Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio

    2001-01-01

    An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability for a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The receiver operating characteristic curve showed high overall system performance. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images being stored in the PACS environment
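
    The matching step can be sketched as a search over shifts for the maximum normalized cross-correlation, followed by a threshold test. The image sizes, search range, placement of the 0.80 threshold, and the omission of the rotation search are simplifications of the described method.

```python
import numpy as np

def best_correlation(previous, current, shifts=range(-5, 6)):
    # Search over horizontal/vertical shifts for the best normalized
    # correlation between two radiographs (rotation search omitted).
    best = -1.0
    for dy in shifts:
        for dx in shifts:
            shifted = np.roll(np.roll(current, dy, axis=0), dx, axis=1)
            a = previous - previous.mean()
            b = shifted - shifted.mean()
            r = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            best = max(best, r)
    return best

rng = np.random.default_rng(5)
prev_img = rng.random((64, 64))                                   # stored image
same_patient = np.roll(prev_img, 3, axis=1) + 0.05 * rng.random((64, 64))
other_patient = rng.random((64, 64))

THRESHOLD = 0.80   # decision threshold, as in the histogram analysis above
for name, img in [("same", same_patient), ("other", other_patient)]:
    r = best_correlation(prev_img, img)
    print(name, round(r, 2),
          "flag as possible wrong patient" if r < THRESHOLD else "ok")
```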

  16. How to model mutually exclusive events based on independent causal pathways in Bayesian network models

    OpenAIRE

    Fenton, N.; Neil, M.; Lagnado, D.; Marsh, W.; Yet, B.; Constantinou, A.

    2016-01-01

    We show that existing Bayesian network (BN) modelling techniques cannot capture the correct intuitive reasoning in the important case when a set of mutually exclusive events need to be modelled as separate nodes instead of states of a single node. A previously proposed ‘solution’, which introduces a simple constraint node that enforces mutual exclusivity, fails to preserve the prior probabilities of the events, while other proposed solutions involve major changes to the original model. We pro...

  17. Progressive Impairment of Lactate-based Gluconeogenesis in the Huntington's Disease Mouse Model R6/2

    OpenAIRE

    Nielsen, Signe Marie Borch; Hasholt, Lis; Nørremølle, Anne; Josefsen, Knud

    2015-01-01

    Huntington's disease (HD) is a neurodegenerative illness, where selective neuronal loss in the brain caused by expression of mutant huntingtin protein leads to motor dysfunction and cognitive decline in addition to peripheral metabolic changes. In this study we confirm our previous observation of impairment of lactate-based hepatic gluconeogenesis in the transgenic HD mouse model R6/2 and determine that the defect manifests very early and progresses in severity with disease development, indic...

  18. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek; Münch, Andreas; Süli, Endre; Wagner, Barbara

    2016-01-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg-Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  19. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek

    2016-04-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg-Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  20. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.

  1. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    Science.gov (United States)

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
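
    The mechanism is easy to demonstrate by simulation: fit a LUR-style regression on a few monitoring sites, predict exposure everywhere, and compare the health-model slopes obtained with true and predicted exposure. All dimensions and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n_subjects, n_sites, n_preds = 2000, 40, 8

# True exposure depends on a few geographic covariates plus unexplained noise.
G = rng.normal(size=(n_subjects, n_preds))
true_coef = np.array([1.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
exposure = G @ true_coef + rng.normal(0, 1.0, n_subjects)

# LUR fitted on a small monitoring subset, then predicted everywhere.
sites = rng.choice(n_subjects, n_sites, replace=False)
lur_coef, *_ = np.linalg.lstsq(G[sites], exposure[sites], rcond=None)
predicted = G @ lur_coef

# Health model: the outcome truly depends on exposure with slope 0.5.
outcome = 0.5 * exposure + rng.normal(0, 1.0, n_subjects)
for name, x in [("true exposure", exposure), ("LUR prediction", predicted)]:
    slope = np.cov(x, outcome)[0, 1] / np.var(x)
    print(f"{name}: estimated slope = {slope:.3f} (truth 0.5)")
```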

  2. A Model-Based Joint Identification of Differentially Expressed Genes and Phenotype-Associated Genes

    Science.gov (United States)

    Seo, Minseok; Shin, Su-kyung; Kwon, Eun-Young; Kim, Sung-Eun; Bae, Yun-Jung; Lee, Seungyeoun; Sung, Mi-Kyung; Choi, Myung-Sook; Park, Taesung

    2016-01-01

    Over the last decade, many analytical methods and tools have been developed for microarray data. The detection of differentially expressed genes (DEGs) among different treatment groups is often a primary purpose of microarray data analysis. In addition, association studies investigating the relationship between genes and a phenotype of interest such as survival time are also popular in microarray data analysis. Phenotype association analysis provides a list of phenotype-associated genes (PAGs). However, it is sometimes necessary to identify genes that are both DEGs and PAGs. We consider the joint identification of DEGs and PAGs in microarray data analyses. The first approach we used was a naïve approach that detects DEGs and PAGs separately and then identifies the genes in an intersection of the list of PAGs and DEGs. The second approach we considered was a hierarchical approach that detects DEGs first and then chooses PAGs from among the DEGs or vice versa. In this study, we propose a new model-based approach for the joint identification of DEGs and PAGs. Unlike the previous two-step approaches, the proposed method identifies genes simultaneously that are DEGs and PAGs. This method uses standard regression models but adopts different null hypothesis from ordinary regression models, which allows us to perform joint identification in one-step. The proposed model-based methods were evaluated using experimental data and simulation studies. The proposed methods were used to analyze a microarray experiment in which the main interest lies in detecting genes that are both DEGs and PAGs, where DEGs are identified between two diet groups and PAGs are associated with four phenotypes reflecting the expression of leptin, adiponectin, insulin-like growth factor 1, and insulin. Model-based approaches provided a larger number of genes, which are both DEGs and PAGs, than other methods. Simulation studies showed that they have more power than other methods. Through analysis of
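
    The one-step model-based test is not specified here in enough detail to reproduce, but the naive two-step approach it is compared against can be sketched directly: test DEGs and PAGs separately, then intersect the significant lists. The data shapes and effect sizes below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_genes, n_samples = 500, 40
expr = rng.normal(size=(n_genes, n_samples))
group = np.repeat([0, 1], n_samples // 2)             # two diet groups
phenotype = rng.normal(size=n_samples)                # e.g. a leptin level
expr[:20, group == 1] += 1.5                          # genes 0-19: true DEGs
expr[10:30] += 0.8 * phenotype                        # genes 10-29: true PAGs

# Naive two-step approach from the abstract: test DEG and PAG separately,
# then intersect the significant lists (the one-step model is not shown).
p_deg = np.array([stats.ttest_ind(g[group == 0], g[group == 1])[1]
                  for g in expr])
p_pag = np.array([stats.pearsonr(g, phenotype)[1] for g in expr])
both = np.where((p_deg < 0.05) & (p_pag < 0.05))[0]
print("genes flagged as both DEG and PAG:", both)     # ideally 10..19
```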

  3. A Model-Based Joint Identification of Differentially Expressed Genes and Phenotype-Associated Genes.

    Directory of Open Access Journals (Sweden)

    Samuel Sunghwan Cho

    Full Text Available Over the last decade, many analytical methods and tools have been developed for microarray data. The detection of differentially expressed genes (DEGs among different treatment groups is often a primary purpose of microarray data analysis. In addition, association studies investigating the relationship between genes and a phenotype of interest such as survival time are also popular in microarray data analysis. Phenotype association analysis provides a list of phenotype-associated genes (PAGs. However, it is sometimes necessary to identify genes that are both DEGs and PAGs. We consider the joint identification of DEGs and PAGs in microarray data analyses. The first approach we used was a naïve approach that detects DEGs and PAGs separately and then identifies the genes in an intersection of the list of PAGs and DEGs. The second approach we considered was a hierarchical approach that detects DEGs first and then chooses PAGs from among the DEGs or vice versa. In this study, we propose a new model-based approach for the joint identification of DEGs and PAGs. Unlike the previous two-step approaches, the proposed method identifies genes simultaneously that are DEGs and PAGs. This method uses standard regression models but adopts different null hypothesis from ordinary regression models, which allows us to perform joint identification in one-step. The proposed model-based methods were evaluated using experimental data and simulation studies. The proposed methods were used to analyze a microarray experiment in which the main interest lies in detecting genes that are both DEGs and PAGs, where DEGs are identified between two diet groups and PAGs are associated with four phenotypes reflecting the expression of leptin, adiponectin, insulin-like growth factor 1, and insulin. Model-based approaches provided a larger number of genes, which are both DEGs and PAGs, than other methods. Simulation studies showed that they have more power than other methods

  4. Application of model-based and knowledge-based measuring methods as analytical redundancy

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.

    1997-01-01

    The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing, both for normal operation and for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies are established with the help of these methods; these analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method is demonstrated using the example of a fuzzy-supported observer, in which a classical linear observer is combined with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables, such as steam content and mixture level, within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example the existing non-linearities are classified and the verification of the model is explained. The advantages of the hybrid method in comparison to classical model-based measuring methods are demonstrated by the estimation results. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional fuzzy-logic structures within the model-based measuring methods. Methods are therefore presented which allow the conversion of these high-dimensional structures into two-dimensional fuzzy-logic structures. As an efficient solution of this problem, a method based on cascaded fuzzy controllers is presented. (author). 2 refs, 12 figs, 5 tabs
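
    A minimal scalar caricature of a fuzzy-supported observer is shown below: a linear observer whose gain is rescaled by a triangular membership function of the innovation. The plant, memberships, and noise levels are invented and far simpler than the pressure-vessel application described.

```python
import numpy as np

# Scalar plant x[k+1] = a*x[k] + b*u and measurement y = c*x; the observer
# gain L is adapted by a fuzzy-style rule on the innovation magnitude.
a, b, c, L = 0.95, 0.1, 1.0, 0.5

def fuzzy_scale(innovation):
    # "Small" innovation -> trust the model; "large" -> trust the data.
    small = max(0.0, 1.0 - abs(innovation) / 0.5)   # triangular membership
    return 0.3 * small + 1.0 * (1.0 - small)

rng = np.random.default_rng(8)
x, xhat, u = 1.0, 0.0, 0.2
for k in range(30):
    y = c * x + rng.normal(0, 0.02)                 # noisy measurement
    innov = y - c * xhat
    xhat = a * xhat + b * u + fuzzy_scale(innov) * L * innov
    x = a * x + b * u + rng.normal(0, 0.01)         # true plant with noise
print("final estimation error:", x - xhat)
```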

  5. Exploratory analysis regarding the domain definitions for computer based analytical models

    Science.gov (United States)

    Raicu, A.; Oanta, E.; Barhalescu, M.

    2017-08-01

    Our previous computer-based studies dedicated to structural problems using analytical methods defined the composite cross section of a beam as the result of Boolean operations with so-called 'simple' shapes. Through generalisation, the class of 'simple' shapes came to include areas bounded by curves approximated using spline functions and areas approximated as polygons. However, particular definitions lead to particular solutions. In order to move beyond these limitations, we conceived a general definition of cross sections, which are now considered calculus domains consisting of several subdomains. The corresponding set of input data uses complex parameterizations. This new vision allows us to naturally assign a general number of attributes to the subdomains, so that new phenomena that use map-wise information, such as metal alloy equilibrium diagrams, can be modelled. The hierarchy of the input data text files, which use the comma-separated-value format, and their structure are also presented and discussed in the paper. This new approach allows us to reuse the concepts and part of the data processing software instruments already developed. The software to be developed subsequently will be modularised and generalised so that it can be used in upcoming projects that require rapid development of computer-based models.

  6. The Comparison of Distributed P2P Trust Models Based on Quantitative Parameters in the File Downloading Scenarios

    Directory of Open Access Journals (Sweden)

    Jingpei Wang

    2016-01-01

    Varied P2P trust models have been proposed recently; it is necessary to develop an effective method to evaluate these trust models in order to resolve both the commonality issue (guiding newly generated trust models in theory) and the individuality issue (assisting a decision maker in choosing an optimal trust model to implement in a specific context). A new method for analyzing and comparing P2P trust models, based on hierarchical parameter quantization in file downloading scenarios, is proposed in this paper. Several parameters are extracted from the functional attributes and quality features of the trust relationship, as well as from the requirements of the specific network context and the evaluators. Several distributed P2P trust models are analyzed quantitatively, with the extracted parameters modeled into a hierarchical model. A fuzzy inference method is applied to the hierarchical parameter model to fuse the evaluated values of the candidate trust models, and the relative optimum is then selected based on the sorted overall quantitative values. Finally, analyses and simulations are performed. The results show that the proposed method is reasonable and effective compared with previous algorithms.
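
    A crisp, simplified stand-in for the described evaluation pipeline (the paper uses fuzzy inference; the model names, parameters and weights below are hypothetical): parameter scores are fused up a two-level hierarchy and the candidates are ranked by the overall value.

        # Hypothetical parameter scores in [0, 1] for three candidate P2P trust
        # models, arranged in a two-level hierarchy (groups of parameters).
        scores = {
            "TrustModelA": {"function": {"accuracy": 0.8, "convergence": 0.7},
                            "quality":  {"robustness": 0.6, "overhead": 0.5}},
            "TrustModelB": {"function": {"accuracy": 0.7, "convergence": 0.8},
                            "quality":  {"robustness": 0.7, "overhead": 0.6}},
            "TrustModelC": {"function": {"accuracy": 0.9, "convergence": 0.6},
                            "quality":  {"robustness": 0.5, "overhead": 0.7}},
        }
        group_weights = {"function": 0.6, "quality": 0.4}  # evaluator-supplied context weights

        def overall(model):
            """Two-level fusion: mean inside each parameter group, then a
            weighted sum across groups (a crisp stand-in for fuzzy inference)."""
            total = 0.0
            for group, w in group_weights.items():
                vals = scores[model][group]
                total += w * sum(vals.values()) / len(vals)
            return total

        for m in sorted(scores, key=overall, reverse=True):  # relative optimum first
            print(f"{m}: {overall(m):.3f}")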

  7. Alternate source term models for Yucca Mountain performance assessment based on natural analog data and secondary mineral solubility

    International Nuclear Information System (INIS)

    Murphy, W.M.; Codell, R.B.

    1999-01-01

    Performance assessment calculations for the proposed high level radioactive waste repository at Yucca Mountain, Nevada, were conducted using the Nuclear Regulatory Commission Total-System Performance Assessment (TPA 3.2) code to test conceptual models and parameter values for the source term based on data from the Pena Blanca, Mexico, natural analog site and based on a model for coprecipitation and solubility of secondary schoepite. In previous studies the value for the maximum constant oxidative alteration rate of uraninite at the Nopal I uranium body at Pena Blanca was estimated. Scaling this rate to the mass of uranium for the proposed Yucca Mountain repository yields an oxidative alteration rate of 22 kg/y, which was assumed to be an upper limit on the release rate from the proposed repository. A second model was developed assuming releases of radionuclides are based on the solubility of secondary schoepite as a function of temperature and solution chemistry. Releases of uranium are given by the product of uranium concentrations at equilibrium with schoepite and the flow of water through the waste packages. For both models, radionuclides other than uranium and those in the cladding and gap fraction were modeled to be released at a rate proportional to the uranium release rate, with additional elemental solubility limits applied. Performance assessment results using the Pena Blanca oxidation rate and schoepite solubility models for Yucca Mountain were compared to the TPA 3.2 base case model, in which release was based on laboratory studies of spent fuel dissolution, cladding and gap release, and solubility limits. Doses calculated using the release rate based on natural analog data and the schoepite solubility models were smaller than doses generated using the base case model. These results provide a degree of confidence in safety predictions using the base case model and an indication of how conservatism in the base case model may be reduced in future analyses.
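
    A sketch of the arithmetic behind the second source-term model, release rate = equilibrium uranium concentration at schoepite saturation times water flow through the waste packages; the solubility fit and all numbers below are placeholders, not values from the paper.

        import math

        def schoepite_solubility_mol_per_L(temp_C):
            """Placeholder Arrhenius-like fit for the U concentration at
            equilibrium with secondary schoepite; coefficients are illustrative."""
            return 1e-5 * math.exp(-2000.0 * (1.0 / (temp_C + 273.15) - 1.0 / 298.15))

        def uranium_release_kg_per_year(temp_C, water_flux_L_per_year):
            M_U = 0.238  # molar mass of uranium, kg/mol
            c = schoepite_solubility_mol_per_L(temp_C)
            return c * water_flux_L_per_year * M_U

        # Release = solubility * flow through the waste packages (assumed 1e6 L/y)
        print(uranium_release_kg_per_year(45.0, 1.0e6))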

  8. Alternate source term models for Yucca Mountain performance assessment based on natural analog data and secondary mineral solubility

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, W.M.; Codell, R.B.

    1999-07-01

    Performance assessment calculations for the proposed high level radioactive waste repository at Yucca Mountain, Nevada, were conducted using the Nuclear Regulatory Commission Total-System Performance Assessment (TPA 3.2) code to test conceptual models and parameter values for the source term based on data from the Pena Blanca, Mexico, natural analog site and based on a model for coprecipitation and solubility of secondary schoepite. In previous studies the value for the maximum constant oxidative alteration rate of uraninite at the Nopal I uranium body at Pena Blanca was estimated. Scaling this rate to the mass of uranium for the proposed Yucca Mountain repository yields an oxidative alteration rate of 22 kg/y, which was assumed to be an upper limit on the release rate from the proposed repository. A second model was developed assuming releases of radionuclides are based on the solubility of secondary schoepite as a function of temperature and solution chemistry. Releases of uranium are given by the product of uranium concentrations at equilibrium with schoepite and the flow of water through the waste packages. For both models, radionuclides other than uranium and those in the cladding and gap fraction were modeled to be released at a rate proportional to the uranium release rate, with additional elemental solubility limits applied. Performance assessment results using the Pena Blanca oxidation rate and schoepite solubility models for Yucca Mountain were compared to the TPA 3.2 base case model, in which release was based on laboratory studies of spent fuel dissolution, cladding and gap release, and solubility limits. Doses calculated using the release rate based on natural analog data and the schoepite solubility models were smaller than doses generated using the base case model. These results provide a degree of confidence in safety predictions using the base case model and an indication of how conservatism in the base case model may be reduced in future analyses.

  9. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10³ cells and 1.2×10⁶ molecules. The model produces cell migration patterns that are comparable to laboratory observations.
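
    A toy illustration of the hybridization, assuming nothing beyond the abstract: cells are discrete agents moving by an if-then rule, while the chemoattractant is a continuous quantity updated by a finite-difference diffusion step.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 64
        field = np.zeros((N, N))          # molecule quantities on a grid
        field[N // 2, N // 2] = 100.0     # chemoattractant source
        cells = rng.integers(0, N, size=(50, 2))   # agent positions

        def diffuse(f, D=0.2):
            # Explicit finite-difference step of the diffusion PDE (periodic boundary)
            lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
            return f + D * lap

        def step_cell(pos, f):
            # Agent rule: move to the neighboring site with the highest concentration
            moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
            best = max(moves, key=lambda m: f[(pos[0] + m[0]) % N, (pos[1] + m[1]) % N])
            return (pos + best) % N

        for _ in range(100):
            field = diffuse(field)
            field[N // 2, N // 2] += 10.0         # continuous source term
            cells = np.array([step_cell(p, field) for p in cells])

        print(np.abs(cells - N // 2).mean())      # mean distance to source shrinks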

  10. An Agent-Based Monetary Production Simulation Model

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    2006-01-01

    An agent-based simulation model programmed in Objective Borland Pascal. The program and source code are downloadable.

  11. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Models (ROMs) can be used as surrogates for prohibitively expensive simulations to model flow behavior over long time periods. ROM construction is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial-intelligence-based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow at future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep-learning-based ROM approaches are elucidated and further developments discussed.
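
    A minimal PyTorch sketch of the described setup, with synthetic damped oscillations standing in for the POD time coefficients of a real flow; network size and training settings are arbitrary.

        import torch, torch.nn as nn

        torch.manual_seed(0)
        n_modes, window = 4, 16
        t = torch.linspace(0, 20 * torch.pi, 2000)
        # Stand-in "POD time coefficients": damped oscillations at different frequencies
        coeffs = torch.stack([torch.sin((k + 1) * t) * torch.exp(-0.01 * k * t)
                              for k in range(n_modes)], dim=1)

        # Supervised pairs: a window of past coefficients -> coefficients one step ahead
        X = torch.stack([coeffs[i:i + window] for i in range(len(t) - window - 1)])
        Y = torch.stack([coeffs[i + window] for i in range(len(t) - window - 1)])

        class PodLSTM(nn.Module):
            def __init__(self, n_modes, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(n_modes, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_modes)
            def forward(self, x):
                out, _ = self.lstm(x)
                return self.head(out[:, -1])      # predict from the last hidden state

        model = PodLSTM(n_modes)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(200):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X), Y)
            loss.backward()
            opt.step()
        print(float(loss))   # one-step-ahead prediction error of the ROM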

  12. Space vector-based modeling and control of a modular multilevel converter in HVDC applications

    DEFF Research Database (Denmark)

    Bonavoglia, M.; Casadei, G.; Zarri, L.

    2013-01-01

    Modular multilevel converter (MMC) is an emerging multilevel topology for high-voltage applications that has been developed in recent years. In this paper, the modeling and the control of MMCs are restated in terms of space vectors, which may allow a deeper understanding of the converter behavior. As a result, a control scheme for three-phase MMCs based on the previous theoretical analysis is presented. Numerical simulations are used to test its feasibility.
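
    For reference, the space-vector mapping that such a restatement starts from (the standard amplitude-invariant transform, not the paper's MMC control scheme):

        import numpy as np

        def space_vector(va, vb, vc):
            """Complex space vector of three phase quantities:
            v = (2/3) * (va + a*vb + a^2*vc), with a = exp(j*2*pi/3)."""
            a = np.exp(2j * np.pi / 3)
            return (2.0 / 3.0) * (va + a * vb + a * a * vc)

        # A balanced three-phase set maps to a vector of constant magnitude
        # rotating at the electrical angle wt.
        for wt in np.linspace(0, 2 * np.pi, 5):
            v = space_vector(np.cos(wt),
                             np.cos(wt - 2 * np.pi / 3),
                             np.cos(wt + 2 * np.pi / 3))
            print(f"|v| = {abs(v):.3f}, angle = {np.angle(v):+.3f} rad")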

  13. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever-increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modelling, design and specification of information systems, and multimedia information modelling.

  14. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  15. Model-Based Reasoning in Humans Becomes Automatic with Training.

    Directory of Open Access Journals (Sweden)

    Marcos Economides

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free RL but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits the use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  16. Biased ART: a neural architecture that shifts attention toward previously disregarded features following an incorrect prediction.

    Science.gov (United States)

    Carpenter, Gail A; Gaddam, Sai Chaitanya

    2010-04-01

    Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  17. Analysis of the Explanatory Variables of the Differences in Perceptions of Cyberbullying: A Role-Based-Model Approach.

    Science.gov (United States)

    Fernández-Antelo, Inmaculada; Cuadrado-Gordillo, Isabel

    2018-04-01

    The controversies that exist regarding the delimitation of the cyberbullying construct demonstrate the need for further research focused on determining the criteria that shape the structure of adolescents' perceptions of this phenomenon and on seeking explanations of this behavior. The objectives of this study were to (a) construct possible explanatory models of the perception of cyberbullying by identifying and relating the criteria that form this construct and (b) analyze the influence of previous cyber-victimization and cyber-aggression experiences on the construction of explanatory models of the perception of cyberbullying. The sample consisted of 2,148 adolescents (49.1% girls; SD = 0.5) aged from 12 to 16 years (M = 13.9 years; SD = 1.2). The results show that previous cyber-victimization and cyber-aggression experiences lead to major differences in the explanatory models used to interpret cyber-abusive behavior as cyberbullying episodes, as social relationship mechanisms, or as a revenge reaction. We note that the aggressors' explanatory model is based primarily on a strong reciprocal relationship between imbalance of power and intentionality, which functions as a link promoting indirect causal relationships of the anonymity and repetition factors with the cyberbullying construct. The victims' perceptual structure is based on three criteria, imbalance of power, intentionality, and publicity, where the key factor is the intention to harm. These results allow the design of more effective prevention and intervention measures, closely tailored to addressing directly the factors considered to be predictors of risk.

  18. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hardware.

  19. Validation of a simple dynamic thermal performance characterization model based on the piston flow concept for flat-plate solar collectors

    DEFF Research Database (Denmark)

    Deng, Jie; Yang, Ming; Ma, Rongjiang

    2016-01-01

    The prediction performance of a simple dynamic model based on the first-order difference method is compared to that of the numerical solution of the collector ordinary differential equation (ODE) model using the fourth-order Runge-Kutta method. The time constant in the improved thermal inertia model (TIM), based on the closed-form solution presented by Deng et al. (2012), turns out by analytical derivation to be the collector static response time constant τC. The nonlinear least squares method is applied to determine the characteristic parameters of a flat-plate solar air collector previously tested by the authors.
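
    A sketch of the first-order difference form of such a dynamic collector model, with the static response time constant τC; the steady-state energy balance and all parameter values are illustrative assumptions.

        # First-order difference form of a dynamic collector model:
        #   T_out[k+1] = T_out[k] + (dt / tau_C) * (T_ss[k] - T_out[k])
        # where T_ss is the steady-state outlet temperature the collector would
        # reach for the current inputs. Values below are illustrative only.
        def steady_state_outlet(T_in, G, T_amb, eta0=0.75, UL=6.0, A=2.0, mdot_cp=80.0):
            """Steady-state balance: useful gain = eta0*G*A - UL*A*(T_in - T_amb)."""
            q_useful = eta0 * G * A - UL * A * (T_in - T_amb)
            return T_in + q_useful / mdot_cp

        def simulate(T0, tau_C=120.0, dt=10.0, steps=120):
            T_out, trace = T0, []
            for k in range(steps):
                G = 800.0 if k > 10 else 0.0            # a step in irradiance
                T_ss = steady_state_outlet(T_in=20.0, G=G, T_amb=15.0)
                T_out += (dt / tau_C) * (T_ss - T_out)  # first-order lag with tau_C
                trace.append(T_out)
            return trace

        print(simulate(T0=20.0)[-1])   # approaches the steady-state outlet temperature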

  20. Business Value of Information Technology Service Quality Based on Probabilistic Business-Driven Model

    Directory of Open Access Journals (Sweden)

    Jaka Sembiring

    2015-08-01

    The business value of information technology (IT) services is often difficult to assess, especially from the point of view of a non-IT manager. This condition can severely impact organizational IT strategic decisions. Various approaches have been proposed to quantify the business value, but some are trapped in technical complexity while others misguide managers into directly and subjectively judging technical entities outside their domain of expertise. This paper describes a method for properly capturing both perspectives based on a probabilistic business-driven model. The proposed model presents a procedure to calculate the business value of IT services. The model also covers IT security services and their business value, an important aspect of IT services that is not covered in previously published research. The impact of changes in the quality of IT services on business value is also discussed. A simulation and a case illustration are provided to show the possible application of the proposed model for a simple business process in an enterprise.

  1. A model-based exploration of the role of pattern generating circuits during locomotor adaptation.

    Science.gov (United States)

    Marjaninejad, Ali; Finley, James M

    2016-08-01

    In this study, we used a model-based approach to explore the potential contributions of central pattern generating circuits (CPGs) during adaptation to external perturbations during locomotion. We constructed a neuromechanical model of locomotion using a reduced-phase CPG controller and an inverted-pendulum mechanical model. Two different forms of locomotor adaptation were examined: split-belt treadmill adaptation and adaptation to a unilateral, elastic force field. For each simulation, we first examined the effects of phase resetting and of varying the model's initial conditions on the resulting adaptation. After evaluating the effect of phase resetting on the adaptation of step-length symmetry, we examined the extent to which the results from these simple models could explain previous experimental observations. We found that adaptation of step-length symmetry during split-belt treadmill walking could be reproduced using our model, but the model failed to replicate patterns of adaptation observed in response to force-field perturbations. Given that spinal animal models can adapt to both of these types of perturbations, our findings suggest that there may be distinct features of pattern generating circuits that mediate each form of adaptation.
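
    A toy version of the CPG ingredient, not the authors' neuromechanical model: two coupled phase oscillators settle into anti-phase, and a phase-reset event shows how the rhythm recovers after a perturbation.

        import numpy as np

        def cpg_step(phases, dt=0.01, omega=2 * np.pi, k=2.0):
            """Two coupled phase oscillators, one per leg, coupled toward anti-phase:
            dphi_i/dt = omega + k * sin(phi_j - phi_i - pi)."""
            p1, p2 = phases
            dp1 = omega + k * np.sin(p2 - p1 - np.pi)
            dp2 = omega + k * np.sin(p1 - p2 - np.pi)
            return np.array([p1 + dt * dp1, p2 + dt * dp2])

        phases = np.array([0.0, np.pi / 2])       # start away from anti-phase
        for step in range(2000):
            phases = cpg_step(phases)
            if step == 1000:                       # perturbation: reset one oscillator,
                phases[0] = 0.0                    # mimicking foot-contact phase resetting

        diff = (phases[1] - phases[0]) % (2 * np.pi)
        print(diff)                                # relaxes back toward pi (anti-phase)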

  2. Spatial variability in nutrient transport by HUC8, state, and subbasin based on Mississippi/Atchafalaya River Basin SPARROW models

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.; Schwarz, Gregory E.

    2014-01-01

    Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. With geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and monitored loads throughout the MARB, SPAtially Referenced Regression On Watershed attributes (SPARROW) watershed models were constructed specifically for the MARB, which reduced simulation errors from previous models. Based on these models, N loads/yields were highest from the central part (centered over Iowa and Indiana) of the MARB (Corn Belt), and the highest P yields were scattered throughout the MARB. Spatial differences in yields from previous studies resulted from different descriptions of the dominant sources (N yields are highest with crop-oriented agriculture and P yields are highest with crop and animal agriculture and major WWTPs) and different descriptions of downstream transport. Delivered loads/yields from the MARB SPARROW models are used to rank subbasins, states, and eight-digit Hydrologic Unit Code basins (HUC8s) by N and P contributions and then rankings are compared with those from other studies. Changes in delivered yields result in an average absolute change of 1.3 (N) and 1.9 (P) places in state ranking and 41 (N) and 69 (P) places in HUC8 ranking from those made with previous national-scale SPARROW models. This information may help managers decide where efforts could have the largest effects (highest ranked areas) and thus reduce hypoxia in the Gulf of Mexico.

  3. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather-temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models, owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three models proposed in the literature (the Annandale, Allen and Goodin models) for estimating the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case-study location (lat. 30°51′N, long. 29°34′E), and the general formulae of the newly suggested models are then examined for ten different locations around Egypt. Moreover, local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical error measures are utilized to evaluate the performance of these models and identify the most accurate one. The obtained results show that the local formula of the most accurate new model provides good predictions of global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulae of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. Quick and accurate estimation of global solar radiation using this approach can be employed in the design and performance evaluation of solar energy systems.
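
    For orientation, the classic Hargreaves-type form that this family of temperature-based models builds on, H = a * sqrt(Tmax - Tmin) * H0, with H0 the extraterrestrial radiation; the coefficient and temperatures below are illustrative, not the paper's fitted values.

        import math

        def extraterrestrial_radiation(lat_deg, day_of_year):
            """Daily extraterrestrial radiation H0 (MJ/m^2/day), FAO-56 formulas."""
            Gsc = 0.0820                                   # solar constant, MJ/m^2/min
            phi = math.radians(lat_deg)
            dr = 1 + 0.033 * math.cos(2 * math.pi * day_of_year / 365)
            delta = 0.409 * math.sin(2 * math.pi * day_of_year / 365 - 1.39)
            ws = math.acos(-math.tan(phi) * math.tan(delta))   # sunset hour angle
            return (24 * 60 / math.pi) * Gsc * dr * (
                ws * math.sin(phi) * math.sin(delta) +
                math.cos(phi) * math.cos(delta) * math.sin(ws))

        def hargreaves_radiation(tmax, tmin, lat_deg, day_of_year, a=0.16):
            """Hargreaves-type temperature-based estimate: H = a*sqrt(Tmax-Tmin)*H0.
            The coefficient a is calibrated per site (0.16 is a common inland default)."""
            H0 = extraterrestrial_radiation(lat_deg, day_of_year)
            return a * math.sqrt(tmax - tmin) * H0

        # Example: a June day at the study latitude (~30.85 N), assumed temperatures
        print(hargreaves_radiation(tmax=32.0, tmin=21.0, lat_deg=30.85, day_of_year=172))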

  4. Agent-based model with multi-level herding for complex financial systems

    Science.gov (United States)

    Chen, Jun-Jie; Tan, Lei; Zheng, Bo

    2015-02-01

    In complex financial systems, the sector structure and volatility clustering are respectively important features of the spatial and temporal correlations. However, the microscopic generation mechanism of the sector structure is not yet understood. Especially, how to produce these two features in one model remains challenging. We introduce a novel interaction mechanism, i.e., the multi-level herding, in constructing an agent-based model to investigate the sector structure combined with volatility clustering. According to the previous market performance, agents trade in groups, and their herding behavior comprises the herding at stock, sector and market levels. Further, we propose methods to determine the key model parameters from historical market data, rather than from statistical fitting of the results. From the simulation, we obtain the sector structure and volatility clustering, as well as the eigenvalue distribution of the cross-correlation matrix, for the New York and Hong Kong stock exchanges. These properties are in agreement with the empirical ones. Our results quantitatively reveal that the multi-level herding is the microscopic generation mechanism of the sector structure, and provide new insight into the spatio-temporal interactions in financial systems at the microscopic level.

  5. Flow Formulation-based Model for the Curriculum-based Course Timetabling Problem

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian Fink; Kristiansen, Simon; Sørensen, Matias

    2015-01-01

    In this work we present a new mixed integer programming formulation for the curriculum-based course timetabling problem. We show that the model contains an underlying network model, obtained by dividing the problem into two models and then connecting them back into one model using a maximum flow problem. This decreases the number of integer variables significantly and improves the performance compared to the basic formulation. It also shows competitiveness with other approaches based on mixed integer programming from the literature and improves the currently best known lower bound on one data instance in the benchmark data set from the second international timetabling competition.

  6. Model-based reasoning technology for the power industry

    International Nuclear Information System (INIS)

    Touchton, R.A.; Subramanyan, N.S.; Naser, J.A.

    1991-01-01

    This paper reports on model-based reasoning, which refers to an expert-system implementation methodology that uses a model of the system being reasoned about. Model-based representation and reasoning techniques offer many advantages and are highly suitable for domains where the individual components, their interconnections, and their behavior are well known. Technology Applications, Inc. (TAI), under contract to the Electric Power Research Institute (EPRI), investigated the use of model-based reasoning in the power industry, including the nuclear power industry. During this project, a model-based monitoring and diagnostic tool, called ProSys, was developed. An alarm prioritization system was also developed as a demonstration prototype

  7. Multidimensional modelling of anaerobic granules

    DEFF Research Database (Denmark)

    Picioreanu, C.; Batstone, Damien J.; van Loosdrecht, M.C.M.

    2005-01-01

    A multispecies, two- and three-dimensional model was developed, based on a previously published planar biofilm model and the biochemical structure of the ADM1. Several soluble substrates diffuse and react in the granule. Local pH is calculated from acid-base equilibria and charge balance.

  8. Total hip arthroplasty after a previous pelvic osteotomy: A systematic review and meta-analysis.

    Science.gov (United States)

    Shigemura, T; Yamamoto, Y; Murata, Y; Sato, T; Tsuchiya, R; Wada, Y

    2018-06-01

    There are several reports regarding total hip arthroplasty (THA) after a previous pelvic osteotomy (PO). However, to our knowledge, no formal systematic review and meta-analysis has been published to summarize the clinical results of THA after a previous PO. We therefore conducted a systematic review and meta-analysis of the results of THA after a previous PO, focusing on the following question: does a previous PO affect the results of subsequent THA, such as clinical outcomes, operative time, operative blood loss, and radiological parameters? Using PubMed, Web of Science, and Cochrane Library, we searched for relevant original papers. The pooling of data was performed using RevMan software (version 5.3, Cochrane Collaboration, Oxford, UK). A p-value < 0.05 was considered significant. When I² > 50%, significant heterogeneity was assumed and a random-effects model was applied for the meta-analysis; a fixed-effects model was applied in the absence of significant heterogeneity. Eleven studies were included in this meta-analysis. The pooled results indicated that there was no significant difference in postoperative Merle d'Aubigné-Postel score (I² = 0%, SMD = -0.15, 95% CI: -0.36 to 0.06, p = 0.17), postoperative Harris hip score (I² = 60%, SMD = -0.23, 95% CI: -0.50 to 0.05, p = 0.10), operative time (I² = 86%, SMD = 0.37, 95% CI: -0.09 to 0.82, p = 0.11), operative blood loss (I² = 82%, SMD = 0.23, 95% CI: -0.17 to 0.63, p = 0.25), or cup abduction angle (I² = 43%, SMD = -0.08, 95% CI: -0.25 to 0.09, p = 0.38) between THA with and without a previous PO. However, the cup anteversion angle of THA with a previous PO was significantly smaller than that without a previous PO (I² = 77%, SMD = -0.63, 95% CI: -1.13 to -0.13, p = 0.01). A previous PO did not affect the results of subsequent THA, except for cup anteversion. Because of the low-quality evidence currently available, high-quality randomized controlled trials are required.
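
    A small numeric sketch of the pooling rule described (inverse-variance fixed-effect model unless I² > 50%, then DerSimonian-Laird random effects); the study effects below are toy numbers, not the review's data.

        import numpy as np

        def pool(effects, variances):
            """Inverse-variance pooling with the I^2 > 50% rule: fixed-effect
            model unless heterogeneity is significant, then DerSimonian-Laird
            random effects."""
            y, v = np.asarray(effects), np.asarray(variances)
            w = 1.0 / v
            fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - fixed) ** 2)                 # Cochran's Q
            df = len(y) - 1
            I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
            if I2 <= 50.0:
                return fixed, 1.0 / np.sum(w), I2, "fixed"
            tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            w_re = 1.0 / (v + tau2)                          # random-effects weights
            return np.sum(w_re * y) / np.sum(w_re), 1.0 / np.sum(w_re), I2, "random"

        # Toy standardized mean differences from three hypothetical studies
        smd, var = [-0.4, -0.7, -0.9], [0.04, 0.05, 0.03]
        est, var_est, I2, model = pool(smd, var)
        lo, hi = est - 1.96 * var_est ** 0.5, est + 1.96 * var_est ** 0.5
        print(f"{model}: SMD={est:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2={I2:.0f}%")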

  9. Structure-Based Turbulence Model

    National Research Council Canada - National Science Library

    Reynolds, W

    2000-01-01

    Maire carried out this work as part of his PhD research. During the award period we began to explore ways to simplify the structure-based modeling so that it could be used in repetitive engineering calculations...

  10. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion patterns of infants using computer vision. Most of these studies are based on 2D images, but a few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants, and we show that the extension improves the tracking results. A 3D model is constructed.

  11. Preliminary Multivariable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of the previously published models is tested, cost estimating relationships which are and are not significant cost drivers are identified, and interrelationships between variables are explored.

  12. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source-localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on these test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of prosthetic control.

  13. Model-based Prognostics with Concurrent Damage Progression Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — Model-based prognostics approaches rely on physics-based models that describe the behavior of systems and their components. These models must account for the several different damage processes occurring simultaneously within a component.

  14. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    In this paper we introduce a model-based background subtraction approach which exploits prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data, and the data is stored for fast access in a hierarchical structure.

  15. The association between previous and future severe exacerbations of chronic obstructive pulmonary disease: Updating the literature using robust statistical methodology.

    Science.gov (United States)

    Sadatsafavi, Mohsen; Xie, Hui; Etminan, Mahyar; Johnson, Kate; FitzGerald, J Mark

    2018-01-01

    There is minimal evidence on the extent to which the occurrence of a severe acute exacerbation of COPD that results in hospitalization affects the subsequent disease course. Previous studies on this topic did not generate causally interpretable estimates. Our aim was to use corrected methodology to update previously reported estimates of the associations between previous and future exacerbations in these patients. Using administrative health data in British Columbia, Canada (1997-2012), we constructed a cohort of patients with at least one severe exacerbation, defined as an episode of inpatient care with a main diagnosis of COPD based on International Classification of Diseases (ICD) codes. We applied a random-effects 'joint frailty' survival model that is specifically developed for the analysis of recurrent events in the presence of the competing risk of death and of heterogeneity among individuals in their rate of events. Previous severe exacerbations entered the model as dummy-coded time-dependent covariates, and the model was adjusted for several observable patient and disease characteristics. 35,994 individuals (mean age at baseline 73.7, 49.8% female, average follow-up 3.21 years) contributed 34,271 severe exacerbations during follow-up. The first event was associated with a hazard ratio (HR) of 1.75 (95% CI 1.69-1.82) for the risk of future severe exacerbations. This risk decreased to HR = 1.36 (95% CI 1.30-1.42) for the second event and to 1.18 (95% CI 1.12-1.25) for the third event. The first two severe exacerbations that occurred during follow-up were also significantly associated with increased risk of all-cause mortality. There was substantial heterogeneity in the individual-specific rate of severe exacerbations: even after adjusting for observable characteristics, individuals in the 97.5th percentile of exacerbation rate had a 5.6 times higher rate of severe exacerbations than those in the 2.5th percentile.

  16. Model-based optimization biofilm based systems performing autotrophic nitrogen removal using the comprehensive NDHA model

    DEFF Research Database (Denmark)

    Valverde Pérez, Borja; Ma, Yunjie; Morset, Martin

    Completely autotrophic nitrogen removal (CANR) can be obtained in single-stage biofilm-based bioreactors. However, their environmental footprint is compromised by elevated N2O emissions. We developed a novel, spatially explicit biochemical process model of biofilm-based CANR systems that predicts N2O emissions.

  17. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components. © 2011 ACM.

  18. Multi-Domain Modeling Based on Modelica

    Directory of Open Access Journals (Sweden)

    Liu Jun

    2016-01-01

    With the application of simulation technology to large-scale, multi-field problems, multi-domain unified modeling has become an effective way to solve these problems. This paper introduces several basic methods and advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/Mworks is a newly developed simulation platform with the features of an object-oriented, non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. This article demonstrates a single-degree-of-freedom mechanical vibration system based on the Modelica language's special connection mechanism in Mworks. This multi-domain modeling approach is simple and feasible, offers high reusability, stays close to the physical system, and has many other advantages.

  19. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    Science.gov (United States)

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high-field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure, respectively), which we estimated with a computational algorithm for pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height or salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement in the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods, previously employed to examine the representation and processing of acoustic sound features in the human auditory system, to the representation and processing of a relevant perceptual attribute of sound.

  20. Testing R&D-Based Endogenous Growth Models

    DEFF Research Database (Denmark)

    Kruse-Andersen, Peter Kjær

    2017-01-01

    R&D-based growth models are tested using US data for the period 1953-2014. A general growth model is developed which nests the model varieties of interest. The model implies a cointegrating relationship between multifactor productivity, research intensity, and employment. This relationship is estimated using cointegrated VAR models. The results provide evidence against the widely used fully endogenous variety and in favor of the semi-endogenous variety. Forecasts based on the empirical estimates suggest that the slowdown in US productivity growth will continue.

  1. PV panel model based on datasheet values

    DEFF Research Database (Denmark)

    Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro

    2007-01-01

    This work presents the construction of a model for a PV panel using the single-diode, five-parameter model, based exclusively on datasheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell are presented. Based on these equations, a PV panel model, which is able to predict the panel behavior in different temperature and irradiance conditions, is built and tested.
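
    A sketch of the single-diode, five-parameter equation solved for the panel current; in the paper the parameters are derived from datasheet values, whereas the numbers below are generic placeholders.

        import math

        def panel_current(V, Iph=8.5, I0=1e-9, n=1.0, Rs=0.3, Rsh=300.0,
                          Ns=60, T=298.15):
            """Single-diode, five-parameter model solved for the panel current I from
            Iph - I0*(exp((V + I*Rs)/(Ns*n*Vt)) - 1) - (V + I*Rs)/Rsh - I = 0
            by bisection (the equation is implicit in I). Placeholder values."""
            k, q = 1.380649e-23, 1.602176634e-19
            a = Ns * n * k * T / q                      # modified ideality factor
            def f(I):
                return Iph - I0 * (math.exp((V + I * Rs) / a) - 1) \
                           - (V + I * Rs) / Rsh - I
            lo, hi = -2.0, Iph + 1.0                    # bracket: f(lo) > 0 > f(hi)
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
            return 0.5 * (lo + hi)

        # Sweep the I-V curve and report the maximum power point
        V_grid = [0.1 * i for i in range(0, 360)]
        Vmp, Pmp = max(((v, v * panel_current(v)) for v in V_grid), key=lambda p: p[1])
        print(f"Vmp ~ {Vmp:.1f} V, Pmp ~ {Pmp:.1f} W")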

  2. Springer handbook of model-based science

    CERN Document Server

    Bertolotti, Tommaso

    2017-01-01

    The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...

  3. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for generating virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and offer different methods suitable for image-based 3D city modeling. A literature study shows that, to date, no such complete comparative study is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what can and cannot be done with each software package. In conclusion, each package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results.

  4. Model-Based Integration and Interpretation of Data

    DEFF Research Database (Denmark)

    Petersen, Johannes

    2004-01-01

    Data integration and interpretation play a crucial role in supervisory control. The paper defines a set of generic inference steps for the data integration and interpretation process, based on a three-layer model of system representations. The three-layer model is used to clarify the combination of constraint and object-centered representations of the work domain, throwing new light on the basic principles underlying the data integration and interpretation process of Rasmussen's abstraction hierarchy as well as of other model-based approaches combining constraint and object-centered representations.

  5. CEAI: CCM-based email authorship identification model

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2013-11-01

    In this paper we present a model for email authorship identification (EAI) that employs a cluster-based classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some more interesting and effective features for email authorship identification (e.g., the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included content features selected by Info Gain feature selection. It is observed that the use of such features in the authorship identification process has a positive impact on the accuracy of the authorship identification task. We performed experiments to justify our arguments and compared the results with other baseline models. Experimental results reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. (2010, 2013) [1,2]. The proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors on the Enron dataset, while 89.5% accuracy was achieved on the authors' own real email dataset. The results on the Enron dataset were achieved for quite a large number of authors compared to the models proposed by Iqbal et al. [1,2].

  6. 49 CFR 173.23 - Previously authorized packaging.

    Science.gov (United States)

    2010-10-01

    49 CFR 173.23 (Title 49, Transportation; revised as of 2010-10-01): Previously authorized packaging. Subpart: Requirements for Shipments and Packagings; Preparation of Hazardous Materials for Transportation. (a) When the regulations specify a packaging with a specification marking...

  7. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    Science.gov (United States)

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the ongoing effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which serves as a surrogate measure of the safety level of a two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained with an innovative GPS data collection method based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation of the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measures based on global and local operating speeds were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also indexes that consider both local speed decelerations and speeds over posted speeds. For the development of the consistency model, the crash frequency of each study site was considered, which allows the number of crashes on a road segment to be estimated by calculating its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Guidelines for visualizing and annotating rule-based models.

    Science.gov (United States)

    Chylek, Lily A; Hu, Bin; Blinov, Michael L; Emonet, Thierry; Faeder, James R; Goldstein, Byron; Gutenkunst, Ryan N; Haugh, Jason M; Lipniacki, Tomasz; Posner, Richard G; Yang, Jin; Hlavacek, William S

    2011-10-01

    Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme-substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FcεRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models.

  9. Model-Based Design and Evaluation of a Brachiating Monkey Robot with an Active Waist

    Directory of Open Access Journals (Sweden)

    Alex Kai-Yuan Lo

    2017-09-01

    We report on the model-based development of a monkey robot that is capable of performing continuous brachiation locomotion on a swingable rod, as an intermediate step toward studying brachiation on a soft rope or on horizontal ropes with both ends fixed. The work differs from previous works, in which the model or the robot swings on fixed bars. The model, which is composed of two rigid links, was inspired by the dynamic motion of primates. The model further served as the design guideline for a robot that has five degrees of freedom: two on each arm for rod changing and one at the waist to initiate a swing motion. The model was quantitatively formulated, and its dynamic behavior was analyzed in simulation. Further, a two-stage controller was developed within the simulation environment, where the first stage uses the natural dynamics of a two-link pendulum-like model, and the second stage uses angular-velocity feedback to regulate the waist motion. Finally, the robot was empirically built and evaluated. The experimental results confirm that the robot can perform model-like swing behavior and continuous brachiation locomotion on rods.
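
    A crude stand-in for the swing-initiation idea (not the authors' two-stage controller): the support arm as a pendulum, with a bang-bang energy-pumping torque representing the waist excitation and a release test once the target swing energy is reached; all parameters are assumptions.

        import math

        # Passive swing of the support arm modeled as a pendulum, with a simple
        # energy-pumping rule standing in for the waist-initiated excitation.
        g, L, m, b = 9.81, 0.4, 1.0, 0.02            # illustrative robot parameters

        def energy(theta, omega):
            return 0.5 * m * (L * omega) ** 2 + m * g * L * (1 - math.cos(theta))

        E_target = energy(math.radians(70), 0.0)     # amplitude needed to reach
                                                     # the next rod (assumed 70 deg)
        theta, omega, dt = 0.05, 0.0, 1e-3
        for step in range(200000):
            # Pump energy in the direction of motion while below the target energy
            tau = 0.3 * math.copysign(1.0, omega) if energy(theta, omega) < E_target else 0.0
            alpha = (-m * g * L * math.sin(theta) - b * omega + tau) / (m * L * L)
            omega += alpha * dt
            theta += omega * dt
            # Release condition: enough energy and the arm near the target angle
            if energy(theta, omega) >= E_target and abs(theta) > math.radians(65):
                print(f"release at t={step*dt:.2f}s, theta={math.degrees(theta):.0f} deg")
                break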

  10. Sequence-based model of gap gene regulatory network.

    Science.gov (United States)

    Kozlov, Konstantin; Gursky, Vitaly; Kulakovskiy, Ivan; Samsonova, Maria

    2014-01-01

    The detailed analysis of transcriptional regulation is crucially important for understanding biological processes. The gap gene network in Drosophila attracts large interest among researchers studying mechanisms of transcriptional regulation. It implements the most upstream regulatory layer of the segmentation gene network. Knowledge of the molecular mechanisms involved in gap gene regulation is far less complete than that of the genetics of the system. Mathematical modeling goes beyond the insights gained by genetic and molecular approaches: it allows us to reconstruct wild-type gene expression patterns in silico, infer the underlying regulatory mechanism and prove its sufficiency. We developed a new model that provides a dynamical description of gap gene regulatory systems, using detailed DNA-based information as well as spatial transcription factor concentration data at varying time points. We showed that this model correctly reproduces gap gene expression patterns in wild-type embryos and is able to predict gap expression patterns in Kr mutants and four reporter constructs. We used a four-fold cross-validation test and fitting to random datasets to validate the model and prove its sufficiency in data description. The identifiability analysis showed that most model parameters are well identifiable. We reconstructed the gap gene network topology and studied the impact of individual transcription factor binding sites on the model output. We measured this impact by calculating the site regulatory weight as a normalized difference between the residual sum of squares error for the set of all annotated sites and for the set with the site of interest excluded. The reconstructed topology of the gap gene network is in agreement with previous modeling results and data from the literature. We showed that (1) the regulatory weights of transcription factor binding sites show very weak correlation with their PWM scores and (2) sites with low regulatory weight are important for the model output.
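
    The impact measure described above translates directly into code; the 'model' below is a toy additive stand-in for the real gap gene model, used only to show the regulatory-weight computation.

        import numpy as np

        rng = np.random.default_rng(1)

        def model_output(X, sites):
            # Toy stand-in for the gap gene model: expression as the sum of
            # the contributions of the included binding sites.
            return X[:, sorted(sites)].sum(axis=1)

        def rss(X, y, sites):
            return float(((y - model_output(X, sites)) ** 2).sum())

        n_sites = 5
        X = rng.uniform(0, 1, size=(200, n_sites))      # per-site contributions
        y = model_output(X, range(n_sites)) + rng.normal(0, 0.05, 200)

        all_sites = set(range(n_sites))
        rss_all = rss(X, y, all_sites)
        for s in all_sites:
            # Regulatory weight: normalized RSS increase when site s is excluded
            weight = (rss(X, y, all_sites - {s}) - rss_all) / rss_all
            print(f"site {s}: regulatory weight = {weight:.1f}")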

  11. Automated extraction of knowledge for model-based diagnostics

    Science.gov (United States)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  12. 22 CFR 40.91 - Certain aliens previously removed.

    Science.gov (United States)

    2010-04-01

    22 Foreign Relations 1 (2010-04-01): Immigrants under the Immigration and Nationality Act, as amended; Aliens Previously Removed. § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result…

  13. Interactive object modelling based on piecewise planar surface patches.

    Science.gov (United States)

    Prankl, Johann; Zillich, Michael; Vincze, Markus

    2013-06-01

    Detecting elements such as planes in 3D is essential to describe objects for applications such as robotics and augmented reality. While plane estimation is well studied, table-top scenes exhibit a large number of planes and methods often lock onto a dominant plane or do not estimate 3D object structure but only homographies of individual planes. In this paper we introduce MDL to the problem of incrementally detecting multiple planar patches in a scene using tracked interest points in image sequences. Planar patches are reconstructed and stored in a keyframe-based graph structure. In case different motions occur, separate object hypotheses are modelled from currently visible patches and patches seen in previous frames. We evaluate our approach on a standard data set published by the Visual Geometry Group at the University of Oxford [24] and on our own data set containing table-top scenes. Results indicate that our approach significantly improves over the state-of-the-art algorithms.

  15. Model-based accelerator controls: What, why and how

    International Nuclear Information System (INIS)

    Sidhu, S.S.

    1987-01-01

    Model-based control is defined as a gamut of techniques whose aim is to improve the reliability of an accelerator and enhance the capabilities of the operator, and therefore of the whole control system. The aim of model-based control is seen as gradually moving the model-reference function from the operator to the computer. The role of the operator in accelerator control and the need for and application of model-based control are briefly summarized.

  16. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented … We focus on Néron component groups, Edixhoven's filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions…

  17. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  18. Sunburn and sun-protective behaviors among adults with and without previous nonmelanoma skin cancer (NMSC): A population-based study.

    Science.gov (United States)

    Fischer, Alexander H; Wang, Timothy S; Yenokyan, Gayane; Kang, Sewon; Chien, Anna L

    2016-08-01

    Individuals with previous nonmelanoma skin cancer (NMSC) are at increased risk for subsequent skin cancer, and should therefore limit ultraviolet exposure. We sought to determine whether individuals with previous NMSC engage in better sun protection than those with no skin cancer history. We pooled self-reported data (2005 and 2010 National Health Interview Surveys) from US non-Hispanic white adults (758 with and 34,161 without previous NMSC). We calculated adjusted prevalence odds ratios (aPOR) and 95% confidence intervals (CI), taking into account the complex survey design. Individuals with previous NMSC versus no history of NMSC had higher rates of frequent use of shade (44.3% vs 27.0%; aPOR 1.41; 95% CI 1.16-1.71), long sleeves (20.5% vs 7.7%; aPOR 1.55; 95% CI 1.21-1.98), a wide-brimmed hat (26.1% vs 10.5%; aPOR 1.52; 95% CI 1.24-1.87), and sunscreen (53.7% vs 33.1%; aPOR 2.11; 95% CI 1.73-2.59), but did not have significantly lower odds of recent sunburn (29.7% vs 40.7%; aPOR 0.95; 95% CI 0.77-1.17). Among those with previous NMSC, recent sunburn was inversely associated with age, sun avoidance, and shade but not sunscreen. Self-reported cross-sectional data and unavailable information quantifying regular sun exposure are limitations. Physicians should emphasize sunburn prevention when counseling patients with previous NMSC, especially younger adults, focusing on shade and sun avoidance over sunscreen. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  19. Establishing verbal repertoires in children with autism using function-based video modeling.

    Science.gov (United States)

    Plavnick, Joshua B; Ferreri, Summer J

    2011-01-01

    Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative responses in the natural environment. An alternating treatments design was used during Experiment 1 to identify the variables that were functionally related to gestures emitted by 4 children with autism. Results showed that gestures functioned as requests for attention for 1 participant and as requests for assistance to obtain a preferred item or event for 3 participants. Video modeling was used during Experiment 2 to compare mand acquisition when video sequences were either related or unrelated to the results of the functional analysis. An alternating treatments within multiple probe design showed that participants repeatedly acquired mands during the function-based condition but not during the nonfunction-based condition. In addition, generalization of the response was observed during the former but not the latter condition.

  20. Learning of Chemical Equilibrium through Modelling-Based Teaching

    Science.gov (United States)

    Maia, Poliana Flavia; Justi, Rosaria

    2009-01-01

    This paper presents and discusses students' learning process of chemical equilibrium from a modelling-based approach developed from the use of the "Model of Modelling" diagram. The investigation was conducted in a regular classroom (students 14-15 years old) and aimed at discussing how modelling-based teaching can contribute to students' learning.

  1. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

    In the paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order and reduced-order models, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy, and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
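    The reduction step can be illustrated with a generic square-root balanced truncation routine; this is a sketch under stated assumptions (A must be Hurwitz so the Gramians are positive definite, and the reduced order r is a free choice), not the paper's implementation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C).
    Returns the reduced (Ar, Br, Cr) and the Hankel singular values."""
    # Gramians: A Wc + Wc A' = -B B',  A' Wo + Wo A = -C' C  (A Hurwitz).
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(Wc, lower=True)            # Wc = Lc Lc'
    Lo = cholesky(Wo, lower=True)            # Wo = Lo Lo'
    U, s, Vt = svd(Lo.T @ Lc)                # Hankel singular values in s
    S_inv_sqrt = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S_inv_sqrt           # right projection basis
    Ti = S_inv_sqrt @ U[:, :r].T @ Lo.T      # left projector, Ti @ T = I
    return Ti @ A @ T, Ti @ B, C @ T, s

# Toy usage: reduce a random stable 20-state model to 4 states.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 20)) - 25 * np.eye(20)   # shifted to stability
B, C = rng.normal(size=(20, 2)), rng.normal(size=(3, 20))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
print("discarded Hankel singular values:", hsv[4:].round(4))
```

    The reduced triple (Ar, Br, Cr) then serves as the MPC prediction model, while the discarded Hankel singular values quantify the truncation error that the paper's two Kalman estimators are introduced to compensate.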

  2. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Applies the novel structure to INP file processing, conveniently evaluating cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated form and is error-prone when describing geometric models. A conversion algorithm that can convert a general geometric model to an MCNP model during MCNP-aided modeling is therefore highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP file and INP file were analyzed. Experimental results show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. The proposed research is promising, and serves as a valuable reference for researchers involved with MCNP-related research.

  3. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  4. The Design of Model-Based Training Programs

    Science.gov (United States)

    Polson, Peter; Sherry, Lance; Feary, Michael; Palmer, Everett; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)

    1997-01-01

    This paper proposes a model-based training program for the skills necessary to operate advanced avionics systems that incorporate advanced autopilots and flight management systems. The training model is based on a formalism, the operational procedure model, that represents the mission model, the rules, and the functions of a modern avionics system. This formalism has been defined such that it can be understood and shared by pilots, the avionics software, and design engineers. Each element of the software is defined in terms of its intent (What?), the rationale (Why?), and the resulting behavior (How?). The Advanced Computer Tutoring project at Carnegie Mellon University has developed a type of model-based, computer-aided instructional technology called cognitive tutors. They summarize numerous studies showing that training to a specified level of competence can be achieved in one third the time of conventional classroom instruction. We are developing a similar model-based training program for the skills necessary to operate the avionics. The model underlying the instructional program, which simulates the effects of pilots' entries and the behavior of the avionics, is based on the operational procedure model. Pilots are given a series of vertical flightpath management problems. Entries that result in violations, such as failure to make a crossing restriction or violating the speed limits, produce error messages with instruction. At any time, the flightcrew can request suggestions on the appropriate set of actions. A similar and successful training program for basic skills for the FMS on the Boeing 737-300 was developed and evaluated. The results strongly support the claim that the training methodology can be adapted to the cockpit.

  5. An assessment of CFD-based wall heat transfer models in piston engines

    Energy Technology Data Exchange (ETDEWEB)

    Sircar, Arpan [Pennsylvania State Univ., University Park, PA (United States); Paul, Chandan [Pennsylvania State Univ., University Park, PA (United States); Ferreyro-Fernandez, Sebastian [Pennsylvania State Univ., University Park, PA (United States); Imren, Abdurrahman [Pennsylvania State Univ., University Park, PA (United States); Haworth, Daniel C [Pennsylvania State Univ., University Park, PA (United States)

    2017-04-26

    The lack of accurate submodels for in-cylinder heat transfer has been identified as a key shortcoming in developing truly predictive, physics-based computational fluid dynamics (CFD) models that can be used to develop combustion systems for advanced high-efficiency, low-emissions engines. Only recently have experimental methods become available that enable accurate near-wall measurements to enhance simulation capability via advancing models. Initial results show crank-angle-dependent discrepancies with respect to previously used boundary-layer models of up to 100%. However, available experimental data are quite sparse (only a few data points on engine walls) and limited (available measurements are those of heat flux only). Predictive submodels are needed for medium-resolution ("engineering") LES and for unsteady Reynolds-averaged simulations (URANS). Recently, some research groups have performed DNS studies at engine-relevant conditions using simple geometries. These provide very useful data for benchmarking wall heat transfer models under such conditions. Further, a number of new and more sophisticated models that account for these engine-like conditions have become available in the literature. Some of these have been incorporated, while others of a more complex nature, which involve solving additional partial differential equations (PDEs) within the thin boundary layer near the wall, are underway. These models will then be tested against the available DNS/experimental data in both SI (spark-ignition) and CI (compression-ignition) engines.

  6. Analytical and experimental discussion of a circuit-based model for compact fluorescent lamps in a 60Hz power grid

    Directory of Open Access Journals (Sweden)

    Gabriel Alexis Malagon

    2015-06-01

    This article presents an analysis and discussion of the performance of a circuit-based model for compact fluorescent lamps (CFLs) in a 120 V 60 Hz power grid. This model was proposed and validated in previous scientific literature for CFLs in 230 V 50 Hz systems. Nevertheless, the derivation of this model is not straightforward to follow, and its performance in 120 V 60 Hz systems is an open research question. In this paper, the analytical derivation of this CFL model is presented in detail, and its performance is discussed when predicting the current of a CFL designed to operate in a 120 V 60 Hz electrical system. The derived model is separately implemented in both MATLAB® and ATP-EMTP® software using two different sets of parameters previously proposed for 230 V 50 Hz CFLs. These simulation results are compared against laboratory measurements using a programmable AC voltage source. The measurements and simulations considered seven 110/127 V 60 Hz CFLs with different power ratings supplied by a sinusoidal (not distorted) voltage source. The simulations under these conditions do not properly predict the current measurements, and therefore the set of parameters and/or the model itself need to be adjusted for 120 V 60 Hz power grids.

  7. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    Science.gov (United States)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    To address the difficulty of quality prediction for sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation built on the extreme learning machine (ELM). First, mechanism models of the drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. Because the process is simplified in the mechanism models, they cannot describe high nonlinearity, and errors are inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
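    The compensation scheme is easy to sketch: train an ELM on the residuals of the mechanism model and add its prediction back at inference time. The Python below is a generic single-hidden-layer ELM with a toy function standing in for the paper's physics-based model (layer width, activation, and the omission of time weighting are simplifying assumptions):

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine: hidden weights are
    random and fixed; only the output weights are solved, by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta, *_ = np.linalg.lstsq(self._hidden(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

def mechanism_drum_index(X):
    """Toy placeholder for the physics-based (mechanism) prediction."""
    return 0.8 * X[:, 0] - 0.1 * X[:, 1]

# Hybrid model: mechanism prediction plus ELM-learned residual correction.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y_true = mechanism_drum_index(X) + 0.3 * np.sin(3 * X[:, 0])  # unmodelled part

elm = ELM().fit(X, y_true - mechanism_drum_index(X))   # learn the residual
y_hybrid = mechanism_drum_index(X) + elm.predict(X)
print("hybrid RMSE:", np.sqrt(np.mean((y_hybrid - y_true) ** 2)))
```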

  8. Facilitating Change to a Problem-based Model

    DEFF Research Database (Denmark)

    Kolmos, Anette

    2002-01-01

    The paper presents the barriers which arise during the change process from a traditional educational system to a problem-based educational model.

  9. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  10. Pixel-based meshfree modelling of skeletal muscles.

    Science.gov (United States)

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transition. A multiphase multichannel level-set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods for modeling the human lower leg is demonstrated.

  11. The Global Fund to Fight AIDS, Tuberculosis and Malaria's investments in harm reduction through the rounds-based funding model (2002-2014)

    DEFF Research Database (Denmark)

    Bridge, Jamie; Hunter, Benjamin M; Albers, Eliot

    2016-01-01

    People who inject drugs often have inequitable access to harm reduction services and face widespread stigma and discrimination. In 2013, the Global Fund launched a new funding model, signalling the end of the previous rounds-based model that had operated since its founding in 2002. This study updates previous analyses to assess Global Fund investments in harm reduction from its founding (2002) until the start of the new funding model (2014), identifying a budgeted investment of US$ 620 million. Two-thirds of this budgeted amount was for interventions in the "comprehensive package" defined by the United Nations, and 91% of the identified amount was for Eastern Europe and Asia. Conclusion: This study represents an updated, comprehensive assessment of Global Fund investments in harm reduction and highlights the overall shortfall of harm reduction funding, with the estimated global need being US$ 2.3 billion for harm reduction in 2015 alone. Using this baseline, the Global Fund must…

  12. Do emotional intelligence and previous caring experience influence student nurse performance? A comparative analysis.

    Science.gov (United States)

    Stenhouse, Rosie; Snowden, Austyn; Young, Jenny; Carver, Fiona; Carver, Hannah; Brown, Norrie

    2016-08-01

    Reports of poor nursing care have focused attention on values-based selection of candidates onto nursing programmes. Values-based selection lacks clarity and valid measures. Previous caring experience might lead to better care. Emotional intelligence (EI) might be associated with performance, and is conceptualised and measurable. The aim was to examine the impact of 1) previous caring experience, 2) emotional intelligence, and 3) social connection scores on performance and retention in a cohort of first-year nursing and midwifery students in Scotland. A longitudinal, quasi-experimental design was used in adult and mental health nursing, and midwifery programmes at a Scottish university. Adult, mental health and midwifery students (n=598) completed the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF) and Schutte's Emotional Intelligence Scale on entry to their programmes, alongside demographic and previous caring experience data. Social connection was calculated from a subset of questions identified within the TEIQue-SF in a prior factor and Rasch analysis. Student performance was calculated as the mean mark across the year. Withdrawal data were gathered. 598 students completed baseline measures; 315 students declared previous caring experience, 277 did not. An independent-samples t-test identified that those without previous caring experience scored higher on performance (57.33±11.38) than those with previous caring experience (54.87±11.19), a statistically significant difference of 2.47 (95% CI, 0.54 to 4.38), t(533)=2.52, p=.012. Emotional intelligence scores were not associated with performance. Social connection scores for those withdrawing (mean rank=249) and those remaining (mean rank=304.75) were statistically significantly different, U=15,300, z=-2.61, p<0.009. Previous caring experience led to worse performance in this cohort, emotional intelligence was not a useful indicator of performance, and lower scores on the social connection factor were associated with withdrawal.

  13. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demands of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation has remained unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These quantities exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.

  14. Per-Oral Endoscopic Myotomy (POEM) After Previous Laparoscopic Heller Myotomy Is Feasible and Safe in a Porcine Model.

    Science.gov (United States)

    Miles, Luke F; Frelich, Matthew J; Gould, Jon C; Dua, Kulwinder S; Jensen, Eric S; Kastenmeier, Andrew S

    2015-10-01

    We sought to evaluate the feasibility, safety, and difficulty of performing the per-oral endoscopic myotomy (POEM) procedure in the setting of a prior Heller myotomy using a survival porcine model. Four pigs underwent laparoscopic Heller myotomy with Dor partial anterior fundoplication followed by the POEM performed 4 weeks later. Two additional pigs served as controls, undergoing only the POEM. All procedures were completed without complications. The revisional POEM was not significantly more difficult than the POEM controls based on procedure time, POEM procedure components, or procedure difficulty scores. Revisional POEM had a longer mean operative time than the initial Heller myotomy (126.0 vs. 83.8 min). POEM after previous laparoscopic Heller myotomy is safe and feasible in the porcine model and has potential as an option for patients suffering from recurrent or persistent symptoms after failed surgical myotomy.

  15. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
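    The point-to-line-segment distance at the heart of the proposed similarity metric is short enough to state directly. Below is a minimal Python sketch (the paper aggregates such distances between image features and projected model edges, a step not reproduced here):

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment with endpoints a, b
    (2-D image coordinates)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(ab @ ab)
    if denom == 0.0:                              # degenerate segment
        return float(np.linalg.norm(p - a))
    t = np.clip((p - a) @ ab / denom, 0.0, 1.0)   # clamped projection
    return float(np.linalg.norm(p - (a + t * ab)))

# Usage: distance from an image feature to one projected model edge.
print(point_to_segment_distance([3.0, 4.0], [0.0, 0.0], [5.0, 0.0]))  # 4.0
```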

  16. A novel model to combine clinical and pathway-based transcriptomic information for the prognosis prediction of breast cancer.

    Directory of Open Access Journals (Sweden)

    Sijia Huang

    2014-09-01

    Breast cancer is the most common malignancy in women worldwide. With the increasing awareness of heterogeneity in breast cancers, better prediction of breast cancer prognosis is much needed for more personalized treatment and disease management. Towards this goal, we have developed a novel computational model for breast cancer prognosis by combining the Pathway Deregulation Score (PDS)-based Pathifier algorithm, Cox regression, and the L1-LASSO penalization method. We trained the model on a set of 236 patients with gene expression data and clinical information, and validated the performance on three diversified testing data sets of 606 patients. To evaluate the performance of the model, we conducted survival analysis of the dichotomized groups, and compared the areas under the curve based on the binary classification. The resulting prognosis genomic model is composed of fifteen pathways (e.g., the P53 pathway) that had previously reported cancer relevance, and it successfully differentiated relapse in the training set (log-rank p-value = 6.25e-12) and three testing data sets (log-rank p-value < 0.0005). Moreover, the pathway-based genomic models consistently performed better than gene-based models on all four data sets. We also find strong evidence that combining genomic information with clinical information improved the p-values of prognosis prediction by at least three orders of magnitude in comparison to using either genomic or clinical information alone. In summary, we propose a novel prognosis model that harnesses the pathway-based dysregulation as well as valuable clinical information. The selected pathways in our prognosis model are promising targets for therapeutic intervention.
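    The modeling chain (pathway deregulation scores fed into an L1-penalized Cox regression) can be sketched with the lifelines package. The data below are synthetic and the penalty is illustrative; the Pathifier step that produces real PDS values is not reproduced:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic pathway deregulation scores (PDS) for 236 "patients".
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(236, 6)),
                  columns=[f"PDS_pathway_{i}" for i in range(6)])
risk = 0.9 * df["PDS_pathway_0"] + 0.5 * df["PDS_pathway_1"]
df["time"] = rng.exponential(scale=np.exp(-risk))     # relapse-free time
df["relapse"] = rng.integers(0, 2, size=236)          # event indicator

# L1 (LASSO) penalized Cox regression yields a sparse pathway signature.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="relapse")
print(cph.summary["coef"])   # near-zero coefficients drop out of the model
```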

  17. 78 FR 36089 - Airworthiness Directives; Hawker Beechcraft Corporation (Type Certificate Previously Held by...

    Science.gov (United States)

    2013-06-17

    … Corporation (Type Certificate Previously Held by Raytheon Aircraft Company) Model BAe.125 Series 800A… structural damage or lead to divergent flutter, and result in loss of integrity of the wing and loss of control of the airplane…

  18. A feature-based approach to modeling protein-protein interaction hot spots.

    Science.gov (United States)

    Cho, Kyu-il; Kim, Dongsup; Lee, Doheon

    2009-05-01

    Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to pi-related interactions, especially pi . . . pi interactions.
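    The two-stage pipeline (decision-tree feature selection followed by an SVM) maps directly onto scikit-learn primitives. The sketch below uses synthetic stand-ins for the 54 residue features and illustrative hyperparameters; the authors' exact selection criterion and kernel settings are not reproduced:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Toy stand-in for the 54 interface-residue features (label 1 = hot spot).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 54))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(DecisionTreeClassifier(max_depth=5, random_state=0)),
    SVC(kernel="rbf", C=1.0),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```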

  20. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling

    Directory of Open Access Journals (Sweden)

    Nan-Hung Hsieh

    2018-06-01

    Traditionally, the solution to reduce parameter dimensionality in a physiologically-based pharmacokinetic (PBPK) model is through expert judgment. However, this approach may lead to bias in parameter estimates and model predictions if important parameters are fixed at uncertain or inappropriate values. The purpose of this study was to explore the application of global sensitivity analysis (GSA) to ascertain which parameters in the PBPK model are non-influential, and therefore can be assigned fixed values in Bayesian parameter estimation with minimal bias. We compared the elementary effect-based Morris method and three variance-based Sobol indices in their ability to distinguish "influential" parameters to be estimated from "non-influential" parameters to be fixed. We illustrated this approach using a published human PBPK model for acetaminophen (APAP) and its two primary metabolites, APAP-glucuronide and APAP-sulfate. We first applied GSA to the original published model, comparing Bayesian model calibration results using all 21 originally calibrated model parameters (OMP, determined by the "expert judgment"-based approach) vs. the subset of original influential parameters (OIP, determined by GSA from the OMP). We then applied GSA to all the PBPK parameters, including those fixed in the published model, comparing the model calibration results using this full set of 58 model parameters (FMP) vs. the full set of influential parameters (FIP, determined by GSA from the FMP). We also examined the impact of different cut-off points used to distinguish influential from non-influential parameters. We found that Sobol indices calculated by eFAST provided the best combination of reliability (consistency with other variance-based methods) and efficiency (lowest computational cost) in identifying influential parameters while achieving convergence. We identified several originally calibrated parameters that were not influential and could be fixed to improve computational efficiency.
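    A variance-based screening step of this kind can be reproduced with the SALib package. The sketch below uses a toy stand-in for a PBPK output summary and an illustrative cut-off; the parameter names, bounds, and the 0.05 threshold are assumptions, and SALib provides Morris and eFAST routines analogous to the Sobol pair shown:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical PBPK-like parameters and bounds (illustrative only).
problem = {
    "num_vars": 3,
    "names": ["k_abs", "CL_hepatic", "V_central"],
    "bounds": [[0.1, 2.0], [5.0, 30.0], [20.0, 80.0]],
}

def model(p):
    k, cl, v = p
    return k / (cl / v + k)            # toy scalar output summary

X = saltelli.sample(problem, 1024)     # Sobol sampling design
Y = np.apply_along_axis(model, 1, X)
Si = sobol.analyze(problem, Y)

# Fix parameters whose total-order index falls below a chosen cut-off.
cutoff = 0.05
fixed = [n for n, st in zip(problem["names"], Si["ST"]) if st < cutoff]
print("non-influential, candidates for fixing:", fixed)
```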

  1. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

Andrew Stuntz

    2016-04-01

    Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error: a predictive approach based on ocean-model predictions and a prior-information approach derived from terrain-based navigation. The motivation for this work is not only real-time state estimation, but also accurate reconstruction of the actual path that the vehicle traversed, to contextualize the gathered data with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and the utility of priors used in terrain-based navigation over areas of significant bathymetric relief, to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by 1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and 2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate the geographical location of an underwater glider to 2…
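    The hybrid update, in which ocean-model predictions propagate position hypotheses and a bathymetric prior reweights them, can be sketched as a single particle-filter-style step. Everything below (the Gaussian depth noise, the linear toy bathymetry, and all the numbers) is an illustrative assumption rather than the authors' algorithm:

```python
import numpy as np

def fuse_position(candidates, predicted_drift, measured_depth,
                  bathymetry, sigma_depth=5.0):
    """One hybrid update: shift dead-reckoned position hypotheses by the
    ocean-model drift prediction, then weight each by how well the local
    map depth matches the measured water-column depth (terrain prior).
    Returns the weighted mean position estimate."""
    shifted = candidates + predicted_drift
    map_depth = np.array([bathymetry(x, y) for x, y in shifted])
    w = np.exp(-0.5 * ((map_depth - measured_depth) / sigma_depth) ** 2)
    w /= w.sum()
    return (w[:, None] * shifted).sum(axis=0)

# Toy example: sloping seafloor, 200 hypotheses around a dead-reckoned fix.
rng = np.random.default_rng(0)
bathy = lambda x, y: 100.0 + 0.05 * x     # depth grows to the east [m]
hypotheses = rng.normal(loc=[1000.0, 2000.0], scale=200.0, size=(200, 2))
estimate = fuse_position(hypotheses, predicted_drift=np.array([50.0, -20.0]),
                         measured_depth=140.0, bathymetry=bathy)
print("fused position estimate:", estimate)
```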

  2. Sandboxes for Model-Based Inquiry

    Science.gov (United States)

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-04-01

    In this article, we introduce a class of constructionist learning environments that we call Emergent Systems Sandboxes (ESSs), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual construction environment that supports students in creating, exploring, and sharing computational models of dynamic systems that exhibit emergent phenomena. They provide learners with "entity"-level construction primitives that reflect an underlying scientific model. These primitives can be directly "painted" into a sandbox space, where they can then be combined, arranged, and manipulated to construct complex systems and explore the emergent properties of those systems. We argue that ESSs offer a means of addressing some of the key barriers to adopting rich, constructionist model-based inquiry approaches in science classrooms at scale. Situating the ESS in a large-scale science modeling curriculum we are implementing across the USA, we describe how the unique "entity-level" primitive design of an ESS facilitates knowledge system refinement at both an individual and social level; how it supports flexible modeling practices by providing both continuous and discrete modes of executability; and how it offers students a variety of opportunities for validating their qualitative understandings of emergent systems as they develop.

  3. Ibrutinib versus previous standard of care: an adjusted comparison in patients with relapsed/refractory chronic lymphocytic leukaemia.

    Science.gov (United States)

    Hansson, Lotta; Asklid, Anna; Diels, Joris; Eketorp-Sylvan, Sandra; Repits, Johanna; Søltoft, Frans; Jäger, Ulrich; Österborg, Anders

    2017-10-01

    This study explored the relative efficacy of ibrutinib versus previous standard-of-care treatments in relapsed/refractory patients with chronic lymphocytic leukaemia (CLL), using multivariate regression modelling to adjust for baseline prognostic factors. Individual patient data were collected from an observational Stockholm cohort of consecutive patients (n = 144) diagnosed with CLL between 2002 and 2013 who had received at least second-line treatment. Data were compared with results of the RESONATE clinical trial. A multivariate Cox proportional hazards regression model was used which estimated the hazard ratio (HR) of ibrutinib versus previous standard of care. The adjusted HR of ibrutinib versus the previous standard-of-care cohort was 0.15, and survival outcomes with ibrutinib in the RESONATE study were significantly longer than with previous standard-of-care regimens used in second or later lines in routine healthcare. The approach used, which must be interpreted with caution, compares patient-level data from a clinical trial with outcomes observed in daily clinical practice, and may complement results from randomised trials or provide preliminary wider comparative information until phase 3 data exist.

  4. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    Science.gov (United States)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed as fitting of this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic and lumbar vertebrae.

  5. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computational fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  6. Agent-based modeling and simulation of clean heating system adoption in Norway

    Energy Technology Data Exchange (ETDEWEB)

    Sopha, Bertha Maya

    2011-03-15

    A sound climate policy encouraging clean energy investment is important to mitigate global warming. Previous research has demonstrated that consumer choice plays an important role in the adoption of sustainable technologies. This thesis strives to gain a better understanding of consumers' decision-making on heating systems and to explore the potential of agent-based modeling (ABM) for exploring the mechanisms underlying adoption, with heating system adoption by Norwegian households taken up as a case study. An interdisciplinary approach, applying various established theories including those of psychology, is used to create a model of consumer behavior and implement this behavior in an agent-based model to simulate heating technology diffusion. A mail survey, carried out in autumn 2008, was the means to collect information for parameterizing the agent-based model, for gaining empirical facts, and for validating the developed model at the micro level. The survey sample consisted of 1500 Norwegian households drawn from the population register and 1500 wood pellet users in Norway. The response rates were 10.3% and 34.6% for the population sample and the wood pellet sample, respectively. This study is divided into two parts: empirical analysis and agent-based simulation. The empirical analysis aims at fully understanding the important aspects of the adoption decision and their implications, in order to assist the simulation. The analysis particularly contributes to the identification of differences and similarities between adopters and non-adopters of wood pellet heating with respect to key points of adoption derived from different theories, the psychological factors underlying the adoption decision for wood pellet heating, and the rationales underlying Norwegian households' decisions regarding their future heating system. The simulation study aims at exploring the mechanisms of heterogeneous household decision-making giving rise to the diffusion of heating systems…

  7. Generative embedding for model-based classification of fMRI data.

    Directory of Open Access Journals (Sweden)

    Kay H Brodersen

    2011-06-01

    Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in 'hidden' physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach with a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods.

  8. Transcriptomic analysis in a Drosophila model identifies previously implicated and novel pathways in the therapeutic mechanism in neuropsychiatric disorders

    Directory of Open Access Journals (Sweden)

Priyanka Singh

    2011-03-01

    We have taken advantage of a newly described Drosophila model to gain insights into the potential mechanism of antiepileptic drugs (AEDs), a group of drugs that are widely used in the treatment of several neurological and psychiatric conditions besides epilepsy. In the recently described Drosophila model, which is inspired by pentylenetetrazole (PTZ)-induced kindling epileptogenesis in rodents, chronic PTZ treatment for seven days causes a decreased climbing speed and an altered CNS transcriptome, with the latter mimicking gene expression alterations reported in epileptogenesis. In the model, an increased climbing speed is further observed seven days after withdrawal from chronic PTZ. We used this post-PTZ withdrawal regime to identify potential AED mechanisms. In this regime, treatment with each of the five AEDs tested, namely ethosuximide (ETH), gabapentin (GBP), vigabatrin (VGB), sodium valproate (NaVP) and levetiracetam (LEV), resulted in rescue of the altered climbing behavior. The AEDs also normalized the PTZ withdrawal-induced transcriptomic perturbation in fly heads; whereas AED-untreated flies showed a large number of up- and downregulated genes, enriched in several processes including gene expression and cell communication, the AED-treated flies showed differential expression of only a small number of genes that were not enriched in gene expression and cell communication processes. Gene expression and cell communication related upregulated genes in AED-untreated flies overrepresented several pathways: spliceosome, RNA degradation, and ribosome in the former category, and inositol phosphate metabolism, phosphatidylinositol signaling, endocytosis and hedgehog signaling in the latter. The transcriptome-remodeling effect of the AEDs was overall confirmed by microarray clustering that clearly separated the profiles of AED-treated and untreated flies. Besides being consistent with previously implicated pathways, our results provide evidence for a role of novel pathways in the therapeutic mechanism of AEDs.

  9. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration divided by the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
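    The contrast between the two corrections is easy to make concrete. In the sketch below (synthetic data; the log-linear form and variable names are illustrative), the ratio method simply divides by UCR, while the model-based method enters UCR as a regressor alongside the covariates that also drive it:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, n)
male = rng.integers(0, 2, n)
# UCR depends on hydration AND demographics, which is the ratio method's flaw.
ucr = np.exp(0.2 * male - 0.005 * age + rng.normal(0, 0.3, n))
analyte = np.exp(1.0 + 0.8 * np.log(ucr) + rng.normal(0, 0.4, n))

# Ratio-based correction: analyte per unit creatinine.
ratio_corrected = analyte / ucr

# Model-based correction: log(UCR) enters as a regressor with covariates,
# so its demographic drivers are adjusted for simultaneously.
X = sm.add_constant(np.column_stack([np.log(ucr), age, male]))
fit = sm.OLS(np.log(analyte), X).fit()
print(fit.params)   # the log(UCR) coefficient need not equal 1,
                    # which is what the ratio method implicitly assumes
```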

  10. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. The watermark data is embedded into a feature point by changing the vector length of the feature point in OXY space based on a reference length. The x and y coordinates of the feature point are then changed according to the watermarked vector length. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks such as rotation, scaling, and translation. The proposed algorithm improves on conventional works, and its accuracy is much higher than that of previous methods.
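
    The abstract describes the embedding rule only in outline. As one way to picture it, the sketch below quantises the length of a feature point's (x, y) vector and forces the parity of the quantisation index to carry one watermark bit; the step size and reference length are assumptions, not the paper's values.

        # Illustrative bit embedding via the XY vector length of a feature point
        import math

        def embed_bit(x, y, bit, ref_len=1.0, step=0.01):
            """Rescale (x, y) so the parity of the quantised length encodes `bit`."""
            q = round(math.hypot(x, y) / (ref_len * step))
            if q % 2 != bit:
                q += 1          # nudge to the nearest index with the right parity
            scale = (q * step * ref_len) / math.hypot(x, y)
            return x * scale, y * scale

        def extract_bit(x, y, ref_len=1.0, step=0.01):
            return round(math.hypot(x, y) / (ref_len * step)) % 2

        x2, y2 = embed_bit(3.0, 4.0, 1)
        assert extract_bit(x2, y2) == 1

    Because only the radial length in the OXY plane is modified, a rotation about the Z axis leaves the embedded bit untouched, which is consistent with the robustness to rotation reported above.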

  11. Improving the Xin'anjiang hydrological model based on mass–energy balance

    Directory of Open Access Journals (Sweden)

    Y.-H. Fang

    2017-07-01

    Full Text Available Conceptual hydrological models are preferable for real-time flood forecasting, among which the Xin'anjiang (XAJ) model has been widely applied in humid and semi-humid regions of China. Although the relatively simple mass balance scheme ensures a good performance of runoff simulation during flood events, the model still has some defects. Previous studies have confirmed the importance of evapotranspiration (ET) and soil moisture content (SMC) in runoff simulation. In order to add more constraints to the original XAJ model, an energy balance scheme suitable for the XAJ model was developed and coupled with the original mass balance scheme. The detailed parameterizations of the improved model, XAJ-EB, are presented in the first part of this paper. XAJ-EB employs various meteorological forcings and remote sensing data as input, simulating ET and runoff yield using a more physically based mass–energy balance scheme. In particular, the energy balance is solved by determining the representative equilibrium temperature (RET), which is comparable to land surface temperature (LST). XAJ-EB was evaluated in the Lushui catchment, situated in the middle reach of the Yangtze River basin, for the period between 2004 and 2007. Validation using ground-measured runoff data shows that XAJ-EB is capable of reproducing runoff comparably to the original XAJ model. Additionally, RET simulated by XAJ-EB agreed well with moderate resolution imaging spectroradiometer (MODIS)-retrieved LST, which further confirms that the model is able to simulate the mass–energy balance, since LST reflects the interactions among various processes. The validation results prove that the XAJ-EB model has superior performance compared with the original XAJ model and also extends its applicability.

  12. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structure of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  13. Agent Based Modeling Applications for Geosciences

    Science.gov (United States)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include: significant computational requirements in order to keep track of thousands to millions of agents; a lack of methods and strategies for model validation; and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and to determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in
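
    For readers new to the technique, a minimal sketch of the "bottom-up" idea is given below: hypothetical water-user agents follow one simple local rule, and aggregate pressure on a shared aquifer emerges from their interactions. All rules and numbers are invented for illustration and are not from the cited work.

        # Minimal agent-based sketch of resource demand (illustrative only)
        import random

        random.seed(1)

        class WaterUser:
            def __init__(self):
                self.demand = random.uniform(0.5, 1.5)

            def step(self, scarce):
                # Simple rule: cut back when the aquifer is low, drift up otherwise
                self.demand *= 0.9 if scarce else 1.02

        storage, recharge = 100.0, 80.0
        agents = [WaterUser() for _ in range(100)]
        for year in range(20):
            total = sum(a.demand for a in agents)
            storage += recharge - total
            for a in agents:
                a.step(scarce=storage < 50.0)
            print(f"year {year:2d}  storage {storage:7.1f}  demand {total:6.1f}")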

  14. A Modelling Framework for estimating Road Segment Based On-Board Vehicle Emissions

    International Nuclear Information System (INIS)

    Lin-Jun, Yu; Ya-Lan, Liu; Yu-Huan, Ren; Zhong-Ren, Peng; Meng, Liu Meng

    2014-01-01

    Traditional traffic emission inventory models aim to provide overall emissions at the regional level, which cannot meet planners' demand for detailed and accurate traffic emission information at the road segment level. Therefore, a road segment-based emission model for estimating light-duty vehicle emissions is proposed, where floating car technology is used to collect information on road traffic conditions. The analysis framework consists of three major modules: the Average Speed and Average Acceleration Module (ASAAM), the Traffic Flow Estimation Module (TFEM) and the Traffic Emission Module (TEM). The ASAAM is used to obtain the average speed and average acceleration of the fleet on each road segment using floating car data (FCD). The TFEM is designed to estimate the traffic flow of each road segment in a given period, based on the speed-flow relationship and the spatial distribution of traffic flow. Finally, the TEM estimates emissions from each road segment, based on the results of the previous two modules. Hourly on-road light-duty vehicle emissions for each road segment in Shenzhen's traffic network are obtained using this framework. The temporal-spatial distribution patterns of road segment pollutant emissions are also summarized. The results show that high-emission road segments cluster in several important regions of Shenzhen, and that road segments emit more during rush hours than during other periods. The presented case study demonstrates that the proposed approach is feasible and easy to use, helping planners make informed decisions by providing detailed road segment-based emission information.
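
    The three-module chain is straightforward to express as a pipeline. The skeleton below mirrors the ASAAM -> TFEM -> TEM flow described above; the speed-flow relationship and emission factors inside each function are placeholders, since the abstract does not give the paper's actual formulas.

        # Skeleton of the ASAAM -> TFEM -> TEM chain (placeholder formulas)

        def asaam(fcd_speeds):
            """Average speed (and, in the full model, acceleration) from FCD."""
            return sum(fcd_speeds) / len(fcd_speeds)

        def tfem(avg_speed, capacity=1800.0, free_flow=60.0):
            """Toy speed-flow relationship for one segment and period."""
            return capacity * max(0.0, 1.0 - avg_speed / free_flow)

        def tem(flow, avg_speed, base_ef=2.0, length_km=0.5):
            """Segment emissions = flow x length x speed-dependent factor (g)."""
            factor = base_ef * (1.0 + (60.0 - avg_speed) / 100.0)
            return flow * length_km * factor

        speed = asaam([42.0, 38.5, 45.1])
        flow = tfem(speed)
        print(f"hourly segment emissions: {tem(flow, speed):.0f} g")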

  15. Artificial intelligence-based computer modeling tools for controlling slag foaming in electric arc furnaces

    Science.gov (United States)

    Wilson, Eric Lee

    Due to increased competition in a world economy, steel companies are currently interested in developing techniques that will allow for the improvement of the steelmaking process, either by increasing output efficiency or by improving the quality of their product, or both. Slag foaming is one practice that has been shown to contribute to both these goals. However, slag foaming is highly dynamic and difficult to model or control. This dissertation describes an effort to use artificial intelligence-based tools (genetic algorithms, fuzzy logic, and neural networks) to both model and control the slag foaming process. Specifically, a neural network is trained and tested on slag foaming data provided by a steel plant. This neural network model is then controlled by a fuzzy logic controller, which in turn is optimized by a genetic algorithm. The tuned controller was then installed at a steel plant and shown to be a more efficient slag foaming controller than the one previously used by the plant.

  16. The Role of Birth/Previously Adopted Children in Families Choosing to Adopt Children with Special Needs.

    Science.gov (United States)

    Mullin, Ellen Steele; Johnson, LeAnne

    1999-01-01

    Notes that successful child placement depends on engaging birth or previously adopted children during the adoption process, yet other children are often overlooked when parents are adopting a special-needs child. Presents a model which recognizes dynamics of strength and vulnerability and applies that model to preparing and supporting the adoptive…

  17. Segment-based Eyring-Wilson viscosity model for polymer solutions

    International Nuclear Information System (INIS)

    Sadeghi, Rahmat

    2005-01-01

    A theory-based model is presented for correlating the viscosity of polymer solutions. It is based on the segment-based Eyring mixture viscosity model together with the segment-based Wilson model for describing deviations from ideality. The model has been applied to several polymer solutions, and the results show that it is reliable both for correlation and for prediction of the viscosity of polymer solutions at different polymer molar masses and temperatures.

  18. Exploring C-water-temperature interactions and non-linearities in soils through developments in process-based models

    Science.gov (United States)

    Esteban Moyano, Fernando; Vasilyeva, Nadezda; Menichetti, Lorenzo

    2016-04-01

    Soil carbon models developed over the last couple of decades are limited in their capacity to accurately predict the magnitudes and temporal variations of observed carbon fluxes and stocks. New process-based models are now emerging that attempt to address the shortcomings of their simpler, empirical counterparts. While a spectrum of ideas and hypothetical mechanisms are finding their way into new models, the addition of only a few processes known to significantly affect soil carbon (e.g. enzymatic decomposition, adsorption, Michaelis-Menten kinetics) has shown the potential to resolve a number of previous model-data discrepancies (e.g. priming, Birch effects). Through model-data validation, such models are a means of testing hypothetical mechanisms. In addition, they can lead to new insights into what soil carbon pools are and how they respond to external drivers. In this study we develop a model of soil carbon dynamics based on enzymatic decomposition and other key features of process-based models, i.e. simulation of carbon in particulate, soluble and adsorbed states, as well as enzyme and microbial components. Here we focus on understanding how moisture affects C decomposition at different levels, both directly (e.g. by limiting diffusion) and through interactions with other components. As the medium where most reactions and transport take place, water is central to every aspect of soil C dynamics. We compare results from a number of alternative models with experimental data in order to test different processes and parameterizations. Among other observations, we try to understand: 1. typical moisture response curves and associated temporal changes, 2. moisture-temperature interactions, and 3. diffusion effects under changing C concentrations. While the model aims at being a process-based approach and at simulating fluxes at short time scales, it remains a simplified representation using the same inputs as classical soil C models, and is thus potentially

  19. A compression-based model of musical learning

    DEFF Research Database (Denmark)

    Meredith, David

    for the most satisfying (usually the most economical) interpretation of the new work. This is modelled as the modification of a pre-existing program, P, that computes some corpus (i.e., a compact encoding of the corpus), so that it can additionally compute the object to be interpreted. In other words ... in musical perception. The feasibility of this view is demonstrated in a computational model which is applied to the first book of J. S. Bach’s Das Wohltemperirte Clavier. This model pre-processes the data using the author’s PS13s1 pitch spelling algorithm [4,5], then applies a modified version of the author’s COSIATEC algorithm [6] to derive compact encodings of works that maximise reuse of previous encodings. The resulting analyses will be presented and discussed. References [1] Pomerantz, J. R. and Kubovy, M. (1986). Theoretical approaches to perceptual organization: Simplicity and likelihood principles. In ...

  20. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    Science.gov (United States)

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, those methods exploit no prior knowledge about the given image collection to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue: some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes; thus the must-links in our approach are semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.

  1. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    Science.gov (United States)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano, but also the value of a physics-based
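
    A toy version of such a forecast helps fix ideas. The sketch below resamples an event duration from a stand-in catalogue, suppresses the effusion rate during the event with exponential recovery afterwards, and reports the resulting distribution of erupted volume. Every number is invented; the actual model, priors and data are those of the cited work.

        # Toy Monte Carlo eruption-rate forecast (illustrative only)
        import numpy as np

        rng = np.random.default_rng(42)
        past_durations_hr = rng.lognormal(2.0, 0.5, 600)   # stand-in event catalogue

        def simulate_volume(horizon_hr=48.0, q_supply=5.0, tau_hr=6.0):
            duration = rng.choice(past_durations_hr)       # stochastic event duration
            t = np.linspace(0.0, horizon_hr, 500)
            dt = t[1] - t[0]
            rate = np.where(t < duration,
                            0.3 * q_supply,                # suppressed during the event
                            q_supply - 0.7 * q_supply * np.exp(-(t - duration) / tau_hr))
            return float(rate.sum() * dt)

        volumes = np.array([simulate_volume() for _ in range(2000)])
        print("median volume:", np.median(volumes))
        print("90% interval:", np.percentile(volumes, [5, 95]))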

  2. Language acquisition is model-based rather than model-free.

    Science.gov (United States)

    Wang, Felix Hao; Mintz, Toben H

    2016-01-01

    Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.

  3. Double-balloon catheter for induction of labour in women with a previous cesarean section, could it be the best choice?

    Science.gov (United States)

    De Bonrostro Torralba, Carlos; Tejero Cabrejas, Eva Lucía; Marti Gamboa, Sabina; Lapresta Moros, María; Campillos Maza, Jose Manuel; Castán Mateo, Sergio

    2017-05-01

    We analysed the efficacy and safety of double-balloon catheter for cervical ripening in women with a previous cesarean section and which were the most important variables associated with an increased risk of repeated cesarean delivery. We designed an observational retrospective study of 418 women with unfavourable cervices (Bishop Score cesarean delivery, and induction of labour with a double-balloon catheter. Baseline maternal data and perinatal outcomes were recorded for a descriptive, bivariate, and multivariate analysis. A p value cesarean section were dystocia in the previous pregnancy (OR 1.744; CI 95% 1.066-2.846), the absence of previous vaginal delivery (OR 2.590; CI 95% 1.066-6.290), suspected fetal macrosomia (OR 2.410; CI 95% 0.959-6.054), and duration of oxytocin induction period (OR 1.005; CI 95% 1.004-1.006). The area under the curve was 0.789 (p cesarean delivery and unfavourable cervix. In our study, most women could have a vaginal delivery in spite of their risk factors for cesarean delivery. A multivariate model based on some clinical variables has moderate predictive value for intrapartum cesarean section.

  4. A Framework for Effective Assessment of Model-based Projections of Biodiversity to Inform the Next Generation of Global Conservation Targets

    Science.gov (United States)

    Myers, B.; Beard, T. D.; Weiskopf, S. R.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.; Casey, K.; Lenton, T. M.; Leidner, A. K.; Ruane, A. C.; Ferrier, S.; Serbin, S.; Matsuda, H.; Shiklomanov, A. N.; Rosa, I.

    2017-12-01

    Biodiversity and ecosystem services underpin political targets for the conservation of biodiversity; however, previous incarnations of these biodiversity-related targets have not relied on integrated, model-based projections of possible outcomes under climate and land-use change. Although a few global biodiversity models are available, most biodiversity models lie along a continuum of geographic scope and components of biodiversity. Model-based projections of the future of global biodiversity are critical to support policymakers in the development of informed global conservation targets, but the scientific community lacks a clear strategy for integrating diverse data streams into developing, and evaluating the performance of, such biodiversity models. Therefore, in this paper, we propose a framework for ongoing testing and refinement of model-based projections of biodiversity trends and change, by linking a broad variety of biodiversity models with data streams generated by advances in remote sensing, coupled with new and emerging in-situ observation technologies, to inform the development of essential biodiversity variables, future global biodiversity targets, and indicators. Our two main objectives are to (1) develop a framework for testing and refining the projections of a broad range of biodiversity models, focusing on global models, through the integration of diverse data streams, and (2) identify the realistic outputs that can be developed and determine coupled approaches using remote sensing and new and emerging in-situ observations (e.g., metagenomics) to better inform the next generation of global biodiversity targets.

  5. NMR-based phytochemical analysis of Vitis vinifera cv Falanghina leaves. Characterization of a previously undescribed biflavonoid with antiproliferative activity.

    Science.gov (United States)

    Tartaglione, Luciana; Gambuti, Angelita; De Cicco, Paola; Ercolano, Giuseppe; Ianaro, Angela; Taglialatela-Scafati, Orazio; Moio, Luigi; Forino, Martino

    2018-03-01

    Vitis vinifera cv Falanghina is an ancient grape variety of Southern Italy. A thorough phytochemical analysis of Falanghina leaves was conducted to investigate their specialised metabolite content. Along with already known molecules, such as caftaric acid, quercetin-3-O-β-d-glucopyranoside, quercetin-3-O-β-d-glucuronide, kaempferol-3-O-β-d-glucopyranoside and kaempferol-3-O-β-d-glucuronide, a previously undescribed biflavonoid was identified. For this last compound, moderate bioactivity against the proliferation of metastatic melanoma cells was discovered. This finding may be of interest to researchers studying human melanoma. The high content of antioxidant glycosylated flavonoids supports the exploitation of grapevine leaves as an inexpensive source of natural products for the food industry and for both pharmaceutical and nutraceutical companies. Additionally, this study offers important insights into the plant's physiology, prompting possible technological research on genetic selection based on vine adaptation to specific pedo-climatic environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. 77 FR 33622 - Airworthiness Directives; Alpha Aviation Concept Limited (Type Certificate Previously Held by...

    Science.gov (United States)

    2012-06-07

    ... Airworthiness Directives; Alpha Aviation Concept Limited (Type Certificate Previously Held by Alpha Aviation... Aviation Concept Limited Model R2160 Airplanes. This AD results from mandatory continuing airworthiness... condition on an aviation product. The MCAI describes the unsafe condition as oil lines fitted to affected...

  7. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    technique involve model structure, system representation and the degree of validity, coupled with the simplicity, of the overall model. ABM is best suited... system representation of the air combat system. We feel that a simulation model that combines ABM with equation-based representation of weapons and... AGENT-BASED MODELING METHODOLOGY FOR ANALYZING WEAPONS SYSTEMS THESIS Casey D. Connors, Major, USA

  8. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.

  9. Towards a standard model for research in agent-based modeling and simulation

    Directory of Open Access Journals (Sweden)

    Nuno Fachada

    2015-11-01

    Full Text Available Agent-based modeling (ABM) is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as an independent decision-making agent. ABMs are very sensitive to implementation details. Thus, it is very easy to inadvertently introduce changes which modify model dynamics. Such problems usually arise due to the lack of transparency in model descriptions, which constrains how models are assessed, implemented and replicated. In this paper, we present PPHPC, a model which aims to serve as a standard in agent-based modeling research, namely, but not limited to, conceptual model specification, statistical analysis of simulation output, model comparison and parallelization studies. This paper focuses on the first two aspects (conceptual model specification and statistical analysis of simulation output), also providing a canonical implementation of PPHPC. The paper serves as a complete reference to the presented model, and can be used as a tutorial for simulation practitioners who wish to improve the way they communicate their ABMs.

  10. A symmetry model for genetic coding via a wallpaper group composed of the traditional four bases and an imaginary base E: towards category theory-like systematization of molecular/genetic biology.

    Science.gov (United States)

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2014-05-07

    Previously, we suggested prototypal models that describe some clinical states based on group postulates. Here, we demonstrate a group/category theory-like model for molecular/genetic biology as an alternative application of our previous model. Specifically, we focus on deoxyribonucleic acid (DNA) base sequences. We construct a wallpaper pattern based on a five-letter cruciform motif with letters C, A, T, G, and E. Whereas the first four letters represent the standard DNA bases, the fifth is introduced for ease in formulating group operations that reproduce insertions and deletions of DNA base sequences. A basic group Z5 = {r, u, d, l, n} of operations is defined for the wallpaper pattern, with which a sequence of points can be generated corresponding to changes of a base in a DNA sequence by following the orbit of a point of the pattern under operations in group Z5. Other manipulations of DNA sequences can be treated using a vector-like notation 'Dj' corresponding to a DNA sequence but based on the five-letter base set; the 'Dj's are also expressed graphically. Insertions and deletions of a series of letters 'E' are admitted to assist in describing DNA recombination. Likewise, a vector-like notation Rj can be constructed for sequences of ribonucleic acid (RNA). The wallpaper group B = {Z5×∞, ●} (an ∞-fold Cartesian product of Z5) acts on Dj (or Rj), yielding changes to Dj (or Rj) denoted by 'Dj◦B(j→k) = Dk' (or 'Rj◦B(j→k) = Rk'). Based on the operations of this group, two types of groups (a modulo 5 linear group and a rotational group over the Gaussian plane, acting on the five bases) are linked as parts of the wallpaper group for broader applications. As a result, changes, insertions/deletions and DNA (RNA) recombination (partial/total conversion) are described. As an exploratory study, a notation for the canonical "central dogma" is presented in a category theory-like way for future developments. Despite the large incompleteness of our
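
    The abstract's notation can be made concrete with a few lines of code. The sketch below realises Z5 as cyclic shifts on the five-letter base set and applies a tuple of operations elementwise to a sequence, in the spirit of 'Dj◦B(j→k) = Dk'; the assignment of {r, u, d, l, n} to particular shift amounts is an assumption for illustration, not the paper's definition.

        # Z5 acting on the five-letter base set (illustrative mapping of operations)
        BASES = ["C", "A", "T", "G", "E"]
        OPS = {"n": 0, "r": 1, "u": 2, "d": 3, "l": 4}   # Z5 elements as shifts mod 5

        def act(op, base):
            """Apply one Z5 operation to a single base."""
            return BASES[(BASES.index(base) + OPS[op]) % 5]

        def act_seq(ops, seq):
            """Apply operations elementwise to a base sequence (a 'Dj')."""
            return "".join(act(o, b) for o, b in zip(ops, seq))

        print(act_seq(("n", "r", "n", "d"), "CATG"))   # -> CTTA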

  11. Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis G.; Jeung, Ho Young; Aberer, Karl

    2012-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need of researching techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various, day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...

  12. INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?

    Science.gov (United States)

    Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P

    2015-01-01

    Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible, but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and on model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality. Sorting the population once reduced simulation time by a factor of two. Storing person attributes separately instead of using person objects also proved more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate, reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.
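
    The data-locality findings suggest a "struct of arrays" layout with contacts partitioned by health status before the transmission loop. The sketch below illustrates that idea only; population size, contact counts and transmission probability are arbitrary stand-ins, not values from the cited work.

        # Partitioning by health status before transmission (illustrative only)
        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000
        status = rng.choice([0, 1, 2], size=n, p=[0.98, 0.01, 0.01])  # 0=S, 1=I, 2=R

        # Partition once per time step instead of branching inside the pair loop
        susceptible = np.flatnonzero(status == 0)
        infectious = np.flatnonzero(status == 1)

        # Transmission only needs S x I pairs, so the loop never touches the
        # majority of the population that can neither transmit nor be infected
        contacts = rng.choice(susceptible, size=infectious.size * 10)
        infected = np.unique(contacts[rng.random(contacts.size) < 0.05])
        status[infected] = 1
        print("new infections this step:", infected.size)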

  13. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts: those focused on addressing uncertainty in physical parameters (e.g. soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering the confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  14. Hybrid modelling framework by using mathematics-based and information-based methods

    International Nuclear Information System (INIS)

    Ghaboussi, J; Kim, J; Elnashai, A

    2010-01-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects which the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. The potential of the hybrid methodology is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.

  15. The Finslerian wormhole models

    Energy Technology Data Exchange (ETDEWEB)

    Rahaman, Farook; Paul, Nupur; Banerjee, Ayan [Jadavpur University, Department of Mathematics, Kolkata, West Bengal (India); De, S.S. [University of Calcutta, Department of Applied Mathematics, Kolkata, West Bengal (India); Ray, Saibal [Government College of Engineering and Ceramic Technology, Department of Physics, Kolkata, West Bengal (India); Usmani, A.A. [Aligarh Muslim University, Department of Physics, Aligarh, Uttar Pradesh (India)

    2016-05-15

    We present models of wormholes under the Finslerian structure of spacetime. This is a sequel to our previous work (Eur Phys J 75:564, 2015), where we constructed a toy model for compact stars based on the Finslerian spacetime geometry. In the present investigation, a wide variety of solutions are obtained that explore the wormhole geometry by considering different choices for the form function and energy density. The solutions, like those in the previous work, are revealed to be physically interesting and viable models for the explanation of wormholes as far as the background theory and literature are concerned. (orig.)

  16. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster-based classification model for suspicious email detection and other text classification tasks. Text classification tasks often comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster, with reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms existing classification models for the tasks of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...

  17. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g. in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning, DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  18. A prospective gating method to acquire a diverse set of free-breathing CT images for model-based 4DCT

    Science.gov (United States)

    O'Connell, D.; Ruan, D.; Thomas, D. H.; Dou, T. H.; Lewis, J. H.; Santhanam, A.; Lee, P.; Low, D. A.

    2018-02-01

    Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol with a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criterion for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans where 5  ⩽  N  ⩽  9 were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or ‘spread’, of the respiratory trajectories of each subset. The minimum phase coverage necessary to achieve mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criterion. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25-scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5  ±  4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means to sample the breathing cycle with limited free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while

  19. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can combine statistics and dynamics to a certain extent.

  20. Prospective memory and its correlates and predictors in schizophrenia: an extension of previous findings.

    Science.gov (United States)

    Ungvari, Gabor S; Xiang, Yu-Tao; Tang, Wai-Kwong; Shum, David

    2008-09-01

    Prospective memory (PM) is the ability to remember to do something in the future without explicit prompts. Extending the number of subjects and the scope of our previously published study, this investigation examined the relationship between PM and socio-demographic and clinical factors, activities of daily living (ADL) and frontal lobe functions in patients with chronic schizophrenia. One hundred and ten Chinese schizophrenia patients, 60 from the previous study and 50 additional patients recruited for this study, and 110 matched healthy comparison subjects (HC) formed the study sample. Patients' clinical condition and activities of daily living were evaluated with the Brief Psychiatric Rating Scale (BPRS) and the Functional Needs Assessment (FNA). Time- and event-based PM tasks and three tests of prefrontal lobe function (Design Fluency Test [DFT], Tower of London [TOL], Wisconsin Card Sorting Test [WCST]) were also administered. Patients' level of ADL and psychopathology were not associated with PM functions, and only anticholinergic medications (ACM) showed a significant negative correlation with PM tasks. Confirming the findings of the previous study, patients performed significantly more poorly on both PM tasks than HC. Performance on the time-based PM task correlated significantly with age, education level and DFT in HC, and with age, DFT, TOL and WCST in patients. Patients' performance on the event-based PM task correlated with DFT and one measure of the WCST. In patients, TOL and age predicted performance on the time-based PM task; DFT and WCST predicted the event-based task. Involving a large sample of patients with matched controls, this study confirmed that PM is impaired in chronic schizophrenia. Deficient PM functions were related to prefrontal lobe dysfunction in both HC and patients, but not to the patients' clinical condition, nor did they significantly affect ADL. ACMs influenced certain aspects of PM.

  1. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  2. Application of a process-based shallow landslide hazard model over a broad area in Central Italy

    Science.gov (United States)

    Gioia, Eleonora; Speranza, Gabriella; Ferretti, Maurizio; Godt, Jonathan W.; Baum, Rex L.; Marincioni, Fausto

    2015-01-01

    Process-based models are widely used for rainfall-induced shallow landslide forecasting. Previous studies have successfully applied the U.S. Geological Survey’s Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability (TRIGRS) model (Baum et al. 2002) to compute infiltration-driven changes in the hillslopes’ factor of safety on small scales (i.e., tens of square kilometers). Soil data input for such models are difficult to obtain across larger regions. This work describes a novel methodology for the application of TRIGRS over broad areas with relatively uniform hydrogeological properties. The study area is a 550-km2 region in Central Italy covered by post-orogenic Quaternary sediments. Due to the lack of field data, we assigned mechanical and hydrological property values through a statistical analysis based on literature review of soils matching the local lithologies. We calibrated the model using rainfall data from 25 historical rainfall events that triggered landslides. We compared the variation of pressure head and factor of safety with the landslide occurrence to identify the best fitting input conditions. Using calibrated inputs and a soil depth model, we ran TRIGRS for the study area. Receiver operating characteristic (ROC) analysis, comparing the model’s output with a shallow landslide inventory, shows that TRIGRS effectively simulated the instability conditions in the post-orogenic complex during historical rainfall scenarios. The implication of this work is that rainfall-induced landslides over large regions may be predicted by a deterministic model, even where data on geotechnical and hydraulic properties as well as temporal changes in topography or subsurface conditions are not available.
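
    The abstract does not reproduce the stability criterion itself. For orientation, TRIGRS-type schemes evaluate an infinite-slope factor of safety at depth Z and time t; a standard form (a sketch of the usual convention, not a quotation from the paper) is:

        F_S(Z,t) = \frac{\tan\phi'}{\tan\delta} + \frac{c' - \psi(Z,t)\,\gamma_w \tan\phi'}{\gamma_s\, Z \sin\delta \cos\delta}

    where δ is the slope angle, φ' the effective friction angle, c' the effective cohesion, ψ(Z,t) the pressure head computed from transient infiltration, and γw, γs the unit weights of water and soil; F_S < 1 flags predicted instability, which is how infiltration-driven changes in the factor of safety enter the hazard assessment.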

  3. Risk-based systems analysis for emerging technologies: Applications of a technology risk assessment model to public decision making

    International Nuclear Information System (INIS)

    Quadrel, M.J.; Fowler, K.M.; Cameron, R.; Treat, R.J.; McCormack, W.D.; Cruse, J.

    1995-01-01

    The risk-based systems analysis model was designed to establish funding priorities among competing technologies for tank waste remediation. The model addresses a gap in the Department of Energy's (DOE's) "toolkit" for establishing funding priorities among emerging technologies by providing disciplined risk and cost assessments of candidate technologies within the context of a complete remediation system. The model is comprised of a risk and cost assessment and a decision interface. The former assesses the potential reductions in risk and cost offered by new technology relative to the baseline risk and cost of an entire system. The latter places this critical information in context of other values articulated by decision makers and stakeholders in the DOE system. The risk assessment portion of the model is demonstrated for two candidate technologies for tank waste retrieval (arm-based mechanical retrieval, the "long reach arm") and subsurface barriers (close-coupled chemical barriers). Relative changes from the base case in cost and risk are presented for these two technologies to illustrate how the model works. The model and associated software build on previous work performed for DOE's Office of Technology Development and the former Underground Storage Tank Integrated Demonstration, and complement a decision making tool presented at Waste Management 1994 for integrating technical judgements and non-technical (stakeholder) values when making technology funding decisions.

  4. Free web-based modelling platform for managed aquifer recharge (MAR) applications

    Science.gov (United States)

    Stefan, Catalin; Junghanns, Ralf; Glaß, Jana; Sallwey, Jana; Fatkhutdinov, Aybulat; Fichtner, Thomas; Barquero, Felix; Moreno, Miguel; Bonilla, José; Kwoyiga, Lydia

    2017-04-01

    Managed aquifer recharge represents a valuable instrument for sustainable water resources management. The concept implies purposeful infiltration of surface water into the underground for later recovery or environmental benefits. Over decades, MAR schemes were successfully installed worldwide for a variety of reasons: to maximize the natural storage capacity of aquifers, physical aquifer management, water quality management, and ecological benefits. The INOWAS-DSS platform provides a collection of free web-based tools for planning, management and optimization of the main components of MAR schemes. The tools are grouped into 13 specific applications that cover the most relevant challenges encountered at MAR sites, from both quantitative and qualitative perspectives. The applications include, among others, the optimization of MAR site location, the assessment of saltwater intrusion, the restoration of groundwater levels in overexploited aquifers, the maximization of the natural storage capacity of aquifers, the improvement of water quality, the design and operational optimization of MAR schemes, clogging development and risk assessment. The platform contains a collection of about 35 web-based tools of various degrees of complexity, which are either included in application-specific workflows or used as standalone modelling instruments. Among them are simple tools derived from data mining and empirical equations, analytical groundwater-related equations, as well as complex numerical flow and transport models (MODFLOW, MT3DMS and SEAWAT). Up to now, the simulation core of the INOWAS-DSS, which is based on the finite-difference groundwater flow model MODFLOW, is implemented and runs on the web. A scenario analyser helps to easily set up and evaluate new management options as well as future developments such as land-use and climate change, and to compare them to previous scenarios. Additionally, simple tools such as analytical equations to assess saltwater intrusion are already running online.

  5. System Dynamics as Model-Based Theory Building

    OpenAIRE

    Schwaninger, Markus; Grösser, Stefan N.

    2008-01-01

    This paper introduces model-based theory building as a feature of system dynamics (SD) with large potential. It presents a systemic approach to actualizing that potential, thereby opening up a new perspective on theory building in the social sciences. The question addressed is whether and how SD enables the construction of high-quality theories. This contribution is based on field-experiment-type projects which have been focused on model-based theory building, specifically the construction of a mi...

  6. Search-based model identification of smart-structure damage

    Science.gov (United States)

    Glass, B. J.; Macalou, A.

    1991-01-01

    This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next-generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.

  7. Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach

    Science.gov (United States)

    Yu, Bing; Shu, Wenjun

    2017-03-01

    The nonlinear model of a turbofan engine above idle state based on NARX is studied. First, data sets for the JT9D engine are obtained via simulation from an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from these data sets. Finally, simulations are performed to verify the accuracy and dynamic performance of the models; the results show that the NARX models reflect the dynamic characteristics of the turbofan engine with high accuracy.
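
    As a pointer to what "NARX" means in practice, the sketch below identifies a system from lagged outputs and lagged inputs. It uses plain least squares on synthetic data; the cited work would use a nonlinear regressor and actual JT9D simulation data, so treat this purely as a structural illustration.

        # NARX-style one-step identification from lagged inputs/outputs
        import numpy as np

        rng = np.random.default_rng(3)
        T, lag = 500, 2
        u = rng.uniform(0.0, 1.0, T)              # exogenous input (e.g. fuel flow)
        y = np.zeros(T)
        for t in range(lag, T):                   # surrogate "true" dynamics
            y[t] = 0.6 * y[t-1] - 0.1 * y[t-2] + 0.8 * u[t-1] + 0.01 * rng.normal()

        # Regressor rows hold the previous `lag` outputs and inputs
        rows = [np.concatenate([y[t-lag:t], u[t-lag:t]]) for t in range(lag, T)]
        X, target = np.array(rows), y[lag:]
        theta, *_ = np.linalg.lstsq(X, target, rcond=None)
        rms = np.sqrt(np.mean((X @ theta - target) ** 2))
        print("one-step RMS error:", rms)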

  8. 76 FR 41432 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Science.gov (United States)

    2011-07-14

    ... Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Model Galaxy, Gulfstream... proposed AD. Discussion The Civil Aviation Authority (CAA), which is the aviation authority for Israel, has... Held by Israel Aircraft Industries, Ltd.): Docket No. FAA-2011-0716; Directorate Identifier 2011-NM-013...

  9. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
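
    One common way to write the Weibull saccharification model is Y(t) = Ymax(1 - exp(-(t/λ)^n)), with λ the characteristic time highlighted above. The sketch below fits that form to made-up yield data; the functional form is a reasonable reading of the abstract, and the numbers are purely illustrative.

        # Fitting a Weibull saccharification curve (synthetic data)
        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_yield(t, ymax, lam, n):
            return ymax * (1.0 - np.exp(-(t / lam) ** n))

        t = np.array([2.0, 4.0, 8.0, 12.0, 24.0, 48.0, 72.0])    # hours
        y = np.array([5.1, 9.8, 17.0, 22.4, 33.0, 42.1, 45.9])   # g/L glucose (made up)

        (ymax, lam, n), _ = curve_fit(weibull_yield, t, y, p0=[50.0, 20.0, 1.0])
        print(f"Ymax={ymax:.1f} g/L, characteristic time lam={lam:.1f} h, n={n:.2f}")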

  10. A conflict-based model of color categorical perception: evidence from a priming study.

    Science.gov (United States)

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  11. Targeting Alzheimer's disease by investigating previously unexplored chemical space surrounding the cholinesterase inhibitor donepezil

    CSIR Research Space (South Africa)

    Van Greunen, DG

    2017-02-01

    Full Text Available A series of twenty-seven acetylcholinesterase inhibitors, as potential agents for the treatment of Alzheimer's disease, was designed and synthesised based upon previously unexplored chemical space surrounding the molecular skeleton of the drug...

  12. Linking agent-based models and stochastic models of financial markets.

    Science.gov (United States)

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene

    2012-05-29

    It is well known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.
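
    A toy version of the mechanism described here, traders herding on shared technical strategies and thereby producing fat-tailed returns, can be simulated in a few lines. Everything below (the herding distribution, agent count, and return definition) is an illustrative assumption rather than the paper's calibrated model; the point is only that a fluctuating herding fraction alone yields excess kurtosis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_returns(steps=10_000, n_agents=1_000):
        """Toy agent-based market: each step, a random fraction of agents
        herds on the same technical signal while the rest trade
        independently; the aggregate order imbalance sets the return."""
        returns = np.empty(steps)
        for t in range(steps):
            herd_frac = rng.beta(1, 10)            # occasional strong herding
            signal = rng.choice([-1.0, 1.0])
            n_herd = int(herd_frac * n_agents)
            solo = rng.choice([-1.0, 1.0], size=n_agents - n_herd)
            returns[t] = (n_herd * signal + solo.sum()) / n_agents
        return returns

    r = simulate_returns()
    excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
    print(f"excess kurtosis = {excess_kurtosis:.1f}  (> 0 means fat tails)")
    ```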

  13. New molecular descriptors based on local properties at the molecular surface and a boiling-point model derived from them.

    Science.gov (United States)

    Ehresmann, Bernd; de Groot, Marcel J; Alex, Alexander; Clark, Timothy

    2004-01-01

    New molecular descriptors based on statistical descriptions of the local ionization potential, local electron affinity, and the local polarizability at the surface of the molecule are proposed. The significance of these descriptors has been tested by calculating them for the Maybridge database in addition to our set of 26 descriptors reported previously. The new descriptors show little correlation with those already in use. Furthermore, the principal components of the extended set of descriptors for the Maybridge data show that especially the descriptors based on the local electron affinity extend the variance in our set of descriptors, which we have previously shown to be relevant to physical properties. The first nine principal components are shown to be most significant. As an example of the usefulness of the new descriptors, we have set up a QSPR model for boiling points using both the old and new descriptors.
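
    A minimal sketch of the descriptor-to-property workflow described above: regress boiling points on the leading principal components of a descriptor matrix. The random matrices below stand in for real surface-property descriptors; taking nine components mirrors the abstract's "first nine principal components", but the rest of the pipeline is a generic assumption, not the authors' exact QSPR procedure.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder descriptor matrix: rows = molecules, columns = surface
    # statistics (local ionization potential, local electron affinity,
    # local polarizability, ...); real values would come from the
    # quantum-chemical calculations described above.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 26))
    bp = X @ rng.standard_normal(26) + 400.0 + rng.standard_normal(200) * 5.0

    # Regress boiling point on the nine leading principal components.
    model = make_pipeline(PCA(n_components=9), LinearRegression())
    model.fit(X, bp)
    print(f"R^2 on training data: {model.score(X, bp):.2f}")
    ```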

  14. Previous medical history of diseases in children with attention deficit hyperactivity disorder and their parents

    Directory of Open Access Journals (Sweden)

    Ayyoub Malek

    2014-02-01

    Full Text Available Introduction: The etiology of attention deficit hyperactivity disorder (ADHD) is complex and most likely includes genetic and environmental factors. This study was conducted to evaluate the role of previous medical history of diseases in ADHD children and their parents during the earlier years of the ADHD children's lives. Methods: In this case-control study, 164 ADHD children attending the Child and Adolescent Psychiatric Clinics of Tabriz University of Medical Sciences, Iran, were compared with 166 normal children selected by a random-cluster method from primary and guidance schools. The ADHD rating scale (parents' version) and a clinical interview based on the Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version (K-SADS) were used to diagnose ADHD cases and to select the control group. The two groups were compared for the existence of a previous medical history of diseases in children and parents. Fisher's exact test and a logistic regression model were used for data analysis. Results: The frequency of maternal history of medical disorders (28.7% vs. 12.0%; P = 0.001) was significantly higher in children with ADHD compared with the control group. The frequencies of jaundice, dysentery, epilepsy, asthma, allergy, and head trauma in the medical history of the children did not differ significantly between the two groups. Conclusion: According to this preliminary study, it may be concluded that maternal history of medical disorders is one of the contributing risk factors for ADHD.

  15. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    Science.gov (United States)

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex and mostly unknown biological rules, so an intelligent algorithm is needed to avoid resource-intensive, lab-based trial-and-error experiments for determining the expression level of a recombinant protein. The purpose of this study is to propose a predictive model, the first in the literature, to estimate the level of recombinant protein overexpression using a machine learning approach based on the sequence, the expression vector, and the expression host. The expression host was confined to Escherichia coli, the most popular bacterial host for overexpressing recombinant proteins. To make the problem tractable, the overexpression level was categorized as low, medium, or high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. A representative subset of these features was then determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the small, imbalanced multi-class dataset constructed. The results showed that the predictive model achieved a promising average accuracy of 80% in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.
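
    The classifier stage can be sketched with scikit-learn as below. The feature names, dataset size, and class proportions are hypothetical placeholders rather than the study's actual data; class_weight="balanced" is one common way to handle the kind of small, imbalanced multi-class dataset the abstract mentions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Hypothetical features: e.g. gene length, GC content, codon adaptation
    # index, vector promoter strength -- placeholders, not the study's set.
    X = rng.random((120, 10))
    # Expression classes 0 = low, 1 = medium, 2 = high, deliberately skewed.
    y = rng.choice([0, 1, 2], size=120, p=[0.5, 0.3, 0.2])

    clf = RandomForestClassifier(n_estimators=200,
                                 class_weight="balanced",  # imbalanced data
                                 random_state=0)
    print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
    ```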

  16. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo forest, Puerto Rico

    Science.gov (United States)

    Lepore, C.; Arnone, E.; Noto, L. V.; Sivandran, G.; Bras, R. L.

    2013-09-01

    This paper presents the development of a rainfall-triggered landslide module within an existing physically based spatially distributed ecohydrologic model. The model, tRIBS-VEGGIE (Triangulated Irregular Networks-based Real-time Integrated Basin Simulator and Vegetation Generator for Interactive Evolution), is capable of a sophisticated description of many hydrological processes; in particular, the soil moisture dynamics are resolved at a temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The validity of the tRIBS-VEGGIE model to a tropical environment is shown with an evaluation of its performance against direct observations made within the study area of Luquillo Forest. The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards' equation (present in tRIBS-VEGGIE but not in tRIBS), which better represents the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the factor of safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the infinite slope model, creating a powerful tool for the assessment of rainfall-triggered landslide risk.
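
    For orientation, one widely used extended infinite-slope expression that retains the matric suction contribution (following Fredlund's unsaturated shear strength; the paper's exact formulation may differ in detail) is

    $$ FS = \frac{c' + (\sigma_n - u_a)\tan\phi' + (u_a - u_w)\tan\phi^b}{\gamma\, z \sin\beta \cos\beta} $$

    where $c'$ is the effective cohesion, $\sigma_n$ the normal stress, $u_a$ and $u_w$ the pore-air and pore-water pressures (so $u_a - u_w$ is the matric suction), $\phi'$ the effective friction angle, $\phi^b$ the friction angle associated with suction, $\gamma$ the soil unit weight, $z$ the depth of the slip surface, and $\beta$ the slope angle; the suction term vanishes as the soil saturates.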

  17. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo forest, Puerto Rico

    Directory of Open Access Journals (Sweden)

    C. Lepore

    2013-09-01

    Full Text Available This paper presents the development of a rainfall-triggered landslide module within an existing physically based spatially distributed ecohydrologic model. The model, tRIBS-VEGGIE (Triangulated Irregular Networks-based Real-time Integrated Basin Simulator and Vegetation Generator for Interactive Evolution), is capable of a sophisticated description of many hydrological processes; in particular, the soil moisture dynamics are resolved at a temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The validity of the tRIBS-VEGGIE model to a tropical environment is shown with an evaluation of its performance against direct observations made within the study area of Luquillo Forest. The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards' equation (present in tRIBS-VEGGIE but not in tRIBS), which better represents the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the factor of safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the infinite slope model, creating a powerful tool for the assessment of rainfall-triggered landslide risk.

  18. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    Science.gov (United States)

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex, emergent behaviour of biological systems and produce accurate results, but with the drawback of needing substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
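
    As a flavour of the equation-based side of such a division of labour, the sketch below integrates a deliberately simple ODE surrogate (logistic tumour growth with a constant treatment kill term) using SciPy. The model structure and all rates are invented for illustration; the paper's actual ODE/SDE system for cell reprogramming is richer.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def tumour_ode(t, x, r, k, delta):
        """Toy equation-based surrogate: logistic tumour growth with a
        constant treatment-induced kill rate delta."""
        n = x[0]
        return [r * n * (1.0 - n / k) - delta * n]

    # Invented rates: growth r, carrying capacity k, treatment kill delta.
    sol = solve_ivp(tumour_ode, (0.0, 30.0), [1e3], args=(0.4, 1e6, 0.1))
    print(f"cells at t = 30: {sol.y[0, -1]:.3g}")
    ```

    Because evaluating such a surrogate is cheap, treatment schedules and dosages can be optimized or swept for sensitivity far faster than by re-running the agent-based simulation.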

  19. L-Py: an L-System simulation framework for modeling plant development based on a dynamic language

    Directory of Open Access Journals (Sweden)

    Frederic Boudon

    2012-05-01

    Full Text Available The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were based on static languages, i.e. languages that require explicit definition of variable types before use. These languages are often efficient but involve considerable syntactic overhead, restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data structures (a common way to represent plants at several scales) into L-systems, thus enabling the use of a wide spectrum of MTG-based computer tools developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models and to teach plant modeling in the classroom.
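
    The core L-system idea, parallel string rewriting, fits in a few lines of plain Python, shown below for orientation. This is not L-Py's actual syntax, which adds turtle interpretation, modules, and parametric productions on top of this principle.

    ```python
    def lsystem(axiom, rules, steps):
        """Parallel rewriting of an L-system string: at each step, every
        symbol with a production rule is replaced simultaneously."""
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # Lindenmayer's classic Anabaena model: A -> AB, B -> A.
    print(lsystem("A", {"A": "AB", "B": "A"}, 5))   # ABAABABAABAAB
    ```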

  20. Simple Models for Model-based Portfolio Load Balancing Controller Synthesis

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Mølbak, Tommy; Bendtsen, Jan Dimon

    2010-01-01

    This paper presents simple models of the portfolio of generation units existing in an electrical power supply network, for use, for instance, in model-based predictive control or declarative control schemes. We focus on the effectuators found in the Danish power system. In particular, the paper presents models for boiler load, district heating, condensate throttling...