WorldWideScience

Sample records for computer modeling describes

  1. Computer modeling describes gravity-related adaptation in cell cultures.

    Science.gov (United States)

    Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny

    2009-12-16

    Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.
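
    A minimal sketch of the kind of evolutionary agent-based model the abstract describes: cells carry a randomly varied heritable trait, share a scarce nutrient supply (starvation), and pay a maintenance cost that grows with the g-level. All names and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import random

GRAVITY = 3.0         # relative g-level (1.0 = normal gravity); illustrative
FOOD_PER_STEP = 60.0  # scarce nutrient supply shared by the whole colony

class Cell:
    def __init__(self, efficiency):
        self.efficiency = efficiency  # heritable trait: nutrient-use efficiency
        self.energy = 1.0

    def offspring(self):
        # heritable characteristics are transferred with random variation
        child_trait = max(0.1, random.gauss(self.efficiency, 0.05))
        return Cell(child_trait)

def step(colony):
    share = FOOD_PER_STEP / max(len(colony), 1)
    survivors, births = [], []
    for cell in colony:
        # maintenance cost grows with g-level; efficient cells keep more energy
        cell.energy += share * cell.efficiency - 0.1 * GRAVITY
        if cell.energy <= 0:
            continue                      # death by starvation
        if cell.energy >= 2.0:            # division threshold
            cell.energy /= 2
            births.append(cell.offspring())
        survivors.append(cell)
    return survivors + births

colony = [Cell(random.uniform(0.5, 1.5)) for _ in range(100)]
for t in range(200):
    colony = step(colony)
if colony:
    # under high g, the surviving population drifts toward higher efficiency
    print(len(colony), sum(c.efficiency for c in colony) / len(colony))
```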

  2. Computer-aided Nonlinear Control System Design Using Describing Function Models

    CERN Document Server

    Nassirharand, Amir

    2012-01-01

    A systematic computer-aided approach provides a versatile setting for the control engineer to overcome the complications of controller design for highly nonlinear systems. Computer-aided Nonlinear Control System Design provides such an approach based on the use of describing functions. The text deals with a large class of nonlinear systems without restrictions on the system order, the number of inputs and/or outputs or the number, type or arrangement of nonlinear terms. The strongly software-oriented methods detailed facilitate fulfillment of tight performance requirements and help the designer to think in purely nonlinear terms, avoiding the expedient of linearization, which can impose substantial and unrealistic model limitations and drive up the cost of the final product. Design procedures are presented in a step-by-step algorithmic format, each step being a functional unit with outputs that drive the other steps. This procedure may be easily implemented on a digital computer with example problems from mecha...

  3. An algorithm to detect and communicate the differences in computational models describing biological systems.

    Science.gov (United States)

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-02-15

    Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.
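
    BiVeS itself is a Java library; purely as a conceptual illustration, the following Python sketch performs the simplest kind of difference detection between two versions of an SBML-like model, reporting inserted, deleted and modified species. The XML snippets and tag names are simplified assumptions, not real BioModels entries and not the BiVeS API.

```python
import xml.etree.ElementTree as ET

# Two hypothetical versions of a toy SBML-like model.
v1 = """<model><listOfSpecies>
  <species id="A" initialConcentration="1.0"/>
  <species id="B" initialConcentration="0.5"/>
</listOfSpecies></model>"""

v2 = """<model><listOfSpecies>
  <species id="A" initialConcentration="2.0"/>
  <species id="C" initialConcentration="0.1"/>
</listOfSpecies></model>"""

def species_table(doc):
    """Map species id -> attribute dict for one model version."""
    root = ET.fromstring(doc)
    return {s.get("id"): s.attrib for s in root.iter("species")}

old, new = species_table(v1), species_table(v2)
for sid in sorted(old.keys() - new.keys()):
    print("deleted species:", sid)
for sid in sorted(new.keys() - old.keys()):
    print("inserted species:", sid)
for sid in sorted(old.keys() & new.keys()):
    if old[sid] != new[sid]:
        print("modified species:", sid, old[sid], "->", new[sid])
```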

  4. Computational Models Describing Possible Mechanisms for Generation of Excessive Beta Oscillations in Parkinson's Disease.

    Directory of Open Access Journals (Sweden)

    Alex Pavlides

    2015-12-01

    In Parkinson's disease, an increase in beta oscillations within the basal ganglia nuclei has been shown to be associated with difficulty in movement initiation. An important role in the generation of these oscillations is thought to be played by the motor cortex and by a network composed of the subthalamic nucleus (STN) and the external segment of the globus pallidus (GPe). Several alternative models have been proposed to describe the mechanisms for generation of the Parkinsonian beta oscillations. However, a recent experimental study by Tachibana and colleagues yielded results which are challenging for all published computational models of beta generation. That study investigated how the presence of beta oscillations in a primate model of Parkinson's disease is affected by blocking different connections of the STN-GPe circuit. Due to the large number of experimental conditions, the study provides strong constraints that any mechanistic model of beta generation should satisfy. In this paper we present two models consistent with the data of Tachibana et al. The first model assumes that Parkinsonian beta oscillations are generated in the cortex and that the STN-GPe circuit resonates at this frequency. The second model additionally assumes that the feedback from the STN-GPe circuit to cortex is important for maintaining the oscillations in the network. Predictions are made about the experimental evidence required to differentiate between the two models, both of which are able to reproduce the firing rates, oscillation frequency and effects of lesions reported by Tachibana and colleagues. Furthermore, an analysis of the models reveals how the amplitude and frequency of the generated oscillations depend on parameters.
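
    For orientation, a generic delayed firing-rate loop of the kind used to model STN-GPe beta oscillations (an excitatory and an inhibitory population coupled with transmission delays) can be sketched as follows. The structure is only inspired by published rate models of this circuit; the weights, delays, time constants and sigmoid below are illustrative assumptions, not the parameters fitted in the paper, and whether the loop oscillates depends on gain and delay.

```python
import math

dt = 0.1e-3            # integration step: 0.1 ms
tau = 6e-3             # membrane time constant (s); illustrative
delay = 6e-3           # STN<->GPe transmission delay (s); illustrative
d = int(delay / dt)
w_gs, w_sg = 2.0, 2.0  # GPe->STN (inhibitory) and STN->GPe (excitatory) weights
ctx, stri = 30.0, 10.0 # constant cortical and striatal drives

f = lambda x: 100.0 / (1.0 + math.exp(-0.2 * x))  # sigmoid rate function (Hz)

S = [10.0] * (d + 1)   # STN rate history (needed for the delayed coupling)
G = [30.0] * (d + 1)   # GPe rate history
for step in range(100000):                        # 10 s of simulated time
    s, g = S[-1], G[-1]
    s_new = s + dt / tau * (-s + f(ctx - w_gs * G[-1 - d]))
    g_new = g + dt / tau * (-g + f(w_sg * S[-1 - d] - stri))
    S.append(s_new); G.append(g_new)

# crude frequency estimate from zero crossings of the mean-subtracted STN rate
tail = S[len(S) // 2:]
m = sum(tail) / len(tail)
crossings = sum(1 for a, b in zip(tail, tail[1:]) if (a - m) * (b - m) < 0)
print("approx. frequency: %.1f Hz" % (crossings / 2 / (len(tail) * dt)))
```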

  5. Describing and Enhancing Collaboration at the Computer

    Directory of Open Access Journals (Sweden)

    Ken Beatty

    2002-06-01

    Computer-based learning materials differ from classroom practice in that they seldom explicitly offer opportunities for collaboration. Despite this, students do collaborate, helping one another through the content and affordances of computer materials. But, in doing so, students meet with challenges. Paradoxically, these challenges can either inspire or discourage learning and second-language acquisition. This paper, based on research with twenty Hong Kong university students in a controlled experiment, evaluates challenges to collaboration at the computer as evidenced by discourse. The students were videotaped and their discourse transcribed and evaluated both qualitatively and quantitatively, according to a set of discourse markers created to describe collaborative, non-collaborative and ambiguous strategies. The paper begins by exploring the differences between collaboration and similar terms such as teamwork and cooperative learning, then goes on to define collaboration in the context of computer-assisted learning. It ends by presenting practical suggestions for software designers, teachers and students to enhance collaboration at the computer.

  6. Describing and Enhancing Collaboration at the Computer

    OpenAIRE

    Ken Beatty

    2002-01-01

    Computer-based learning materials differ from classroom practice in that they seldom explicitly offer opportunities for collaboration. Despite this, students do collaborate, helping one another through the content and affordances of computer materials. But, in doing so, students meet with challenges. Paradoxically, these challenges can either inspire or discourage learning and second-language acquisition. This paper, based on research with twenty Hong Kong university students in a controlled ...

  7. Analytical simulation platform describing projections in computed tomography systems

    International Nuclear Information System (INIS)

    Youn, Hanbean; Kim, Ho Kyung

    2013-01-01

    To reduce the patient dose, several approaches, such as spectral imaging using photon-counting detectors and statistical image reconstruction, are being considered. Although image-reconstruction algorithms may significantly enhance image quality in reconstructed images at low dose, true signal-to-noise properties are mainly determined by image quality in the projections. We are developing an analytical simulation platform describing projections to investigate how the quantum-interaction physics in each component of a CT system affects image quality in the projections. This simulator will be very useful for the economical design and optimization of CT systems as well as for the development of novel image-reconstruction algorithms. In this study, we present the progress of development of the simulation platform, with an emphasis on the theoretical framework describing the generation of projection data. We have prepared the analytical simulation platform describing projections in computed tomography systems. The remaining work planned before the meeting includes the following: each stage in the cascaded signal-transfer model for obtaining projections will be validated by Monte Carlo simulations; we will build up energy-dependent scatter and pixel-crosstalk kernels and show their effects on image quality in projections and reconstructed images; and we will investigate the effects of projections obtained under various imaging conditions and system (or detector) operation parameters on reconstructed images. It is challenging to include the interaction physics of photon-counting detectors in the simulation platform. Detailed descriptions of the simulator will be presented, with discussions on its performance and limitations as well as Monte Carlo validations. Computational cost will also be addressed in detail. The proposed method is simple and can be used conveniently in a lab environment
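
    The deterministic first stage of such a projection simulator reduces to line integrals through an attenuation map via the Beer-Lambert law; a minimal parallel-beam sketch is shown below. The phantom geometry and attenuation values are illustrative assumptions, and the cascaded stages discussed in the abstract (quantum noise, scatter, pixel crosstalk) are deliberately omitted.

```python
import numpy as np

nx = 256
mu = np.zeros((nx, nx))                      # attenuation coefficients (1/cm)
yy, xx = np.mgrid[0:nx, 0:nx]
mu[(xx - 128)**2 + (yy - 128)**2 < 80**2] = 0.2   # water-like disk
mu[(xx - 100)**2 + (yy - 128)**2 < 15**2] = 0.4   # denser insert

pixel_cm = 0.1                               # pixel size (cm); illustrative
I0 = 1e5                                     # incident photons per detector bin

# vertical parallel rays: the line integral is a column sum of mu times the
# path length per pixel; Beer-Lambert gives the transmitted mean counts
line_integrals = mu.sum(axis=0) * pixel_cm
I = I0 * np.exp(-line_integrals)
projection = -np.log(I / I0)                 # log-normalized projection data
print(projection.min(), projection.max())
```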

  8. Frameworks for understanding and describing business models

    DEFF Research Database (Denmark)

    Nielsen, Christian; Roslender, Robin

    2014-01-01

    This chapter provides in a chronological fashion an introduction to six frameworks that one can apply to describing, understanding and also potentially innovating business models. These six frameworks have been chosen carefully as they represent six very different perspectives on business models and in this manner "complement" each other. There are a multitude of varying frameworks that could be chosen from and we urge the reader to search and trial these for themselves. The six chosen models (year of release in parenthesis) are: • Service-Profit Chain (1994) • Strategic Systems Auditing (1997) • Strategy Maps (2001) • Intellectual Capital Statements (2003) • Chesbrough’s framework for Open Business Models (2006) • Business Model Canvas (2008)

  9. Model checking biological systems described using ambient calculus

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Priami, Corrado; Qualia, Paola

    2005-01-01

    Model checking biological systems described using ambient calculus. In Proc. of the second International Workshop on Computational Methods in Systems Biology (CMSB04), Lecture Notes in Bioinformatics 3082:85-103, Springer, 2005.

  10. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel

    2017-01-01

    Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.

  11. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Jan Emil Banning; Møller, Jan Kloppenborg; Morales González, Juan Miguel

    Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.
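
    A toy version of the model described in the two records above can be sketched as a two-state (parked/driving) Markov chain whose trip-start probability varies over the day. The sinusoidal diurnal profile below stands in for the B-spline-smoothed probabilities of the paper, and all rates are illustrative assumptions.

```python
import math, random

def p_start(t_min):
    """Probability of starting a trip in a 1-minute step; diurnal profile."""
    h = (t_min / 60.0) % 24
    return 0.002 + 0.008 * max(0.0, math.sin(math.pi * (h - 6) / 14))

P_END = 0.05          # probability of ending an ongoing trip per minute

state, trips = 0, 0   # 0 = parked, 1 = driving
for t in range(7 * 24 * 60):                 # one simulated week, 1-min steps
    if state == 0 and random.random() < p_start(t):
        state, trips = 1, trips + 1
    elif state == 1 and random.random() < P_END:
        state = 0
print("trips per day: %.1f" % (trips / 7))
```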

  12. Using Metaphorical Models for Describing Glaciers

    Science.gov (United States)

    Felzmann, Dirk

    2014-01-01

    To date, there has only been little conceptual change research regarding conceptions about glaciers. This study used the theoretical background of embodied cognition to reconstruct different metaphorical concepts with respect to the structure of a glacier. Applying the Model of Educational Reconstruction, the conceptions of students and scientists…

  13. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    … environmental conditions. Three cases are presented and discussed in this thesis. Common to all is the use of S. cerevisiae as model organism, and the use of cell size and cell cycle position as single-cell descriptors. The first case focuses on the experimental and mathematical description of a yeast…

  14. Describing elements of the I/O field in a computer testing program

    Directory of Open Access Journals (Sweden)

    Igor V. Loshkov

    2017-01-01

    A standard for describing the process of displaying interactive windows on a computer monitor, through which questions are output and answers are input during computer testing, is presented in the article [11]. According to the proposed standard, the description of the process mentioned above is performed with a format line containing element names, their parameters, as well as grouping and auxiliary symbols. Program objects are described using elements of the standard. The majority of objects create input and output windows on a computer monitor. The aim of our research was to develop the minimum possible set of elements of the standard to perform testing in mathematics and computer science. The choice of elements of the standard was conducted in parallel with the development and testing of the program that uses them. This approach made it possible to choose a sufficiently complete set of elements for testing in the fields of study mentioned above. For the proposed elements, names were selected in such a way that, firstly, they indicate their function and, secondly, they coincide with the names of similar elements in other programming languages. Parameters, their names, their assignments and accepted values are proposed for the elements. The principle of name selection for the parameters was the same as for elements of the standard: the names should correspond to their assignments or coincide with names of similar parameters in other programming languages. The parameters define properties of objects. In particular, while the elements of the standard create windows, the parameters define object properties (location, size, appearance) and the sequence in which windows are created. All elements of the standard proposed in this article are composed in a table, the columns of which give the names and functions of these elements. Inside the table, the elements of the standard are grouped row by row into four sets: input elements, output elements, input…

  15. Teachers of Advertising Media Courses Describe Techniques, Show Computer Applications.

    Science.gov (United States)

    Lancaster, Kent M.; Martin, Thomas C.

    1989-01-01

    Reports on a survey of university advertising media teachers regarding textbooks and instructional aids used, teaching techniques, computer applications, student placement, instructor background, and faculty publishing. (SR)

  16. HERMES: A Model to Describe Deformation, Burning, Explosion, and Detonation

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, J E

    2011-11-22

    HERMES (High Explosive Response to MEchanical Stimulus) was developed to fill the need for a model to describe an explosive response of the type described as BVR (Burn to Violent Response) or HEVR (High Explosive Violent Response). Characteristically this response leaves a substantial amount of explosive unconsumed, the time to reaction is long, and the peak pressure developed is low. In contrast, detonations characteristically consume all explosive present, the time to reaction is short, and peak pressures are high. However, most of the previous models to describe explosive response were models for detonation. The earliest models to describe the response of explosives to mechanical stimulus in computer simulations were applied to intentional detonation (performance) of nearly ideal explosives. In this case, an ideal explosive is one with a vanishingly small reaction zone. A detonation is supersonic with respect to the undetonated explosive (reactant). The reactant cannot respond to the pressure of the detonation before the detonation front arrives, so the precise compressibility of the reactant does not matter. Further, the mesh sizes that were practical for the computer resources then available were large with respect to the reaction zone. As a result, methods then used to model detonations, known as β-burn or program burn, were not intended to resolve the structure of the reaction zone. Instead, these methods spread the detonation front over a few finite-difference zones, in the same spirit that artificial viscosity is used to spread the shock front in inert materials over a few finite-difference zones. These methods are still widely used when the structure of the reaction zone and the build-up to detonation are unimportant. Later detonation models resolved the reaction zone. These models were applied both to performance, particularly as it is affected by the size of the charge, and to situations in which the stimulus was less than that needed for reliable…

  17. Statistical models describing the energy signature of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Thavlov, Anders

    2010-01-01

    Approximately one third of the primary energy production in Denmark is used for heating in buildings. Therefore efforts to accurately describe and improve energy performance of the building mass are very important. For this purpose statistical models describing the energy signature of a building, i.e. … or varying energy prices. The paper will give an overview of statistical methods and applied models based on experiments carried out in FlexHouse, which is an experimental building in SYSLAB, Risø DTU. The models are of different complexity and can provide estimates of physical quantities such as UA-values, time constants of the building, and other parameters related to the heat dynamics. A method for selecting the most appropriate model for a given building is outlined and finally a perspective of the applications is given. Acknowledgements to the Danish Energy Saving Trust and the Interreg IV "Vind i…
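
    In its simplest static form, an energy signature reduces to the steady-state balance Phi = UA * (Ti - Te), so the UA-value can be estimated by regression; a minimal sketch with synthetic data follows. The grey-box models in the paper are dynamic and also yield time constants; the numbers below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
Ti, UA_true = 21.0, 120.0                 # indoor temp (C), "true" UA (W/K)
Te = rng.uniform(-5, 15, 200)             # outdoor temperature samples (C)
phi = UA_true * (Ti - Te) + rng.normal(0, 50, 200)   # noisy heat input (W)

# least-squares estimate of UA from phi = UA * dT (regression through origin)
dT = Ti - Te
UA_hat = (dT @ phi) / (dT @ dT)
print("estimated UA: %.1f W/K" % UA_hat)
```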

  18. Models for describing the thermal characteristics of building components

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik

    2008-01-01

    …, for example. For the analysis of these tests, dynamic analysis models and methods are required. However, a wide variety of models and methods exists, and the problem of choosing the most appropriate approach for each particular case is a non-trivial and interdisciplinary task. Knowledge of a large family of these approaches may therefore be very useful for selecting a suitable approach for each particular case. This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. The choice of approach depends… The characteristics of each type of model are highlighted. Some available software tools for each of the methods described will be mentioned. A case study also demonstrating the difference between linear and nonlinear models is considered.

  19. Icosahedral symmetry described by an incommensurately modulated crystal structure model

    DEFF Research Database (Denmark)

    Wolny, Janusz; Lebech, Bente

    1986-01-01

    A crystal structure model of an incommensurately modulated structure is presented. Although six different reciprocal vectors are used to describe the model, all calculations are done in three dimensions, making calculation of the real-space structure trivial. Using this model, it is shown that both the positions of the Bragg reflections and information about the relative intensities of these reflections are in full accordance with the diffraction patterns reported for microcrystals of the rapidly quenched Al86Mn14 alloy. It is also shown that at least the local structure possesses full icosahedral…

  20. A Model Describing Stable Coherent Synchrotron Radiation in Storage Rings

    International Nuclear Information System (INIS)

    Sannibale, F.

    2004-01-01

    We present a model describing high power stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of bunch shape from the synchrotron radiation (SR), which enhances higher frequency coherent emission, and limits to stable emission due to an instability excited by the SR wakefield. It gives a quantitative explanation of several features of the recent observations of CSR at the BESSY II storage ring. We also use this model to optimize the performance of a source for stable CSR emission

  1. A model describing stable coherent synchrotron radiation in storage rings

    International Nuclear Information System (INIS)

    Sannibale, F.; Byrd, J.M.; Loftsdottir, A.; Venturini, M.; Abo-Bakr, M.; Feikes, J.; Holldack, K.; Kuske, P.; Wuestefeld, G.; Huebers, H.-W.; Warnock, R.

    2004-01-01

    We present a model describing high power stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of bunch shape from the synchrotron radiation (SR), which enhances higher frequency coherent emission, and limits to stable emission due to an instability excited by the SR wakefield. It gives a quantitative explanation of several features of the recent observations of CSR at the BESSY II storage ring. We also use this model to optimize the performance of a source for stable CSR emission

  2. A Kinetic Model Describing Injury-Burden in Team Sports.

    Science.gov (United States)

    Fuller, Colin W

    2017-12-01

    Injuries in team sports are normally characterised by the incidence, severity, and location and type of injuries sustained: these measures, however, do not provide an insight into the variable injury-burden experienced during a season. Injury burden varies according to the team's match and training loads, the rate at which injuries are sustained and the time taken for these injuries to resolve. At the present time, this time-based variation of injury burden has not been modelled. The aim of this study was to develop a kinetic model describing the time-based injury burden experienced by teams in elite team sports and to demonstrate the model's utility. Rates of injury were quantified using a large eight-season database of rugby injuries (5253) and exposure (60,085 player-match-hours) in English professional rugby. Rates of recovery from injury were quantified using time-to-recovery analysis of the injuries. The kinetic model proposed for predicting a team's time-based injury burden is based on a composite rate equation developed from the incidence of injury, a first-order rate of recovery from injury and the team's playing load. The utility of the model was demonstrated by examining common scenarios encountered in elite rugby. The kinetic model developed describes and predicts the variable injury-burden arising from match play during a season of rugby union based on the incidence of match injuries, the rate of recovery from injury and the playing load. The model is equally applicable to other team sports and other scenarios.
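
    The composite rate idea can be written as a simple balance, dB/dt = i * E(t) - k * B(t): the burden B grows with incidence i times playing load E and decays by first-order recovery k. The sketch below integrates this balance over a season; the incidence, recovery constant and exposure values are illustrative assumptions, not the fitted rugby values.

```python
i = 85.0 / 1000.0      # injuries per player-match-hour; illustrative
k = 0.25               # first-order recovery rate (1/week); illustrative
B = 0.0                # current injury burden (players unavailable)
dt = 1.0 / 7.0         # time step: one day, expressed in weeks

for week in range(30):                 # a 30-week season plus off-season tail
    E = 40.0 if week < 25 else 0.0     # player-match-hours per week
    for _ in range(7):                 # simple Euler integration, daily steps
        B += dt * (i * E - k * B)
    if week in (0, 5, 24, 29):
        print("week %2d: burden %.1f" % (week, B))
```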

  3. A new settling velocity model to describe secondary sedimentation

    DEFF Research Database (Denmark)

    Ramin, Elham; Wágner, Dorottya Sarolta; Yde, Lars

    2014-01-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS).

  4. A new settling velocity model to describe secondary sedimentation.

    Science.gov (United States)

    Ramin, Elham; Wágner, Dorottya S; Yde, Lars; Binning, Philip J; Rasmussen, Michael R; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-12-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS). In addition, correlations between the Herschel-Bulkley rheological model parameters and sludge concentration were identified with data from batch rheological experiments. A 2-D axisymmetric CFD model of a circular SST containing the new settling velocity and rheological model was validated with full-scale measurements. Finally, it was shown that the representation of compression settling in the CFD model can significantly influence the prediction of sludge distribution in the SSTs under dry- and wet-weather flow conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
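
    To fix ideas, hindered settling alone is often described by a Vesilind-type exponential, v = v0 * exp(-rh * X); a minimal sketch follows. The model developed in the paper extends this with transient and compression regimes at high concentration, and the parameter values below are illustrative assumptions rather than the calibrated ones.

```python
import numpy as np

v0 = 7.0      # maximum settling velocity (m/h); illustrative
rh = 0.45     # hindered-settling parameter (m3/kg); illustrative

X = np.linspace(0.5, 10.0, 6)          # sludge concentration (kg/m3)
v = v0 * np.exp(-rh * X)               # Vesilind hindered-settling velocity
for xi, vi in zip(X, v):
    print("X = %4.1f kg/m3 -> v = %.2f m/h" % (xi, vi))
```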

  5. A model to describe the performance of the UASB reactor.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno; Moreno, Luis; Liu, Longcheng

    2014-04-01

    A dynamic model to describe the performance of the Upflow Anaerobic Sludge Blanket (UASB) reactor was developed. It includes dispersion, advection, and reaction terms, as well as the resistances through which the substrate passes before its biotransformation. The UASB reactor is viewed as several continuous stirred tank reactors connected in series. The good agreement between experimental and simulated results shows that the model is able to predict the performance of the UASB reactor (i.e. substrate concentration, biomass concentration, granule size, and height of the sludge bed).
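
    The tanks-in-series view can be sketched directly: with first-order substrate removal, each CSTR at steady state dilutes the substrate by a factor 1/(1 + kr*V/Q). The number of tanks, volumes, flow and rate constant below are illustrative assumptions, and the dispersion and mass-transfer resistance terms of the full model are omitted.

```python
N = 5            # number of CSTRs in series; illustrative
V = 2.0          # volume of each tank (m3)
Q = 1.0          # flow rate (m3/h)
kr = 0.8         # first-order removal rate constant (1/h)
S_in = 1000.0    # influent substrate concentration (mg/L)

S = S_in
for n in range(1, N + 1):
    # steady-state balance per tank: Q*S_prev = Q*S + kr*S*V
    S = S / (1.0 + kr * V / Q)
    print("tank %d: S = %.0f mg/L" % (n, S))
```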

  6. Using the MWC model to describe heterotropic interactions in hemoglobin

    Science.gov (United States)

    Rapp, Olga

    2017-01-01

    Hemoglobin is a classical model allosteric protein. Research on hemoglobin parallels the development of key cooperativity and allostery concepts, such as the ‘all-or-none’ Hill formalism, the stepwise Adair binding formulation and the concerted Monod-Wyman-Changeux (MWC) allosteric model. While it is clear that the MWC model adequately describes the cooperative binding of oxygen to hemoglobin, rationalizing the effects of H+, CO2 or organophosphate ligands on hemoglobin-oxygen saturation using the same model remains controversial. According to the MWC model, allosteric ligands exert their effect on protein function by modulating the quaternary conformational transition of the protein. However, data fitting analysis of hemoglobin oxygen saturation curves in the presence or absence of inhibitory ligands persistently revealed effects on both relative oxygen affinity (c) and conformational changes (L), elementary MWC parameters. The recent realization that data fitting analysis using the traditional MWC model equation may not provide reliable estimates for L and c thus calls for a re-examination of previous data using alternative fitting strategies. In the current manuscript, we present two simple strategies for obtaining reliable estimates for MWC mechanistic parameters of hemoglobin steady-state saturation curves in cases of both evolutionary and physiological variations. Our results suggest that the simple MWC model provides a reasonable description that can also account for heterotropic interactions in hemoglobin. The results, moreover, offer a general roadmap for successful data fitting analysis using the MWC model. PMID:28793329
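
    The classical MWC saturation function referred to above can be evaluated directly: for a tetramer, Y = (a(1+a)^3 + Lca(1+ca)^3) / ((1+a)^4 + L(1+ca)^4) with a = pO2/KR, c = KR/KT, and L the T/R equilibrium constant. The sketch below also shows the usual representation of a heterotropic inhibitor as an increase in the apparent L; the parameter values are round illustrative numbers, not fitted hemoglobin constants.

```python
import numpy as np

def mwc_saturation(pO2, L, c, KR, n=4):
    """Fractional saturation of an n-site MWC oligomer."""
    a = pO2 / KR
    num = a * (1 + a)**(n - 1) + L * c * a * (1 + c * a)**(n - 1)
    den = (1 + a)**n + L * (1 + c * a)**n
    return num / den

pO2 = np.linspace(0.1, 100, 5)                 # partial-pressure grid (torr)
print(mwc_saturation(pO2, L=1e5, c=0.01, KR=2.0))
# an inhibitory effector that stabilizes the T state: larger apparent L,
# shifting the saturation curve to the right
print(mwc_saturation(pO2, L=1e6, c=0.01, KR=2.0))
```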

  7. Describing a Strongly Correlated Model System with Density Functional Theory.

    Science.gov (United States)

    Kong, Jing; Proynov, Emil; Yu, Jianguo; Pachter, Ruth

    2017-07-06

    The linear chain of hydrogen atoms, a basic prototype for the transition from a metal to a Mott insulator, is studied with a recent density functional theory model functional for nondynamic and strong correlation. The computed cohesive energy curve for the transition agrees well with accurate literature results. The variation of the electronic structure in this transition is characterized with a density functional descriptor that yields the atomic population of effectively localized electrons. These new methods are also applied to the study of the Peierls dimerization of the stretched, evenly spaced Mott insulator to a chain of H2 molecules, a different insulator. The transitions among the two insulating states and the metallic state of the hydrogen chain system are depicted in a semiquantitative phase diagram. Overall, we demonstrate the capability of studying strongly correlated materials with a mean-field model at the fundamental level, in contrast to the generally pessimistic view of the feasibility of doing so.

  8. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    … cell walls and are the source of biofuels and biomaterials. Our modeling investigates their properties. Quantum Mechanical Models: NREL studies chemical and electronic properties and processes to reduce barriers. Computational Modeling: NREL uses computational modeling to increase the…

  9. Conceptual hierarchical modeling to describe wetland plant community organization

    Science.gov (United States)

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh / sedge meadows status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.

  10. A general modeling framework for describing spatially structured population dynamics

    Science.gov (United States)

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance
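
    The general computer code mentioned in the abstract is not reproduced here; instead, the following is a minimal sketch of the framework's core update, local growth at each node followed by movement along weighted directed edges at discrete time steps. The three-node network, growth rates and movement matrix are invented for illustration.

```python
import numpy as np

# M[i, j] = proportion of node i's population moving to node j per time step
# (rows sum to 1, so movement conserves individuals)
M = np.array([[0.7, 0.3, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
r = np.array([1.10, 1.02, 0.95])   # per-step growth factor at each node
N = np.array([100.0, 50.0, 10.0])  # initial abundances at the three nodes

for t in range(20):
    N = (r * N) @ M                # growth at nodes, then movement along edges
print(np.round(N, 1), "total:", round(N.sum(), 1))
```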

  11. Can the "standard" unitarized Regge models describe the TOTEM data?

    CERN Document Server

    Alkin, A; Martynov, E

    2013-01-01

    The standard Regge poles are considered as inputs for two unitarization methods: eikonal and U-matrix. It is shown that only models with three input pomerons and two input odderons can describe the high energy data on $pp$ and $\bar pp$ elastic scattering, including the new data from the Tevatron and the LHC. However, it seems that both considered models require a further modification (e.g. nonlinear reggeon trajectories and/or nonexponential vertex functions) for a more satisfactory description of the data at 19.0 GeV $\leq \sqrt{s} \leq$ 7 TeV and 0.01 $\leq |t| \leq$ 14.2 GeV$^{2}$.

  12. Experimental investigation of statistical models describing distribution of counts

    International Nuclear Information System (INIS)

    Salma, I.; Zemplen-Papp, E.

    1992-01-01

    The binomial, Poisson and modified Poisson models which are used for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are considered. The validity of the Poisson and the modified Poisson statistical distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyser (4096 channels) in the multiscaling mode. According to the results, Poisson statistics describe the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, analysis of the data demonstrated, with confidence, that for long measurements (T ≥ T1/2) the Poisson distribution is not valid and the modified Poisson function is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed. Differences between the standard deviations evaluated on the basis of the Poisson and binomial models are especially significant for experiments with long measuring time (T/T1/2 ≥ 2) and/or large detection efficiency (ε > 0.30). Optimization of the measuring time for paired observations yields the same solution for either the binomial or the Poisson distribution. (orig.)
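
    A worked comparison makes the conclusion concrete: for a source of N0 atoms observed for time T with detection efficiency eps, the count is binomial with p = eps * (1 - exp(-lam*T)), of which the Poisson model is the small-p limit. The sketch below uses the 16.06 s half-life quoted above but invented N0 and eps.

```python
import math

half_life = 16.06                 # s, as for 89mY
lam = math.log(2) / half_life
N0, eps = 1e6, 0.35               # illustrative atom count and efficiency

for T in (0.5 * half_life, half_life, 2 * half_life):
    p = eps * (1.0 - math.exp(-lam * T))
    mean = N0 * p
    sd_binom = math.sqrt(N0 * p * (1 - p))   # binomial standard deviation
    sd_pois = math.sqrt(mean)                # Poisson standard deviation
    print("T = %5.1f s: mean = %8.0f, sd(binomial) = %6.0f, sd(Poisson) = %6.0f"
          % (T, mean, sd_binom, sd_pois))
```

    As the abstract states, the two standard deviations diverge as the measuring time approaches and exceeds the half-life and as the detection efficiency grows.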

  13. Communication skills training: describing a new conceptual model.

    Science.gov (United States)

    Brown, Richard F; Bylund, Carma L

    2008-01-01

    Current research in communication in physician-patient consultations is multidisciplinary and multimethodological. As this research has progressed, a considerable body of evidence on the best practices in physician-patient communication has been amassed. This evidence provides a foundation for communication skills training (CST) at all levels of medical education. Although the CST literature has demonstrated that communication skills can be taught, one critique of this literature is that it is not always clear which skills are being taught and whether those skills are matched with those being assessed. The Memorial Sloan-Kettering Cancer Center Comskil Model for CST seeks to answer those critiques by explicitly defining the important components of a consultation, based on Goals, Plans, and Actions theories and sociolinguistic theory. Sequenced guidelines as a mechanism for teaching about particular communication challenges are adapted from these other methods. The authors propose that consultation communication can be guided by an overarching goal, which is achieved through the use of a set of predetermined strategies. Strategies are common in CST; however, strategies often contain embedded communication skills. These skills can exist across strategies, and the Comskil Model seeks to make them explicit in these contexts. Separate from the skills are process tasks and cognitive appraisals that need to be addressed in teaching. The authors also describe how assessment practices foster concordance between skills taught and those assessed through careful coding of trainees' communication encounters and direct feedback.

  14. INCAS: an analytical model to describe displacement cascades

    Energy Technology Data Exchange (ETDEWEB)

    Jumel, Stephanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied on pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  15. INCAS: an analytical model to describe displacement cascades

    Science.gov (United States)

    Jumel, Stéphanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied on pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  16. INCAS: an analytical model to describe displacement cascades

    International Nuclear Information System (INIS)

    Jumel, Stephanie; Claude Van-Duysen, Jean

    2004-01-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied on pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory

  17. A model based on soil structural aspects describing the fate of genetically modified bacteria in soil

    NARCIS (Netherlands)

    Hoeven, van der N.; Elsas, van J.D.; Heijnen, C.E.

    1996-01-01

    A computer simulation model was developed which describes growth and competition of bacteria in the soil environment. In the model, soil was assumed to contain millions of pores of a few different size classes. An introduced bacterial strain, e.g. a genetically modified micro-organism (GEMMO), was

  18. A comparison of hardware description languages. [describing digital systems structure and behavior to a computer

    Science.gov (United States)

    Shiva, S. G.

    1978-01-01

    Several high level languages which evolved over the past few years for describing and simulating the structure and behavior of digital systems on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.

  19. Using multistage models to describe radiation-induced leukaemia

    International Nuclear Information System (INIS)

    Little, M.P.; Muirhead, C.R.; Boice, J.D. Jr.; Kleinerman, R.A.

    1995-01-01

    The Armitage-Doll model of carcinogenesis is fitted to data on leukaemia mortality among the Japanese atomic bomb survivors with the DS86 dosimetry and on leukaemia incidence in the International Radiation Study of Cervical Cancer patients. Two different forms of model are fitted: the first postulates up to two radiation-affected stages and the second additionally allows for the presence at birth of a non-trivial population of cells which have already accumulated the first of the mutations leading to malignancy. Among models of the first form, a model with two adjacent radiation-affected stages appears to fit the data better than other models of the first form, including both models with two affected stages in any order and models with only one affected stage. The best fitting model predicts a linear-quadratic dose-response and reductions of relative risk with increasing time after exposure and age at exposure, in agreement with what has previously been observed in the Japanese and cervical cancer data. However, on the whole it does not provide an adequate fit to either dataset. The second form of model appears to provide a rather better fit, but the optimal models have biologically implausible parameters (the number of initiated cells at birth is negative) so that this model must also be regarded as providing an unsatisfactory description of the data. (author)
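
    For reference, the standard Armitage-Doll hazard that underlies the fits described above can be written as follows; the dose dependence shown for a radiation-affected stage is the conventional linear-quadratic assumption mentioned in the abstract, not a formula taken from the paper itself.

```latex
% Standard Armitage--Doll approximation (small mutation rates): a malignant
% cell arises after k sequential mutations occurring at rates u_1,\dots,u_k
% in any of N susceptible cells, giving the age-specific hazard
\[
  h(t) \;\approx\; \frac{N\, u_1 u_2 \cdots u_k}{(k-1)!}\; t^{\,k-1}.
\]
% A radiation-affected stage j is modelled by letting u_j depend on dose,
% e.g. u_j(D) = u_j\,(1 + \alpha_j D + \beta_j D^2); with two adjacent
% dose-dependent stages this yields the linear-quadratic dose-response
% described in the abstract.
```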

  20. Early Childhood Teacher Preparation: A Tale of Authors and Multimedia, A Model of Technology Integration Described.

    Science.gov (United States)

    Wetzel, Keith; McLean, S. V.

    1997-01-01

    Describes collaboration of two teacher educators, one in early childhood language arts and one in computers in education. Discusses advantages and disadvantages and extensions of this model, including how a college-wide survey revealed that students in teamed courses are better prepared to teach and learn with technology. (DR)

  1. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  2. LCM 3.0: A Language for describing Conceptual Models

    NARCIS (Netherlands)

    Feenstra, Remco; Wieringa, Roelf J.

    1993-01-01

    The syntax of the conceptual model specification language LCM is defined. LCM uses equational logic to specify data types and order-sorted dynamic logic to specify objects with identity and mutable state. LCM specifies database transactions as finite sets of atomic object transitions.

  3. A dynamic data based model describing nephropathia epidemica in Belgium

    NARCIS (Netherlands)

    Amirpour Haredasht, S.; Barrios, J.M.; Maes, P.; Verstraeten, W.W.; Clement, J.; Ducoffre, G.; Lagrou, K.; Van Ranst, M.; Coppin, P.; Berckmans, D.; Aerts, J.M.F.G.

    2011-01-01

    Nephropathia epidemica (NE) is a human infection caused by Puumala virus (PUUV), which is naturally carried and shed by bank voles (Myodes glareolus). Population dynamics and infectious diseases in general, such as NE, have often been modelled with mechanistic SIR (Susceptible, Infective and Removed) with…

  4. A Biophysical Neural Model To Describe Spatial Visual Attention

    International Nuclear Information System (INIS)

    Hugues, Etienne; Jose, Jorge V.

    2008-01-01

    Visual scenes contain enormous spatial and temporal information that is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of the neural activity in a visual area known as V4 when the animal is paying attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive about equal, or balanced, levels of excitatory and inhibitory inputs, and that these levels should be high, as they are in in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations

  5. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks can be described as a complicated system. These systems show non-linear features, and simulating their behaviour is also difficult. Before implementing network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setting up of a non-linear simulation model that helps us to observe dataflow problems of the networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which was defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates well the main behaviours and critical parameters of the network. Based on this study, we propose to develop a new algorithm, which experimentally determines and predicts the available parameters of the modelled network.
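
    The bottleneck definition quoted above is simple enough to state in a few lines; the link capacity and the competing-traffic samples below are invented for illustration.

```python
LINK_CAPACITY = 100.0   # link capacity (Mb/s); illustrative
competing = [20.0, 45.0, 80.0, 95.0, 60.0]   # competing traffic samples (Mb/s)

for c in competing:
    # available end-to-end throughput = link capacity - competing traffic
    available = max(LINK_CAPACITY - c, 0.0)
    print("competing %5.1f Mb/s -> available %5.1f Mb/s" % (c, available))
```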

  6. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  7. Internal anatomy of the hornbill casque described by radiography, contrast radiography, and computed tomography.

    Science.gov (United States)

    Gamble, Kathryn C

    2007-03-01

    Hornbills are distinguished from most other avian taxa by the presence of a casque on the dorsal maxillary beak, which, in all but 1 of the 54 extant hornbill species, is described as essentially an air-filled cavity enclosed by minimal cancellous bone. The external casque has been described in detail, but little has been described about its internal anatomy and the communications between the casque and the paranasal sinuses. In this study, 10 intact casque and skull specimens of 7 hornbill species were collected opportunistically at necropsy. The anatomy of the casque and the skull for each of the specimens was examined by radiography, contrast radiography, and computed tomography. After imaging, 8 specimens were submitted for osteologic preparation to directly visualize the casque and the skull interior. Through this standardized review, the baseline anatomy of the internal casque was described, including identification of a novel casque sinus within the paranasal sinus system. These observations will assist clinicians in the diagnosis and treatment of diseases of the casque in hornbill species.

  8. Computational methods for describing the laser-induced mechanical response of tissue

    Energy Technology Data Exchange (ETDEWEB)

    Trucano, T.; McGlaun, J.M.; Farnsworth, A.

    1994-02-01

    Detailed computational modeling of laser surgery requires treatment of the photoablation of human tissue by high intensity pulses of laser light and the subsequent thermomechanical response of the tissue. Three distinct physical regimes must be considered to accomplish this: (1) the immediate absorption of the laser pulse by the tissue and following tissue ablation, which is dependent upon tissue light absorption characteristics; (2) the near field thermal and mechanical response of the tissue to this laser pulse, and (3) the potential far field (and longer time) mechanical response of witness tissue. Both (2) and (3) are dependent upon accurate constitutive descriptions of the tissue. We will briefly review tissue absorptivity and mechanical behavior, with an emphasis on dynamic loads characteristic of the photoablation process. In this paper our focus will center on the requirements of numerical modeling and the uncertainties of mechanical tissue behavior under photoablation. We will also discuss potential contributions that computational simulations can make in the design of surgical protocols which utilize lasers, for example, in assessing the potential for collateral mechanical damage by laser pulses.

  9. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...

  10. SAS3DC - A computer program to describe accidents in LMFBRs

    International Nuclear Information System (INIS)

    Angerer, G.; Arnecke, G.; Polch, A.

    1981-02-01

    The code system SAS3D, developed at ANL, is at present the most adequate instrument for simulating accidents in LMFBRs. SAS3DC is an improved version of this code system: the routine CLAZAS, which models the motion of the fuel cladding in SAS3D, is replaced in SAS3DC by the routine CMOT. CMOT describes the moving material in the Eulerian frame rather than the Lagrangian frame used by CLAZAS, and is thus able to register even small cladding displacements. To complete the description of the SAS3DC code, the results of some sample problems are included. (orig.) [de

  11. A model to describe potential effects of chemotherapy on critical radiobiological treatments

    International Nuclear Information System (INIS)

    Rodríguez-Pérez, D.; Desco, M.M.; Antoranz, J.C.

    2016-01-01

    Although chemo- and radiotherapy can annihilate tumors on their own, they are also used in coadjuvancy: improving the local effects of radiotherapy by using chemotherapy as a radiosensitizer. The effects of radiotherapy are well described by current radiobiological models. The goal of this work is to extend a discrete radiotherapy model, previously used to describe high radiation dose response as well as the unusual radio-responses of some types of tumors (e.g. prostate cancer), into a model of chemo+radiotherapy that can describe how the outcome of their combination is a more efficient removal of the tumor. Our hypothesis is that, although the two treatments have different mechanisms, both affect similar key points of cell metabolism and regulation that lead to cellular death. Hence, we consider a discrete model where chemotherapy may affect a fraction of the same targets destroyed by radiotherapy. Although radiotherapy reaches all cells equally, chemotherapy diffuses through a tumor, attaining lower concentrations at its center and higher at its surface. With our simulations we study the enhanced effect of the combined treatment and how it depends on the critical parameters of the tissue (the parameters of the non-extensive radiobiological model), the number of "targets" aimed at by chemotherapy, and the concentration and diffusion rate of the drug inside the tumor. The results show that an equivalent chemo-radio dose can be computed that allows prediction of the lower radiation dose that causes the same effect as a radio-only treatment. (paper)

  12. A model to describe potential effects of chemotherapy on critical radiobiological treatments

    Science.gov (United States)

    Rodríguez-Pérez, D.; Desco, M. M.; Antoranz, J. C.

    2016-08-01

    Although chemo- and radiotherapy can annihilate tumors on their own, they are also used in coadjuvancy: improving the local effects of radiotherapy by using chemotherapy as a radiosensitizer. The effects of radiotherapy are well described by current radiobiological models. The goal of this work is to extend a discrete radiotherapy model, previously used to describe high radiation dose response as well as the unusual radio-responses of some types of tumors (e.g. prostate cancer), into a model of chemo+radiotherapy that can describe how the outcome of their combination is a more efficient removal of the tumor. Our hypothesis is that, although the two treatments have different mechanisms, both affect similar key points of cell metabolism and regulation that lead to cellular death. Hence, we consider a discrete model where chemotherapy may affect a fraction of the same targets destroyed by radiotherapy. Although radiotherapy reaches all cells equally, chemotherapy diffuses through a tumor, attaining lower concentrations at its center and higher at its surface. With our simulations we study the enhanced effect of the combined treatment and how it depends on the critical parameters of the tissue (the parameters of the non-extensive radiobiological model), the number of "targets" aimed at by chemotherapy, and the concentration and diffusion rate of the drug inside the tumor. The results show that an equivalent chemo-radio dose can be computed that allows prediction of the lower radiation dose that causes the same effect as a radio-only treatment.

  13. A method for describing the doses delivered by transmission x-ray computed tomography

    International Nuclear Information System (INIS)

    Shope, T.B.; Gagne, R.M.; Johnson, G.C.

    1981-01-01

    A method for describing the absorbed dose delivered by x-ray transmission computed tomography (CT) is proposed which provides a means to characterize the dose resulting from CT procedures consisting of a series of adjacent scans. The dose descriptor chosen is the average dose at several locations in the imaged volume of the central scan of the series. It is shown that this average dose, as defined for locations in the central scan of the series, can be obtained from the integral of the single-scan dose profile perpendicular to the scan plane at those same locations. This method for estimating the average dose from a CT procedure has been evaluated as a function of the number of scans in the multiple scan procedure and of location in the dosimetry phantom, using single scan dose profiles obtained from five different types of CT systems. For the higher dose regions in the phantoms, the multiple scan dose descriptor derived from the single scan dose profiles overestimates the multiple scan average dose by no more than 10%, provided the procedure consists of at least eight scans.
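
    The central relation can be sketched in a few lines: the multiple scan average dose at a location equals the integral of the single-scan dose profile at that location divided by the scan increment, for a sufficiently long series of adjacent scans. The Gaussian profile and increment below are invented for illustration, not measured data.

        # Multiple-scan average dose (MSAD) from a single-scan dose profile;
        # the profile shape and scan increment are assumed values.
        import numpy as np

        z = np.linspace(-50.0, 50.0, 1001)       # distance from scan plane [mm]
        profile = 10.0 * np.exp(-(z / 8.0)**2)   # single-scan dose profile [mGy] (assumed)
        increment = 10.0                         # table increment between scans [mm]

        msad = np.trapz(profile, z) / increment  # average dose for the scan series [mGy]
        print(f"MSAD = {msad:.2f} mGy")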

  14. Strategy Ranges: Describing Change in Prospective Elementary Teachers' Approaches to Mental Computation of Sums and Differences

    Science.gov (United States)

    Whitacre, Ian

    2015-01-01

    This study investigated the sets of mental computation strategies used by prospective elementary teachers to compute sums and differences of whole numbers. In the context of an intervention designed to improve the number sense of prospective elementary teachers, participants were interviewed pre/post, and their mental computation strategies were…

  15. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community.

  16. Ch. 33 Modeling: Computational Thermodynamics

    International Nuclear Information System (INIS)

    Besmann, Theodore M.

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  17. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for understanding the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulators on their computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single-cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  18. Computer model for ductile fracture

    International Nuclear Information System (INIS)

    Moran, B.; Reaugh, J. E.

    1979-01-01

    A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the results are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J_Ic. A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy, and the results were compared with the empirical correlation of Rolfe and Novak.

  19. Overhead Crane Computer Model

    Science.gov (United States)

    Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.

    2018-03-01

    The paper describes a computer model of an overhead crane system. The designed overhead crane system consists of hoisting, trolley and crane mechanisms as well as a two-axis payload system. With the help of the differential equations of motion of these mechanisms, derived through Lagrange equations of the second kind, it is possible to build an overhead crane computer model. The computer model was obtained using Matlab software. Transients of position, linear speed and motor torque of the trolley and crane mechanisms were simulated. In addition, transients of payload swaying were obtained with respect to the vertical axis. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is an effective means of studying positioning control and anti-sway control systems.
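
    As an illustration of the approach (a sketch, not the authors' Matlab model), the classic trolley-payload equations obtained from Lagrange equations of the second kind, reduced to one horizontal axis, can be integrated directly; the masses, rope length, and drive-force profile below are assumed.

        # Single-axis trolley with a swinging payload, derived from Lagrange
        # equations of the second kind; parameter values are assumed.
        import numpy as np
        from scipy.integrate import solve_ivp

        M, m, L, g = 50.0, 10.0, 5.0, 9.81    # trolley mass, payload mass, rope length, gravity

        def rhs(t, y):
            x, v, th, om = y                  # trolley position/speed, sway angle/rate
            F = 100.0 if t < 2.0 else 0.0     # simple bang-off drive force [N] (assumed)
            a = (F + m*np.sin(th)*(g*np.cos(th) + L*om**2)) / (M + m*np.sin(th)**2)
            alpha = -(a*np.cos(th) + g*np.sin(th)) / L
            return [v, a, om, alpha]

        sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
        print(f"final trolley position: {sol.y[0, -1]:.2f} m, "
              f"max sway: {np.degrees(np.max(np.abs(sol.y[2]))):.1f} deg")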

  20. Validation of mathematical models to describe fluid dynamics of a cold riser by gamma ray attenuation

    International Nuclear Information System (INIS)

    Melo, Ana Cristina Bezerra Azedo de

    2004-12-01

    The fluid dynamic behavior of a riser in a cold-type FCC model was investigated by means of catalyst concentration distributions measured by gamma attenuation and simulated with a mathematical model. In the riser of the cold model (MEF), 0.032 m in diameter and 2.30 m in length, a fluidized bed whose components are air and FCC catalyst circulates. The MEF is operated under automatic control, with instruments for measuring the fluid dynamic variables. The axial catalyst concentration distribution was measured using an Am-241 gamma source and a NaI detector coupled to a multichannel analyzer provided with software for data acquisition and evaluation. The MEF was adapted for validating a fluid dynamic model which describes the flow in the riser, for example by introducing an injector for controlling the solid flow in circulation. Mathematical models were selected from the literature, analyzed and tested to simulate the fluid dynamics of the riser. A methodology for validating fluid dynamic models was studied and implemented. The stages of the work were developed according to the validation methodology: planning of the experiments, study of the equations which describe the fluid dynamics, application of computational solvers, and comparison with experimental data. Operational sequences were carried out keeping the MEF conditions for measuring catalyst concentration while simultaneously measuring the fluid dynamic variables, the velocity of the components and the pressure drop in the riser. Following this, simulated and experimental values were compared and statistical data treatment was done, aiming at the precision required to validate the fluid dynamic model. The comparison tests between experimental and simulated data were carried out under validation criteria. The fluid dynamic behavior of the riser was analyzed, and the results and their agreement with the literature were discussed. The adopted model was validated under the MEF operational conditions, for a 3 to 6 m/s gas velocity in the riser and a slip

  1. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  2. Application and validation of predictive computer programs describing the chemistry of radionuclides in the geosphere

    International Nuclear Information System (INIS)

    Waters, M.; Duffield, J.R.; Griffiths, P.J.F.; Williams, D.R.

    1991-01-01

    Chemval is an international project concerned with improving the data used to model the speciation chemistry of radionuclide migration from underground waste disposal sites. Chemval has two main aims: to produce a reliable database of thermodynamic equilibrium constants for use in such chemical modelling; to perform a series of test-case modelling exercises based upon real site and field data to verify and validate the existing tools used for simulating the chemical speciation and the transport of radionuclides in the environment

  3. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we discuss the changes planned for the restart of the LHC program in 2015, including changes planned in the use and definition of the computing tiers that were defined within the MONARC project. We present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We also discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  4. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  5. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo Lee

    2013-12-01

    We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
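
    To make the hidden Markov model idea concrete, here is a hedged sketch with an invented two-state model over a small vocabulary of trust-related cues, scored with the forward algorithm; the states, cue set, and probabilities are stand-ins, not the learned structure reported above.

        # Discrete-observation HMM over nonverbal cues, scored with the
        # forward algorithm; all states, cues, and probabilities are assumed.
        import numpy as np

        cues = {"lean-back": 0, "face-touch": 1, "arms-crossed": 2, "hand-gesture": 3}

        pi = np.array([0.8, 0.2])              # initial distribution (neutral, untrustworthy)
        T = np.array([[0.9, 0.1],              # state transition matrix
                      [0.3, 0.7]])
        E = np.array([[0.4, 0.1, 0.1, 0.4],    # cue emission probabilities per state
                      [0.1, 0.4, 0.4, 0.1]])

        def log_likelihood(obs):
            """Forward algorithm: log P(observation sequence | model)."""
            alpha = pi * E[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ T) * E[:, o]
            return np.log(alpha.sum())

        seq = [cues[c] for c in ("face-touch", "arms-crossed", "face-touch")]
        print(f"log-likelihood: {log_likelihood(seq):.3f}")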

  6. Development of computational methods to describe the mechanical behavior of PWR fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Wanninger, Andreas; Seidl, Marcus; Macian-Juan, Rafael [Technische Univ. Muenchen, Garching (Germany). Dept. of Nuclear Engineering

    2016-10-15

    To investigate the static mechanical response of PWR fuel assemblies (FAs) in the reactor core, a structural FA model is being developed using the FEM code ANSYS Mechanical. To assess the capabilities of the model, lateral deflection tests are performed for a reference FA. For this purpose we distinguish between two environments, in-laboratory and in-reactor for different burn-ups. The results are in qualitative agreement with experimental tests and show the stiffness decrease of the FAs during irradiation in the reactor core.

  7. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  8. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos Modelling with Computers: Unpredictable Behaviour of Deterministic Systems. Balakrishnan Ramasamy, T S K V Iyer. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp 29-39.

  9. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  10. A Minimal Model Describing Hexapedal Interlimb Coordination: The Tegotae-Based Approach

    Directory of Open Access Journals (Sweden)

    Dai Owaki

    2017-06-01

    Insects exhibit adaptive and versatile locomotion despite their minimal neural computing. Such locomotor patterns are generated via coordination between leg movements, i.e., an interlimb coordination, which is largely controlled in a distributed manner by neural circuits located in the thoracic ganglia. However, the mechanism responsible for the interlimb coordination still remains elusive. Understanding this mechanism will help us to elucidate the fundamental control principle of animals' agile locomotion and to realize robots with legs that are truly adaptive and could not be developed solely by conventional control theories. This study aims at providing a "minimal" model of the interlimb coordination mechanism underlying hexapedal locomotion, in the hope that a single control principle could satisfactorily reproduce various aspects of insect locomotion. To this end, we introduce a novel concept we named "Tegotae," a Japanese concept describing the extent to which a perceived reaction matches an expectation. By using the Tegotae-based approach, we show that a surprisingly systematic design of the local sensory feedback mechanisms essential for interlimb coordination can be realized. We also use a hexapod robot we developed to show that our mathematical model of the interlimb coordination mechanism satisfactorily reproduces various insects' gait patterns.
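
    A minimal sketch of a Tegotae-style rule follows (an assumption-level reading of the approach, not the authors' exact controller): each leg carries a phase oscillator whose rate is modulated by the load the leg senses, e.g. dφ_i/dt = ω − σ N_i cos φ_i. The load model, gain, and stance convention below are all assumed, and the snippet only shows the structure of the rule, not a validated gait.

        # Six coupled leg oscillators with Tegotae-style local load feedback;
        # the load distribution and all parameters are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        omega, sigma, dt = 2*np.pi, 0.5, 1e-3
        phi = rng.uniform(0, 2*np.pi, 6)             # one phase per leg

        for _ in range(20000):
            stance = np.sin(phi) < 0.0               # crude stance detection (assumed)
            n_st = max(stance.sum(), 1)
            N = np.where(stance, 1.0/n_st, 0.0)      # body weight shared by stance legs
            phi += dt*(omega - sigma*N*np.cos(phi))  # Tegotae-style feedback term

        print(np.round(np.sort(phi % (2*np.pi)), 2)) # settled phase pattern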

  11. Fast fission phenomenon, deep inelastic reactions and compound nucleus formation described within a dynamical macroscopic model

    International Nuclear Information System (INIS)

    Gregoire, C.; Ngo, C.; Remaud, B.

    1982-01-01

    We present a dynamical model to describe dissipative heavy ion reactions. It treats explicitly the relative motion of the two ions, the mass asymmetry of the system and the projection of the isospin of each ion. The deformations, which are induced during the collision, are simulated with a time-dependent interaction potential. This is done by a time-dependent transition between a sudden interaction potential in the entrance channel and an adiabatic potential in the exit channel. The model allows us to compute the compound-nucleus cross section and multidifferential cross-sections for deep inelastic reactions. In addition, for some systems, and under certain conditions which are discussed in detail, a new dissipative heavy ion collision appears: fast-fission phenomenon which has intermediate properties between deep inelastic and compound nucleus reactions. The calculated properties concerning fast fission are compared with experimental results and reproduce some of those which could not be understood as belonging to deep inelastic or compound-nucleus reactions. (orig.)

  12. Review of reactive kinetic models describing reductive dechlorination of chlorinated ethenes in soil and groundwater

    DEFF Research Database (Denmark)

    Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup; Scheutz, Charlotte

    2013-01-01

    Reductive dechlorination is a major degradation pathway of chlorinated ethenes in anaerobic subsurface environments, and reactive kinetic models describing the degradation process are needed in fate and transport models of these contaminants. However, reductive dechlorination is a complex biologi...

  13. Describing to compute

    Directory of Open Access Journals (Sweden)

    Adriana Rossi

    2012-06-01

    In this paper the advantages offered by the possibility of generating complex surfaces from two-dimensional geometries by means of CAD software are discussed. Two case studies are presented to show the hypothetical variation of three primary choice cycles: the study of basic geometries (a), the paths along which the geometries are swept (b), and the places occupied by the sections lofting the paths (c). The strong innovation contained in the continuity of the invention process is deeply appreciated. This is especially true when that process is not the result of habit and finds its roots in the principles and criteria of geometry. Nothing is left to improvisation in this discipline: every concept is based on mathematical calculus.

  14. A model describing water and salt migration in concrete during wetting/drying cycles

    NARCIS (Netherlands)

    Arends, T.; Taher, A.; van der Zanden, A.J.J.; Brouwers, H.J.H.; Bilek, V.; Kersner, Z.

    2014-01-01

    In order to predict the life span of concrete structures, models describing the migration of chloride are needed. In this paper, a start is made with a simple, theoretical model describing water and chloride transport in a concrete sample. First, transport of water in concrete is considered with

  15. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX) at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  16. Endorsement of Models Describing Sexual Response of Men and Women with a Sexual Partner

    DEFF Research Database (Denmark)

    Giraldi, Annamaria; Kristensen, Ellids; Sand, Michael

    2015-01-01

    INTRODUCTION: Several models have been used to describe men's and women's sexual responses. These models have been conceptualized as linear or circular models. The circular models were proposed to describe women's sexual function best. AIM: This study aims to determine whether men and women thought that current theoretical models of sexual responses accurately reflected their own sexual experience and to what extent this was influenced by sexual dysfunction. METHODS: A cross-sectional study of a large, broadly sampled, nonclinical cohort of Danish men and women. The Female Sexual Function... The majority of men and women with no sexual..., erectile dysfunction and dissatisfaction with sexual life were significantly related to endorsement of the Basson model or none of the models (P = 0.01). CONCLUSIONS: No single model of sexual response could describe men's and women's sexual responses.

  17. Cosmic logic: a computational model

    International Nuclear Information System (INIS)

    Vanchurin, Vitaly

    2016-01-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  18. Review Of Applied Mathematical Models For Describing The Behaviour Of Aqueous Humor In Eye Structures

    Science.gov (United States)

    Dzierka, M.; Jurczak, P.

    2015-12-01

    In the paper, currently used methods for modeling the flow of the aqueous humor through eye structures are presented. A computational model based on rheological models of Newtonian and non-Newtonian fluids is then proposed. The proposed model may be used for modeling the flow of the aqueous humor through the trabecular meshwork, which is modeled as an array of rectilinear parallel capillary tubes. The flow of both Newtonian and non-Newtonian fluids is considered. As a result of the discussion, mathematical equations for the permeability of porous media and for the velocity of fluid flow through porous media are obtained.
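
    For the Newtonian case, the capillary-bundle idealization reduces to Hagen-Poiseuille flow through parallel tubes. The sketch below uses invented tube counts and dimensions (not measured trabecular meshwork values) to compute the flow rate and the equivalent Darcy permeability of the bundle.

        # Hagen-Poiseuille flow through an array of parallel capillary tubes
        # and the equivalent Darcy permeability; all dimensions are assumed.
        import math

        n = 5000        # number of parallel capillary tubes (assumed)
        r = 2e-6        # tube radius [m] (assumed)
        Lt = 1e-4       # tube length [m] (assumed)
        mu = 7.5e-4     # aqueous humor viscosity [Pa s] (approximate)
        dP = 266.0      # pressure drop [Pa], about 2 mmHg (assumed)
        A = 1e-6        # total meshwork cross-sectional area [m^2] (assumed)

        Q = n * math.pi * r**4 * dP / (8 * mu * Lt)   # total volumetric flow [m^3/s]
        k = n * math.pi * r**4 / (8 * A)              # Darcy permeability [m^2]
        print(f"flow rate: {Q * 6e10:.2f} uL/min, permeability: {k:.2e} m^2")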

  19. The Antares computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)

    2013-10-11

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  20. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot.

    Science.gov (United States)

    Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This

  1. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jantsje H. Pasma

    2018-03-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a

  2. Ecohydrology in Mediterranean areas: a numerical model to describe growing seasons out of phase with precipitations

    Directory of Open Access Journals (Sweden)

    D. Pumo

    2008-02-01

    The probabilistic description of soil moisture dynamics is a relatively new topic in hydrology. The most common ecohydrological models start from a stochastic differential equation describing the soil water balance, where the unknown quantity, the soil moisture, depends on both space and time. Most of the solutions existing in the literature are obtained in a probabilistic framework and under steady-state conditions; even if this last condition makes the problem analytically tractable, it considerably simplifies the problem and reduces its generality.

    The steady-state hypothesis appears perfectly applicable in arid and semiarid climatic areas like those of the African or Central American savannas, but it seems to be no longer valid in areas with a Mediterranean climate, where, notoriously, the wet season precedes the growing season, recharging water into the soil. The moisture stored at the beginning of the growing season (known as the soil moisture initial condition) is of great importance, especially for deep-rooted vegetation, enabling survival in the absence of rainfall during the growing season and keeping the water stress low during the first period of that season.

    The aim of this paper is to analyze the soil moisture dynamics using a simple non-steady numerical ecohydrological model. The numerical model proposed here is able to reproduce the soil moisture probability density functions obtained analytically in previous studies for different climates and soils under steady-state conditions; consequently, it can be used to compute both the soil moisture time profile and the vegetation static water stress time profile in non-steady conditions.

    Here the differences between the steady-analytical and the non-steady numerical probability density functions are analyzed, showing how the proposed numerical model is able to capture the effects of winter recharge on the soil moisture. The dynamic

  3. RFQ modeling computer program

    International Nuclear Information System (INIS)

    Potter, J.M.

    1985-01-01

    The mathematical background for a multiport-network-solving program is described. A method for accurately numerically modeling an arbitrary, continuous, multiport transmission line is discussed. A modification to the transmission-line equations to accommodate multiple rf drives is presented. An improved model for the radio-frequency quadrupole (RFQ) accelerator that corrects previous errors is given. This model permits treating the RFQ as a true eight-port network for simplicity in interpreting the field distribution and ensures that all modes propagate at the same velocity in the high-frequency limit. The flexibility of the multiport model is illustrated by simple modifications to otherwise two-dimensional systems that permit modeling them as linear chains of multiport networks
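
    The numerical idea can be suggested with a much simpler two-port sketch: a continuous line is approximated by a cascade of short lumped ABCD sections, which generalizes to the multiport chains described above. The frequency, per-metre inductance and capacitance, and section count here are assumed, and a real RFQ model would use the eight-port generalization.

        # A uniform transmission line as a cascade of short lumped L-C
        # two-port (ABCD) sections; all line parameters are assumed.
        import numpy as np

        f = 400e6                       # drive frequency [Hz] (assumed)
        w = 2 * np.pi * f
        L, C = 2.5e-7, 1.0e-10          # per-metre inductance [H/m] and capacitance [F/m] (assumed)
        dz, n = 0.01, 100               # section length [m], number of sections

        section = np.array([[1.0, 1j*w*L*dz],
                            [1j*w*C*dz, 1.0]])       # short-section ABCD matrix
        line = np.linalg.matrix_power(section, n)    # cascade for a 1 m line
        print(np.round(line, 3))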

  4. A comparison of macroscopic models describing the collective response of sedimenting rod-like particles in shear flows

    KAUST Repository

    Helzel, Christiane; Tzavaras, Athanasios

    2016-01-01

    We consider a kinetic model, which describes the sedimentation of rod-like particles in dilute suspensions under the influence of gravity, presented in Helzel and Tzavaras (submitted for publication). Here we restrict our considerations to shear flow and consider a simplified situation, where the particle orientation is restricted to the plane spanned by the direction of shear and the direction of gravity. For this simplified kinetic model we carry out a linear stability analysis and we derive two different nonlinear macroscopic models which describe the formation of clusters of higher particle density. One of these macroscopic models is based on a diffusive scaling, the other one is based on a so-called quasi-dynamic approximation. Numerical computations, which compare the predictions of the macroscopic models with the kinetic model, complete our presentation.

  5. A comparison of macroscopic models describing the collective response of sedimenting rod-like particles in shear flows

    KAUST Repository

    Helzel, Christiane

    2016-07-22

    We consider a kinetic model, which describes the sedimentation of rod-like particles in dilute suspensions under the influence of gravity, presented in Helzel and Tzavaras (submitted for publication). Here we restrict our considerations to shear flow and consider a simplified situation, where the particle orientation is restricted to the plane spanned by the direction of shear and the direction of gravity. For this simplified kinetic model we carry out a linear stability analysis and we derive two different nonlinear macroscopic models which describe the formation of clusters of higher particle density. One of these macroscopic models is based on a diffusive scaling, the other one is based on a so-called quasi-dynamic approximation. Numerical computations, which compare the predictions of the macroscopic models with the kinetic model, complete our presentation.

  6. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  7. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their demands on processing and storage. We present the hybrid computing model of IceCube, which leverages Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at UW-Madison serves as the Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  8. A revised multi-Fickian moisture transport model to describe non-Fickian effects in wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund; Damkilde, Lars; Svensson, Staffan

    2007-01-01

    This paper presents a study and a refinement of the sorption rate model in a so-called multi-Fickian or multi-phase model. This type of model describes the complex moisture transport system in wood, which consists of separate water vapor and bound-water diffusion interacting through sorption...... sorption allow a simplification of the system to be modeled by a single Fickian diffusion equation. To determine the response of the system, the sorption rate model is essential. Here the function modeling the moisture-dependent adsorption rate is investigated based on existing experiments on thin wood...

  9. CLIC Detector Concepts as described in the CDR: Differences between the GEANT4 and Engineering Models

    CERN Document Server

    Elsener, K; Schlatter, D; Siegrist, N

    2011-01-01

    The CLIC_ILD and CLIC_SiD detector concepts as used for the CDR Vol. 2 in 2011 exist both in GEANT4 simulation models and in engineering layout drawings. At this early stage of a conceptual design, there are inevitably differences between these models, which are described in this note.

  10. Numerical model describing the heat transfer between combustion products and ventilation-system duct walls

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    A package of physical models simulating the heat transfer processes occurring between combustion gases and ducts in ventilation systems is described. The purpose of the numerical model is to predict how the combustion gas in a system heats up or cools down as it flows through the ducts of a ventilation system under fire conditions. The model treats a duct with combustion gases in forced convection on the inside and stagnant ambient air on the outside. The model is composed of five submodels of heat transfer processes along with a numerical solution procedure to evaluate them. Each of these processes is evaluated independently using standard correlations based on experimental data. The details of the physical assumptions, simplifications, and ranges of applicability of the correlations are described. A typical application of this model to a full-scale fire test is discussed, and model predictions are compared with selected experimental data.
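
    As an example of the standard correlations such submodels rely on, the sketch below evaluates a Dittus-Boelter forced-convection coefficient for hot gas inside a duct; the duct size and gas properties are assumed values, not those of the referenced fire test.

        # Dittus-Boelter correlation for forced convection inside a duct;
        # duct geometry and gas-state values are assumed.
        D = 0.3                  # duct hydraulic diameter [m] (assumed)
        v = 10.0                 # gas velocity [m/s] (assumed)
        rho, mu = 0.7, 2.3e-5    # density [kg/m^3], viscosity [Pa s] (hot air, approx.)
        k, Pr = 0.038, 0.69      # thermal conductivity [W/(m K)], Prandtl number

        Re = rho * v * D / mu
        Nu = 0.023 * Re**0.8 * Pr**0.3   # exponent 0.3 for a fluid being cooled
        h = Nu * k / D                   # convective coefficient [W/(m^2 K)]
        print(f"Re = {Re:.3e}, Nu = {Nu:.1f}, h = {h:.1f} W/(m^2 K)")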

  11. A standard protocol for describing individual-based and agent-based models

    Science.gov (United States)

    Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.

    2006-01-01

    Simulation models that describe autonomous individual organisms (individual-based models, IBMs) or agents (agent-based models, ABMs) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD a first step towards establishing a more detailed common format for the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.
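
    To suggest how the ODD elements map onto code, here is a toy individual-based model (a random-walk forager invented for illustration, not an example from the paper) with comments naming the corresponding ODD elements.

        # Purpose: illustrate ODD-structured model code with a toy forager IBM.
        # State variables and scales: agents hold (x, y, energy) on a 20x20 grid;
        # the model runs for 100 time steps.
        import random

        random.seed(42)
        SIZE = 20

        class Forager:
            def __init__(self):
                # Initialization: random position, fixed starting energy.
                self.x = random.randrange(SIZE)
                self.y = random.randrange(SIZE)
                self.energy = 10.0

            def step(self, food):
                # Submodel "move": unbiased random walk on a toroidal grid.
                self.x = (self.x + random.choice((-1, 0, 1))) % SIZE
                self.y = (self.y + random.choice((-1, 0, 1))) % SIZE
                # Submodel "feed": eat any food on the cell, pay a metabolic cost.
                self.energy += food.pop((self.x, self.y), 0.0) - 0.5

        # Input: a fixed random food landscape.
        food = {(random.randrange(SIZE), random.randrange(SIZE)): 5.0 for _ in range(80)}
        agents = [Forager() for _ in range(10)]

        # Process overview and scheduling: every agent acts once per time step.
        for _ in range(100):
            for agent in agents:
                agent.step(food)

        print("mean energy:", round(sum(a.energy for a in agents) / len(agents), 2))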

  12. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption.

    Science.gov (United States)

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Mardor, Yael; Miklavcic, Damijan

    2016-03-01

    Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of MRIs of rat brains following electroporation treatments. Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r² = 0.79; p < …). For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup.
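
    A hedged sketch of a Peleg-Fermi-type dependence, the statistical core such a model builds on: the survival probability is a logistic function of the local electric field, with a critical field and steepness that decay with the number of pulses. The parameter values below are assumptions, not the fitted values from the rat experiments.

        # Peleg-Fermi-type survival curve versus electric field and pulse
        # number; parameter values are assumed for illustration.
        import numpy as np

        def survival(E, n, Ec0=1000.0, k1=0.03, A0=200.0, k2=0.02):
            """Survival fraction at field E [V/cm] after n pulses (assumed units)."""
            Ec = Ec0 * np.exp(-k1 * n)   # critical field decays with pulse number
            A = A0 * np.exp(-k2 * n)     # curve steepness also depends on n
            return 1.0 / (1.0 + np.exp((E - Ec) / A))

        E = np.linspace(0.0, 2000.0, 5)
        for n in (10, 90):
            print(n, np.round(survival(E, n), 3))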

  13. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principal makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  14. Plasticity modeling & computation

    CERN Document Server

    Borja, Ronaldo I

    2013-01-01

    There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.

  15. Verification and validation of predictive computer programs describing the near and far-field chemistry of radioactive waste disposal systems

    International Nuclear Information System (INIS)

    Read, D.; Broyd, T.W.

    1988-01-01

    This paper provides an introduction to CHEMVAL, an international project concerned with establishing the applicability of chemical speciation and coupled transport models to the simulation of realistic waste disposal situations. The project aims to validate computer-based models quantitatively by comparison with laboratory and field experiments. Verification of the various computer programs employed by research organisations within the European Community is ensured through close inter-laboratory collaboration. The compilation and review of thermodynamic data forms an essential aspect of this work and has led to the production of an internally consistent standard CHEMVAL database. The sensitivity of results to variation in fundamental constants is being monitored at each stage of the project and, where feasible, complementary laboratory studies are used to improve the data set. Currently, thirteen organisations from five countries are participating in CHEMVAL which forms part of the Commission of European Communities' MIRAGE 2 programme of research. (orig.)

  16. A consilience model to describe N2O production during biological N removal

    DEFF Research Database (Denmark)

    Domingo Felez, Carlos; Smets, Barth F.

    2016-01-01

    Nitrous oxide (N2O), a potent greenhouse gas, is produced during biological nitrogen conversion in wastewater treatment operations. Complex mechanisms underlie N2O production by autotrophic and heterotrophic organisms, which continue to be unravelled. Mathematical models that describe nitric oxide...... (NO) and N2O dynamics have been proposed. Here, a first comprehensive model that considers all relevant NO and N2O production and consumption mechanisms is proposed. The model describes autotrophic NO production by ammonia oxidizing bacteria associated with ammonia oxidation and with nitrite reduction......, followed by NO reduction to N2O. It also considers NO and N2O as intermediates in heterotrophic denitrification in a 4-step model. Three biological NO and N2O production pathways are accounted for, improving the capabilities of existing models while not increasing their complexity. Abiotic contributions...

  17. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
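
    A minimal sketch of the second-order logistic response function described above, assuming a two-dimensional stimulus subspace; the parameter names and values (J, h, b) are illustrative assumptions, not values from the study:

    ```python
    import numpy as np

    def p_spike(s, J, h, b):
        """Second-order maximum-noise-entropy model for a binary output:
        the spike probability is a logistic function quadratic in the stimulus."""
        quad = s @ J @ s        # second-order term (pairwise stimulus constraints)
        lin = h @ s             # first-order term (mean stimulus constraint)
        return 1.0 / (1.0 + np.exp(-(b + lin + quad)))

    # Illustrative parameters for the two relevant stimulus dimensions
    J = np.array([[0.8, -0.3], [-0.3, 0.2]])
    h = np.array([1.5, -0.5])
    b = -1.0
    s = np.array([0.4, 1.1])    # stimulus projected onto the two dimensions
    print(p_spike(s, J, h, b))  # predicted spike probability
    ```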

  18. Describing the processes of propagation and eliminating wildfires with the use of agent models

    Directory of Open Access Journals (Sweden)

    G. A. Dorrer

    2017-10-01

    Full Text Available A new method of describing the processes of propagation and elimination of wildfires on the basis of agent-based modeling is proposed. The main structural units of such models are classes of active objects (agents). The agent approach, combined with Geographic Information Systems (GIS), can effectively describe the interaction of a large number of participants in the process of combating wildfires: the spreading fire, fire crews, machinery, aerial means and others. In this paper we propose a multi-agent model to predict the spread of the wildfire edge and to simulate the direct method of extinguishing a ground fire with non-mechanized crews. The model consists of two classes of agents, designated A and B. The burning fire edge is represented as a chain of A-agents, each of which simulates the burning of an elementary portion of vegetation fuel. Fire front movement (the motion of A-agents) is described by the Hamilton-Jacobi equation using indicatrices of the normal rate of spread (figurotrices). The configuration of the front is calculated with a moving-grid algorithm. Agents of the other type, B-agents, describe the extinguishing process; they move toward the A-agents and act on them, reducing the combustion intensity to zero. The modeling system is represented as a two-level coloured nested Petri net, which describes the semantics of the agents' interaction. This model is implemented as a GIS-oriented software system that can be useful both in firefighting management and in training staff in wildfire suppression tactics. Some examples of modeling decision making on extinguishing a ground fire are presented.

  19. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  20. A two component model describing nucleon structure functions in the low-x region

    Energy Technology Data Exchange (ETDEWEB)

    Bugaev, E.V. [Institute for Nuclear Research of the Russian Academy of Sciences, 7a, 60th October Anniversary prospect, Moscow 117312 (Russian Federation); Mangazeev, B.V. [Irkutsk State University, 1, Karl Marx Street, Irkutsk 664003 (Russian Federation)

    2009-12-15

    A two component model describing the electromagnetic nucleon structure functions in the low-x region, based on generalized vector dominance and color dipole approaches, is briefly described. The model operates with the mesons of the rho family, having a mass spectrum of the form m_n^2 = m_rho^2 (1 + 2n), and takes into account the nondiagonal transitions in meson-nucleon scattering. Special cut-off factors are introduced in the model to exclude the gamma-qq̄-V transitions in the case of narrow qq̄ pairs. For the color dipole part of the model the well known FKS parameterization is used.
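
    For readability, the quoted rho-family mass spectrum in standard notation:

    ```latex
    m_n^2 = m_\rho^2\,(1 + 2n), \qquad n = 0, 1, 2, \ldots
    ```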

  1. A STRUCTURAL MODEL DESCRIBING CHINESE TRADESMEN'S ATTITUDES TOWARDS GREEK STUDENTS' CONSUMPTION BEHAVIOR

    Directory of Open Access Journals (Sweden)

    Sofia D. ANASTASIADOU

    2012-12-01

    Full Text Available This study evaluates the opinions of 43 Chinese tradesmen describing the main factors that influence Greek consumers' behavior. A structural model was constructed to represent the relationships between consumer components. The model was tested for its convergent and discriminant validity. Moreover, it was tested for its reliability and construct reliability. The findings from this study may be used by Chinese tradesmen to develop their marketing campaigns and customer relations.

  2. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    Science.gov (United States)

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.

  3. Robustness of a cross contamination model describing transfer of pathogens during grinding of meat

    DEFF Research Database (Denmark)

    Møller, Cleide Oliveira de Almeida; Sant’Ana, A. S.; Hansen, Solvej Katrine Holm

    2016-01-01

    This study aimed to evaluate a cross contamination model for its capability of describing transfer of Salmonella spp. and L. monocytogenes during grinding of varying sizes and numbers of pieces of meats in two grinder systems. Data from 19 trials were collected. Three evaluation approaches were...

  4. Robustness of a cross contamination model describing transfer of pathogens during grinding of meat

    DEFF Research Database (Denmark)

    Møller, Cleide Oliveira de Almeida; Sant’Ana, A. S.; Hansen, Solvej Katrine Holm

    2016-01-01

    This study aimed to evaluate a cross contamination model for its capability of describing transfer of Salmonella spp. and L. monocytogenes during grinding of varying sizes and numbers of pieces of meats in two grinder systems. Data from 19 trials were collected. Three evaluation approaches were applied… It was found that grinding was influenced by the sharpness of the grinder knife, the specific grinder and the grinding temperature…

  5. Development of a Pharmacokinetic Model to Describe the Complex Pharmacokinetics of Pazopanib in Cancer Patients

    NARCIS (Netherlands)

    Yu, Huixin; van Erp, Nielka; Bins, Sander; Mathijssen, Ron H J; Schellens, Jan H M; Beijnen, Jos H.; Steeghs, Neeltje; Huitema, Alwin D R

    Background and Objective: Pazopanib is a multi-targeted anticancer tyrosine kinase inhibitor. This study was conducted to develop a population pharmacokinetic (popPK) model describing the complex pharmacokinetics of pazopanib in cancer patients. Methods: Pharmacokinetic data were available from 96

  6. Development of a Pharmacokinetic Model to Describe the Complex Pharmacokinetics of Pazopanib in Cancer Patients

    NARCIS (Netherlands)

    Yu, H.; Erp, N. van; Bins, S.; Mathijssen, R.H.; Schellens, J.H.; Beijnen, J.H.; Steeghs, N.; Huitema, A.D.

    2017-01-01

    BACKGROUND AND OBJECTIVE: Pazopanib is a multi-targeted anticancer tyrosine kinase inhibitor. This study was conducted to develop a population pharmacokinetic (popPK) model describing the complex pharmacokinetics of pazopanib in cancer patients. METHODS: Pharmacokinetic data were available from 96

  7. Structural Models Describing Placebo Treatment Effects in Schizophrenia and Other Neuropsychiatric Disorders

    NARCIS (Netherlands)

    Reddy, Venkatesh Pilla; Kozielska, Magdalena; Johnson, Martin; Vermeulen, An; de Greef, Rik; Liu, Jing; Groothuis, Geny M. M.; Danhof, Meindert; Proost, Johannes H.

    2011-01-01

    Large variation in placebo response within and among clinical trials can substantially affect conclusions about the efficacy of new medications in psychiatry. Developing a robust placebo model to describe the placebo response is important to facilitate quantification of drug effects, and eventually

  8. Describing Growth Pattern of Bali Cows Using Non-linear Regression Models

    Directory of Open Access Journals (Sweden)

    Mohd. Hafiz A.W

    2016-12-01

    Full Text Available The objective of this study was to evaluate the best fit non-linear regression model to describe the growth pattern of Bali cows. Estimates of asymptotic mature weight, rate of maturing and constant of integration were derived from Brody, von Bertalanffy, Gompertz and Logistic models, which were fitted to cross-sectional data of body weight taken from 74 Bali cows raised in MARDI Research Station Muadzam Shah Pahang. Coefficient of determination (R2) and residual mean squares (MSE) were used to determine the best fit model in describing the growth pattern of Bali cows. The von Bertalanffy model was the best of the four growth functions evaluated to determine the mature weight of Bali cattle, as shown by the highest R2 and lowest MSE values (0.973 and 601.9, respectively), followed by the Gompertz (0.972 and 621.2, respectively), Logistic (0.971 and 648.4, respectively) and Brody (0.932 and 660.5, respectively) models. The correlation between rate of maturing and mature weight was found to be negative, in the range of -0.170 to -0.929 for all models, indicating that animals of heavier mature weight had a lower rate of maturing. The use of a non-linear model could summarize the weight-age relationship into several biologically interpretable parameters, compared to the entire lifespan of weight-age data points that are difficult and time consuming to interpret.
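
    As an illustration of the fitting procedure behind such comparisons, a minimal sketch using scipy to fit the von Bertalanffy curve W(t) = A(1 - b e^(-kt))^3 and compute R2 and MSE; the weight-age data below are hypothetical placeholders, not the study's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def von_bertalanffy(t, A, b, k):
        """W(t) = A*(1 - b*exp(-k*t))**3: A = asymptotic mature weight,
        k = rate of maturing, b = constant of integration."""
        return A * (1.0 - b * np.exp(-k * t)) ** 3

    # Hypothetical cross-sectional weight-age data (months, kg)
    age = np.array([6, 12, 18, 24, 36, 48, 60], dtype=float)
    weight = np.array([70, 110, 150, 180, 220, 245, 255], dtype=float)

    params, _ = curve_fit(von_bertalanffy, age, weight, p0=(300, 0.8, 0.05))
    pred = von_bertalanffy(age, *params)
    mse = np.mean((weight - pred) ** 2)   # residual mean squares
    r2 = 1 - np.sum((weight - pred) ** 2) / np.sum((weight - weight.mean()) ** 2)
    print(params, r2, mse)                # compare candidate models by R2 and MSE
    ```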

  9. Comparison of six different models describing survival of mammalian cells after irradiation

    International Nuclear Information System (INIS)

    Sontag, W.

    1990-01-01

    Six different cell-survival models have been compared. All models are based on the same assumption, that irradiated cells can exist in one of three states: S_A is the state of a totally repaired cell, in state S_C the cell contains lethal lesions, and in state S_B the cell contains potentially lethal lesions, i.e. those which can either be repaired or converted into lethal lesions. The differences between the six models lie in the different mathematical relationships between the three states. To test the six models, six different sets of experimental data were used which describe cell survival at different repair times after irradiation with sparsely ionizing radiation. In order to compare the models, a goodness-of-fit function was used. The differences between the six models were tested by use of the nonparametric Mann-Whitney two-sample test. Based on the 95% confidence limit, this required separation into three groups. (orig.)

  10. Rapid-relocation model for describing high-fluence retention of rare gases implanted in solids

    Science.gov (United States)

    Wittmaack, K.

    2009-09-01

    to be due to bombardment induced relocation and reemission; only the remaining 10% (or less) can be attributed to sputter erosion. The relocation efficiency is interpreted as the 'speed' of radiation enhanced diffusion towards the surface. The directionality of diffusion is attributed to the gradient of the defect density on the large-depth side of the damage distribution, where most of the implanted rare gas atoms come to rest. Based on SRIM calculations, two representative parameters are defined: the peak number of lattice displacements, N_d,m, and the spacing, Δz_r,d, between the peaks of the range and damage distributions. Support in favour of rapid rare gas relocation by radiation enhanced diffusion is provided by the finding that the relocation efficiencies for Ar and Xe, which vary by up to one order of magnitude, scale as Ψ = k·N_d,m/Δz_r,d, independent of the implantation energy (10-80 keV Ar, 10-500 keV Xe), within an error margin of only ±15%. The parameter k contains the properties of the implanted rare gas atoms. A recently described computer simulation model, which assumed that the pressure established by the implanted gas drives reemission, is shown to reproduce measured Xe profiles quite well, but only at the energy at which the fitting parameter of the model was determined (140 keV). Using the same parameter at other energies, deviations by up to a factor of four are observed.

  11. Is coronene better described by Clar's aromatic π-sextet model or by the AdNDP representation?

    Science.gov (United States)

    Kumar, Anand; Duran, Miquel; Solà, Miquel

    2017-07-05

    The bonding patterns in coronene are complicated and controversial, as denoted by the lack of consensus on how its electronic structure should be described. Among the different proposed descriptions, the two most representative are those generated by Clar's aromatic π-sextet and adaptive natural density partitioning (AdNDP) models. Quantum-chemical calculations at the density functional theory level are performed to evaluate which model gives a better representation of coronene. To this end, we analyse the molecular structure of coronene, we estimate the aromaticity of its inner and outer rings using various local aromaticity descriptors, and we assess its chemical reactivity from the study of the Diels-Alder reaction with cyclopentadiene. Results obtained are compared with those computed for naphthalene and phenanthrene. Our conclusion is that Clar's π-sextet model provides the representation of coronene that better describes the physicochemical behavior of this molecule. © 2017 Wiley Periodicals, Inc.

  12. Comparison of two mathematical models for describing heat-induced cell killing

    International Nuclear Information System (INIS)

    Roti Roti, J.L.; Henle, K.J.

    1980-01-01

    A computer-based minimization algorithm is utilized to obtain the optimum fits of two models to hyperthermic cell killing data. The models chosen are the multitarget, single-hit equation, which is in general use, and the linear-quadratic equation, which has been applied to cell killing by ionizing irradiation but not to heat-induced cell killing. The linear-quadratic equation fits hyperthermic cell killing data as well as the multitarget, single-hit equation. Both parameters of the linear-quadratic equation obey the Arrhenius law, whereas only one of the two parameters of the multitarget, single-hit equation obeys the Arrhenius law. Thus the linear-quadratic function can completely define cell killing as a function of both time and temperature. In addition, the linear-quadratic model will provide a simplified approach to the study of the synergism between heat and X irradiation
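
    For reference, the two compared models have the standard closed forms below, where D is the dose (or its heat-equivalent); the notation follows common radiobiology usage rather than the paper's own symbols:

    ```latex
    S_{\mathrm{MT}}(D) = 1 - \left(1 - e^{-D/D_0}\right)^{n}
    \quad \text{(multitarget, single-hit)}

    S_{\mathrm{LQ}}(D) = e^{-(\alpha D + \beta D^{2})}
    \quad \text{(linear-quadratic)}
    ```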

  13. New Model to describe the interaction of slow neutrons with solid deuterium

    International Nuclear Information System (INIS)

    Granada, J.R

    2009-01-01

    A new scattering kernel to describe the interaction of slow neutrons with solid deuterium was developed. The main characteristics of that system are contained in the formalism, including the lattice's density of states, the Young-Koppel quantum treatment of the rotations, and the internal molecular vibrations. The elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. The results from the new model are compared with the best available experimental data, showing very good agreement.

  14. Regularization of a massless dirac model to describe anomalous electromagnetic response of Weyl semimetals

    International Nuclear Information System (INIS)

    Takane, Yoshitake

    2016-01-01

    An unbounded massless Dirac model with two nondegenerate Dirac cones is the simplest model for Weyl semimetals, which show the anomalous electromagnetic response of chiral magnetic effect (CME) and anomalous Hall effect (AHE). However, if this model is naively used to analyze the electromagnetic response within a linear response theory, it gives the result apparently inconsistent with the persuasive prediction based on a lattice model. We show that this serious difficulty is related to the breaking of current conservation in the Dirac model due to quantum anomaly and can be removed if current and charge operators are redefined to include the contribution from the anomaly. We demonstrate that the CME as well as the AHE can be properly described using newly defined operators, and clarify that the CME is determined by the competition between the contribution from the anomaly and that from low-energy electrons. (author)

  15. A theoretical model to describe progressions and regressions for exercise rehabilitation.

    Science.gov (United States)

    Blanchard, Sam; Glasgow, Phil

    2014-08-01

    This article aims to describe a new theoretical model to simplify and aid visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists, but it is an area lacking in theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack any visual aids to assist clinicians. The concept of the proposed model is that new stimuli, either intrinsic or extrinsic to the participant, can be added or exchanged with other stimuli in order to gradually progress an exercise whilst remaining safe and effective. The proposed model supports the core skills of physiotherapists by assisting clinical reasoning, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by clinicians of any skill level. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Using concept maps to describe undergraduate students’ mental model in microbiology course

    Science.gov (United States)

    Hamdiyati, Y.; Sudargo, F.; Redjeki, S.; Fitriani, A.

    2018-05-01

    The purpose of this research was to describe students' mental models in a mental-model-based microbiology course, using concept maps as the assessment tool. Respondents were 5th-semester undergraduate students of Biology Education at Universitas Pendidikan Indonesia. The mental modelling instrument used was the concept map. Data were taken on the Bacteria subtopic. A concept map rubric was subsequently developed with a maximum score of 4. Quantitative data were converted into qualitative data to determine the mental model level, namely: emergent = score 1, transitional = score 2, close to extended = score 3, and extended = score 4. The results showed that the mental model level on the Bacteria subtopic before the implementation of the mental-model-based microbiology course was at the transitional level. After implementation of the course, mental models were at the transitional, close to extended, and extended levels. This indicates an increase in the level of students' mental models after the implementation of the mental-model-based microbiology course using concept maps as the assessment tool.

  17. Describing the clinical reasoning process: application of a model of enablement to a pediatric case.

    Science.gov (United States)

    Furze, Jennifer; Nelson, Kelly; O'Hare, Megan; Ortner, Amanda; Threlkeld, A Joseph; Jensen, Gail M

    2013-04-01

    Clinical reasoning is a core tenet of physical therapy practice leading to optimal patient care. The purpose of this case was to describe the outcomes, subjective experience, and reflective clinical reasoning process for a child with cerebral palsy using the International Classification of Functioning, Disability, and Health (ICF) model. Application of the ICF framework to a 9-year-old boy with spastic triplegic cerebral palsy was utilized to capture the interwoven factors present in this case. Interventions in the pool occurred twice weekly for 1 h over a 10-week period. Immediately post and 4 months post-intervention, the child made functional and meaningful gains. The family unit also developed an enjoyment of exercising together. Each individual family member described psychological, emotional, or physical health improvements. Reflection using the ICF model as a framework to discuss clinical reasoning can highlight important factors contributing to effective patient management.

  18. Development of zircaloy deformation model to describe the zircaloy-4 cladding tube during accidents

    International Nuclear Information System (INIS)

    Raff, S.

    1978-01-01

    The development of a high-temperature deformation model for Zircaloy-4 cans is primarily based on numerous well-parametrized tensile tests to get the material behaviour including statistical variance. It is shown that plastic deformation may be described by a power creep law, the coefficients of which show strong dependence on temperature in the relevant temperature region. These coefficients have been determined. A model based on these coefficients has been established which, apart from best estimate deformation, gives upper and lower bounds of possible deformation. The model derived from isothermal uniaxial tests is being verified against isothermal and transient tube burst tests. The influence of preoxidation and increased oxygen concentration during deformation is modeled on the basis of the pseudobinary Zircaloy-oxygen phase diagram. (author)

  19. Calibration and validation of a model describing complete autotrophic nitrogen removal in a granular SBR system

    DEFF Research Database (Denmark)

    Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist

    2013-01-01

    BACKGROUND: A validated model describing the nitritation-anammox process in a granular sequencing batch reactor (SBR) system is an important tool for: a) design of future experiments and b) prediction of process performance during optimization, while applying process control, or during system scale-up. RESULTS: A model was calibrated using a step-wise procedure customized for the specific needs of the system. The important steps in the procedure were initialization, steady-state and dynamic calibration, and validation. A fast and effective initialization approach was developed to approximate pseudo… screening of the parameter space proposed by Sin et al. (2008) - to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system…

  20. An empirical model describing the postnatal growth of organs in ICRP reference humans: Pt. 1

    International Nuclear Information System (INIS)

    Walker, J.T.

    1991-01-01

    An empirical model is presented for describing the postnatal mass growth of lungs in ICRP reference humans. A combined exponential and logistic function containing six parameters is fitted to ICRP 23 lung data using a weighted non-linear least squares technique. The results indicate that the model delineates the data well. Further analysis shows that reference male lungs attain a higher pubertal peak velocity (PPV) and adult mass than female lungs, although the latter reach their PPV and adult mass first. Furthermore, the model shows that lung growth rates in infants are two to three orders of magnitude higher than those in mature adults. This finding is important because of the possible association between higher radiation risks and infants' organs, which have faster cell turnover rates than mature adult organs. The significance of the model for ICRP dosimetric purposes is discussed. (author)

  1. A relativistic gauge model describing N particles bound by harmonic forces

    International Nuclear Information System (INIS)

    Filippov, A.T.

    1987-01-01

    Application of the principle of gauging to linear canonical symmetries of the simplest (bilinear) Lagrangians is shown to produce a relativistic version of the Lagrangian describing N particles bound by harmonic forces. For pairwise coupled identical particles the gauge group is T_1 × U_1 × SU_{N-1}. A model for the relativistic discrete string (a chain of N particles) is also discussed. All these gauge theories of particles can be quantized by standard methods.

  2. Composite model describing the excitation and de-excitation of nitrogen by an electron beam

    International Nuclear Information System (INIS)

    Kassem, A.E.; Hickman, R.S.

    1975-01-01

    Based on recent studies, the effect of re-excited ions in the emission of electron beam induced fluorescence in nitrogen has been estimated. These effects are included in the formulation of a composite model describing the excitation and de-excitation of nitrogen by an electron beam. The shortcomings of previous models, namely the dependence of the measured temperature on true gas temperature as well as the gas density, are almost completely eliminated in the range of temperatures and densities covered by the available data. (auth)

  3. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    Science.gov (United States)

    Mace, Andy; Rudolph, David L.; Kachanoski , R. Gary

    1998-01-01

    The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem criterion (m = 1 - 1/n) is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with the residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ≈ 0, rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
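
    A minimal sketch of the van Genuchten retention curve with the Mualem restriction m = 1 - 1/n, and the corresponding Mualem-based conductivity prediction; the parameter values below are illustrative for a coarse sand, not the fitted values from the cores:

    ```python
    import numpy as np

    def theta_vg(h, theta_r, theta_s, alpha, n):
        """van Genuchten (1980) retention curve with Mualem restriction m = 1 - 1/n."""
        m = 1.0 - 1.0 / n
        Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
        return theta_r + (theta_s - theta_r) * Se

    def K_mualem(Se, K_s, n):
        """Conductivity from Mualem (1976) theory as closed by van Genuchten (1980)."""
        m = 1.0 - 1.0 / n
        return K_s * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

    # Illustrative parameters (alpha in 1/cm, K_s in m/s)
    theta_r, theta_s, alpha, n, K_s = 0.05, 0.38, 0.12, 2.6, 1e-4
    h = -50.0                                            # pressure head, cm
    theta = theta_vg(h, theta_r, theta_s, alpha, n)
    Se = (theta - theta_r) / (theta_s - theta_r)
    print(theta, K_mualem(Se, K_s, n))
    ```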

  4. A physicist's model of computation

    International Nuclear Information System (INIS)

    Fredkin, E.

    1991-01-01

    An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs

  5. Computational modeling in biomechanics

    CERN Document Server

    Mofrad, Mohammad

    2010-01-01

    This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamics and scale coupling methods.

  6. A metallic solution model with adjustable parameter for describing ternary thermodynamic properties from its binary constituents

    International Nuclear Information System (INIS)

    Fang Zheng; Qiu Guanzhou

    2007-01-01

    A metallic solution model with an adjustable parameter k has been developed to predict the thermodynamic properties of ternary systems from those of their three constituent binaries. In the present model, the excess Gibbs free energy for a ternary mixture is expressed as a weighted probability sum of those of the binaries, and the k value is determined based on the assumption that the ternary interaction generally strengthens the mixing effects for metallic solutions with weak interaction, making the Gibbs free energy of mixing of the ternary system more negative than that before the interaction is considered. This point is not considered in the models currently reported, which differ only in the geometrical definition of the molar values of the components and are completely empirical rather than grounded in thermodynamic principles. The current model describes experimental results very well and, by adjusting the k value, also agrees with those from models used widely in the literature. Three ternary systems, Mg-Cu-Ni, Zn-In-Cd, and Cd-Bi-Pb, are recalculated to demonstrate the method of determining k and the precision of the model. The results of the calculations, especially those for the Mg-Cu-Ni system, are better than those predicted by the current models in the literature.

  7. A model describing intra-granular fission gas behaviour in oxide fuel for advanced engineering tools

    Science.gov (United States)

    Pizzocri, D.; Pastore, G.; Barani, T.; Magni, A.; Luzzi, L.; Van Uffelen, P.; Pitts, S. A.; Alfonsi, A.; Hales, J. D.

    2018-04-01

    The description of intra-granular fission gas behaviour is a fundamental part of any model for the prediction of fission gas release and swelling in nuclear fuel. In this work we present a model describing the evolution of intra-granular fission gas bubbles in terms of bubble number density and average size, coupled to gas release to grain boundaries. The model considers the fundamental processes of single gas atom diffusion, gas bubble nucleation, re-solution and gas atom trapping at bubbles. The model is derived from a detailed cluster dynamics formulation, yet it consists of only three differential equations in its final form; hence, it can be efficiently applied in engineering fuel performance codes while retaining a physical basis. We discuss improvements relative to previous single-size models for intra-granular bubble evolution. We validate the model against experimental data, both in terms of bubble number density and average bubble radius. Lastly, we perform an uncertainty and sensitivity analysis by propagating the uncertainties in the parameters to model results.

  8. Feasibility study for a plasticity model to describe the transient thermomechanical behavior of Zircaloy. Final report

    International Nuclear Information System (INIS)

    Valanis, K.C.

    1979-11-01

    The conceptual framework of the endochronic theory is described and a summary of its capabilities, as well as past and potential applications to the mechanical response of metals to general histories of deformation, temperature, and radiation is given. The purely mechanical part of the theory is developed on the basis of the concept of intrinsic time which serves to incorporate in a unified and concise fashion the effects of strain history and strain rate on the stress response. The effects of temperature are introduced by means of the theory of deformation kinetics through its relation to the internal variable theory of irreversible thermodynamics. As a result, physically sound formulae are developed which account for the effect of temperature history on the stress response. An approach to describing irradiation effects is briefly discussed. More research would be needed to define appropriate constitutive representations for Zircaloy. The endochronic theory is also looked at from a numerical analysis viewpoint of future applications to problems of practical interest. In appendix B a first cut attempt has been made to assess the computational efficiencies of material constitutive equation approaches

  9. A bottom-up model to describe consumers’ preferences towards late season peaches

    Energy Technology Data Exchange (ETDEWEB)

    Groot, E.; Albisu, L.M.

    2015-07-01

    Peaches have been consumed in Mediterranean countries since ancient times. Nowadays there are few areas in Europe that produce peaches with a Protected Designation of Origin (PDO), and the Calanda area is one of them. The aim of this work is to describe consumers' preferences towards late season PDO Calanda peaches in the city of Zaragoza, Spain, using a bottom-up model. The bottom-up model provides a greater amount of information than top-down models. In this approach, one utility function is estimated per consumer. Thus, it is not necessary to make assumptions about preference distributions and correlations across respondents. It was observed that preference distributions were neither normal nor independently distributed. If those preferences were estimated by top-down models, conclusions would be biased. This paper also explores a new way to describe preferences through individual utility functions. Results show that the largest behavioural group gathered origin-sensitive consumers. Their utility increased if the peaches were produced in the Calanda area and, especially, when the peaches had the PDO Calanda brand. The second most valuable attribute for consumers was the price. Peach size and packaging were less important in the purchase decision. Nevertheless, it is advisable to avoid trading the smallest peaches (weighing around 160 g/fruit). Traders also have to be careful when using active packaging: it was found that a group of consumers disliked this kind of product, probably because they perceived it as less natural. (Author)

  10. An extended car-following model to describe connected traffic dynamics under cyberattacks

    Science.gov (United States)

    Wang, Pengcheng; Yu, Guizhen; Wu, Xinkai; Qin, Hongmao; Wang, Yunpeng

    2018-04-01

    In this paper, the impacts of potential cyberattacks on vehicles are modeled through an extended car-following model. To better understand the mechanism of traffic disturbance under cyberattacks, linear and nonlinear stability analyses are conducted. In particular, linear stability analysis is performed to obtain different neutral stability conditions for various parameters, and nonlinear stability analysis is carried out using the reductive perturbation method to derive the soliton solution of the modified Korteweg-de Vries (mKdV) equation near the critical point, which is used to draw coexisting stability lines. Furthermore, by applying linear and nonlinear stability analysis, the traffic flow state can be divided into three states, i.e., stable, metastable and unstable, which are useful for describing shockwave dynamics and driving behaviors under cyberattacks. The theoretical results show that the proposed car-following model is capable of successfully describing the car-following behavior of connected vehicles under cyberattacks. Finally, numerical simulation using real values has confirmed the validity of the theoretical analysis. The results further demonstrate that our model can be used to help avoid collisions and relieve traffic congestion under cybersecurity threats.
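
    To give a concrete flavour of such simulations, a toy sketch (not the authors' model) of an optimal-velocity car-following loop in which one vehicle receives a falsified headway, standing in for a cyberattack; all parameter values are assumptions:

    ```python
    import numpy as np

    def optimal_velocity(h, v_max=30.0, h_c=25.0):
        """Classic optimal-velocity function of headway h."""
        return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

    def simulate(n_cars=50, road=1500.0, a=0.5, dt=0.1, steps=5000,
                 attacked=10, bias=5.0):
        """Ring-road OV model; car `attacked` sees a spoofed headway (+bias)."""
        x = np.linspace(0.0, road, n_cars, endpoint=False)
        v = np.full(n_cars, optimal_velocity(road / n_cars))
        for _ in range(steps):
            h = (np.roll(x, -1) - x) % road      # headway to the leader
            h_seen = h.copy()
            h_seen[attacked] += bias             # cyberattack: falsified sensor input
            v += a * (optimal_velocity(h_seen) - v) * dt
            x = (x + v * dt) % road
        return x, v

    x, v = simulate()
    print(v.std())   # a nonzero velocity spread signals an attack-induced disturbance
    ```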

  11. Development of a model describing virus removal process in an activated sludge basin

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.; Shiragami, N. Unno, H. [Tokyo Institute of Technology, Tokyo (Japan)

    1995-06-20

    The virus removal process in the liquid phase of an activated sludge basin possibly consists of physicochemical processes, such as adsorption onto sludge flocs, and biological processes, such as microbial predation and inactivation by virucidal components excreted by microbes. To properly describe the virus behavior in an activated sludge basin, a simple model is proposed based on experimental data obtained using poliovirus type 1. A three-compartment model is employed, which includes the virus in the liquid phase and in the peripheral and inner regions of sludge flocs. Using the model, the virus removal process was successfully simulated, highlighting the implications of the virus distribution in the activated sludge basin. 17 refs., 8 figs.
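
    A hedged sketch of what a three-compartment formulation of this kind could look like, with first-order transfer between the liquid phase (C_L), the floc periphery (C_P) and the floc interior (C_I), plus an inactivation term; the rate constants are hypothetical, not the paper's values:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def virus_model(t, y, k_ads=0.8, k_des=0.1, k_in=0.3, k_out=0.02, k_dec=0.05):
        """Three compartments: liquid (C_L), floc periphery (C_P), floc interior (C_I)."""
        C_L, C_P, C_I = y
        dC_L = -k_ads * C_L + k_des * C_P                   # adsorption/desorption
        dC_P = k_ads * C_L - (k_des + k_in + k_dec) * C_P + k_out * C_I
        dC_I = k_in * C_P - (k_out + k_dec) * C_I           # k_dec: inactivation
        return [dC_L, dC_P, dC_I]

    sol = solve_ivp(virus_model, (0.0, 24.0), [1.0, 0.0, 0.0], dense_output=True)
    print(sol.y[:, -1])   # compartment concentrations after 24 h
    ```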

  12. Digital Materials - Evaluation of the Possibilities of using Selected Hyperelastic Models to Describe Constitutive Relations

    Science.gov (United States)

    Mańkowski, J.; Lipnicki, J.

    2017-08-01

    The authors sought to identify the parameters of numerical models of digital materials, which are a kind of composite resulting from the manufacture of a product in 3D printers. Because the printer has several heads, a new material can result from the mixing of materials with radically different properties during the production of a single layer of the product. The new material has properties dependent on the properties of the base materials and their proportions. The tensile characteristics of digital materials are often non-linear and qualify to be described by hyperelastic material models. The identification was conducted based on the results of tensile tests, fitting polynomial coefficients of various degrees. Drucker's stability criterion was also examined. Fourteen different materials were analyzed.

  13. Digital Materials – Evaluation of the Possibilities of using Selected Hyperelastic Models to Describe Constitutive Relations

    Directory of Open Access Journals (Sweden)

    Mańkowski J.

    2017-08-01

    Full Text Available The authors sought to identify the parameters of numerical models of digital materials, which are a kind of composite resulting from the manufacture of a product in 3D printers. Because the printer has several heads, a new material can result from the mixing of materials with radically different properties during the production of a single layer of the product. The new material has properties dependent on the properties of the base materials and their proportions. The tensile characteristics of digital materials are often non-linear and qualify to be described by hyperelastic material models. The identification was conducted based on the results of tensile tests, fitting polynomial coefficients of various degrees. Drucker's stability criterion was also examined. Fourteen different materials were analyzed.

  14. Ordinal regression models to describe tourist satisfaction with Sintra's world heritage

    Science.gov (United States)

    Mouriño, Helena

    2013-10-01

    In tourism research, ordinal regression models are becoming a very powerful tool in modelling the relationship between an ordinal response variable and a set of explanatory variables. In August and September 2010, we conducted a pioneering tourist survey in Sintra, Portugal. The data were obtained by face-to-face interviews at the entrances of the Palaces and Parks of Sintra. The work developed in this paper focuses on two main points: tourists' perception of the entrance fees, and the overall level of satisfaction with this heritage site. For attaining these goals, ordinal regression models were developed. We concluded that tourists' nationality was the only significant variable for describing the perception of the admission fees. Also, Sintra's image among tourists depends not only on their nationality, but also on previous knowledge of Sintra's World Heritage status.
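
    A minimal sketch of the kind of ordered-logit fit used in such analyses, here with statsmodels' OrderedModel; the survey variables and data below are hypothetical stand-ins, not the Sintra survey:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    # Hypothetical survey data: ordinal satisfaction (1-5) and two regressors
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "satisfaction": pd.Categorical(rng.integers(1, 6, 200), ordered=True),
        "foreign": rng.integers(0, 2, 200),      # nationality dummy
        "knows_whs": rng.integers(0, 2, 200),    # prior knowledge of WH status
    })

    model = OrderedModel(df["satisfaction"], df[["foreign", "knows_whs"]],
                         distr="logit")
    res = model.fit(method="bfgs", disp=False)
    print(res.summary())   # coefficients shift the latent satisfaction scale
    ```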

  15. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
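
    A sketch of the non-Bayesian core of such a construction: estimating the row-stochastic transition matrix T at a chosen mesoscopic time step τ from a trajectory already assigned to mesostates, then checking the CMMM condition that all diagonal elements are at least 0.5; the trajectory below is illustrative:

    ```python
    import numpy as np

    def transition_matrix(labels, tau):
        """Row-stochastic transition matrix T at mesoscopic time step tau,
        counted from a trajectory already assigned to mesostates."""
        n = labels.max() + 1
        counts = np.zeros((n, n))
        for a, b in zip(labels[:-tau], labels[tau:]):
            counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    # Illustrative "sticky" two-state trajectory standing in for clustered frames
    rng = np.random.default_rng(1)
    labels = np.zeros(10_000, dtype=int)
    for i in range(1, labels.size):
        stay = 0.98 if labels[i - 1] == 0 else 0.96
        labels[i] = labels[i - 1] if rng.random() < stay else 1 - labels[i - 1]

    T = transition_matrix(labels, tau=10)
    print(T)
    print(np.all(np.diag(T) >= 0.5))  # the CMMM condition on the diagonal of T
    ```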

  16. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  17. COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT

    OpenAIRE

    Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar

    2011-01-01

    Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...

  18. Development of a geometric uncertainty model describing the accuracy of position-sensitive, coincidence neutron detection

    Energy Technology Data Exchange (ETDEWEB)

    Trivelpiece, Cory L., E-mail: cory@psu.ed [Department of Mechanical and Nuclear Engineering, The Pennsylvania, State University, University Park, PA 16802 (United States); Brenizer, J.S. [Department of Mechanical and Nuclear Engineering, The Pennsylvania, State University, University Park, PA 16802 (United States)

    2011-01-01

    A diameter of uncertainty (D_u) was derived from a geometric uncertainty model describing the error that would be introduced into position-sensitive, coincidence neutron detection measurements by charged-particle transport phenomena and the experimental setup. The transport of α and Li ions, produced by the ¹⁰B(n,α)⁷Li reaction, through free-standing boro-phosphosilicate glass (BPSG) films was modeled using the Monte Carlo code SRIM, and the results of these simulations were used as input to determine D_u for position-sensitive, coincidence techniques. The results of these calculations showed that D_u depends on encoder separation, the angle of charged-particle emission, and film thickness. For certain emission scenarios, the magnitude of D_u is larger than the physical size of the neutron-converting media that were being modeled. Spheres of uncertainty were developed that describe the difference in flight path times among the bounding-case emission scenarios considered in this work. It was shown that overlapping spheres represent emission angles and particle flight path lengths that would be difficult to resolve in terms of particle time-of-flight measurements. However, based on the timing resolution of current nuclear instrumentation, emission events that yield large D_u can be discriminated by logical arguments during spectral deconvolution.

  19. Computer modeling of liquid crystals

    International Nuclear Information System (INIS)

    Al-Barwani, M.S.

    1999-01-01

    In this thesis, we investigate several aspects of the behaviour of liquid crystal molecules near interfaces using computer simulation. We briefly discuss experimental, theoretical and computer simulation studies of some liquid crystal interfaces. We then describe three essentially independent research topics. The first of these concerns extensive simulations of a liquid crystal formed by long flexible molecules. We examined the bulk behaviour of the model and its structure. Studies of a film of smectic liquid crystal surrounded by vapour were also carried out. Extensive simulations were also done for a long-molecule/short-molecule mixture; studies were then carried out to investigate the liquid-vapour interface of the mixture. Next, we report the results of large scale simulations of soft spherocylinders of two different lengths. We examined the bulk coexistence of the nematic and isotropic phases of the model. Once the bulk coexistence behaviour was known, properties of the nematic-isotropic interface were investigated. This was done by fitting order parameter and density profiles to appropriate mathematical functions and calculating the biaxial order parameter. We briefly discuss the ordering at the interfaces and make attempts to calculate the surface tension. Finally, in our third project, we study the effects of different surface topographies on creating bistable nematic liquid crystal devices. This was carried out using a model based on the discretisation of the free energy on a lattice. We use simulation to find the lowest energy states and investigate whether they are degenerate in energy. We also test our model by studying the Frederiks transition and comparing with analytical and other simulation results. (author)

  20. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models are playing important roles in design and analysis of chemicals-based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working… development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines… In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse…

  1. Development of the model describing highly excited states of odd deformed nuclei

    International Nuclear Information System (INIS)

    Malov, L.A.; Solov'ev, V.G.

    1975-01-01

    An approximate method is given for solving the system of equations obtained earlier for describing the structure of states with intermediate and high energies in the framework of the model taking into account the interaction of quasiparticles with phonons. The new method possesses a number of advantages over the approximate methods of solving the system of equations mentioned. The study is performed for the example of an odd deformed nucleus when several one-quasiparticle components are taken into account at the same time

  2. Critical properties of a ferroelectric superlattice described by a transverse spin-1/2 Ising model

    International Nuclear Information System (INIS)

    Tabyaoui, A; Saber, M; Baerner, K; Ainane, A

    2007-01-01

    The phase transition properties of a ferroelectric superlattice with two alternating layers A and B, described by a transverse spin-1/2 Ising model, have been investigated using the effective field theory within a probability distribution technique that accounts for the self spin correlation functions. The Curie temperature T_c, polarization and susceptibility have been obtained. The effects of the transverse field and of the ferroelectric and antiferroelectric interfacial coupling strength between the two ferroelectric materials are discussed. They relate to the physical properties of antiferroelectric/ferroelectric superlattices.

  3. Model describing the effect of employment of the United States military in a complex emergency.

    Science.gov (United States)

    MacMillan, Donald S

    2005-01-01

    The end of the Cold War vastly altered the worldwide political landscape. With the loss of a main competitor, the United States (US) military has had to adapt its strategic, operational, and tactical doctrines to an ever-increasing variety of non-traditional missions, including humanitarian operations. Complex emergencies (CEs) are defined in this paper from a political and military perspective, various factors that contribute to their development are described, and issues resulting from the employment of US military forces are discussed. A model was developed to illustrate the course of a humanitarian emergency and the potential impact of a military response. The US interventions in Haiti, Northern Iraq, Kosovo, Somalia, Bosnia, and Rwanda serve as examples. A CE develops when there is civil conflict, loss of national governmental authority, a mass population movement, and massive economic failure, each leading to a general decline in food security. The military can alleviate a CE in four ways: (1) provide security for relief efforts; (2) enforce negotiated settlements; (3) provide security for non-combatants; and/or (4) employ logistical capabilities. The model incorporates Norton and Miskel's taxonomy of identifying failing states and helps illustrate the factors that lead to a CE. The model can be used to determine if and when military intervention will have the greatest impact. The model demonstrates that early military intervention and mission assignment within the core competencies of the forces can reverse the course of a CE. Further study will be needed to verify the model.

  4. A flowing plasma model to describe drift waves in a cylindrical helicon discharge

    International Nuclear Information System (INIS)

    Chang, L.; Hole, M. J.; Corr, C. S.

    2011-01-01

    A two-fluid model developed originally to describe wave oscillations in the vacuum arc centrifuge, a cylindrical, rapidly rotating, low temperature, and confined plasma column, is applied to interpret plasma oscillations in an RF-generated linear magnetized plasma [WOMBAT (waves on magnetized beams and turbulence)], with similar density and field strength. Compared to typical centrifuge plasmas, WOMBAT plasmas have slower normalized rotation frequency, lower temperature, and lower axial velocity. Despite these differences, the two-fluid model provides a consistent description of the WOMBAT plasma configuration and yields qualitative agreement between measured and predicted wave oscillation frequencies with axial field strength. In addition, the radial profile of the density perturbation predicted by this model is consistent with the data. Parameter scans show that the dispersion curve is sensitive to the axial field strength and the electron temperature, and the dependence of oscillation frequency on electron temperature matches the experiment. These results consolidate earlier claims that the density and floating potential oscillations are a resistive drift mode, driven by the density gradient. To our knowledge, this is the first detailed physics model of flowing plasmas in the diffusion region away from the RF source. Possible extensions to the model, including temperature nonuniformity and magnetic field oscillations, are also discussed.

  5. Application of a mathematical model to describe the effects of chlorpyrifos on Caenorhabditis elegans development.

    Directory of Open Access Journals (Sweden)

    Windy A Boyd

    2009-09-01

    Full Text Available The nematode Caenorhabditis elegans is being assessed as an alternative model organism as part of an interagency effort to develop better means to test potentially toxic substances. As part of this effort, assays are being developed that use the COPAS Biosort flow sorting technology to record optical measurements (time of flight (TOF) and extinction (EXT)) of individual nematodes under various chemical exposure conditions. A mathematical model has been created that uses Biosort data to quantitatively and qualitatively describe C. elegans growth and link changes in growth rates to biological events. Chlorpyrifos, an organophosphate pesticide known to cause developmental delays and malformations in mammals, was used as a model toxicant to test the applicability of the growth model for in vivo toxicological testing. L1 larval nematodes were exposed to a range of sub-lethal chlorpyrifos concentrations (0-75 microM) and measured every 12 h. In the absence of toxicant, C. elegans matured from L1s to gravid adults by 60 h. A mathematical model was used to estimate nematode size distributions at various times. Mathematical modeling of the distributions allowed the number of measured nematodes and the log(EXT) and log(TOF) growth rates to be estimated. The model revealed three distinct growth phases. The points at which estimated growth rates changed (change points) were constant across the ten chlorpyrifos concentrations. Concentration response curves with respect to several model-estimated quantities (numbers of measured nematodes, mean log(TOF) and log(EXT), growth rates, and time to reach change points) showed a significant decrease in C. elegans growth with increasing chlorpyrifos concentration. Effects of chlorpyrifos on C. elegans growth and development were mathematically modeled. Statistical tests confirmed a significant concentration effect on several model endpoints. This confirmed that chlorpyrifos affects C. elegans development in a concentration-dependent manner.
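
    One plausible way to recover phase-specific growth rates and change points of the kind described is a continuous piecewise-linear fit to mean log-size versus time; the sketch below uses hypothetical data and a fixed three-phase form, not the study's estimation procedure:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def piecewise3(t, y0, r1, r2, r3, t1, t2):
        """Continuous three-phase linear growth in log-size: rates r1, r2, r3,
        with change points t1 < t2."""
        y = y0 + r1 * np.minimum(t, t1)
        y += r2 * np.clip(t - t1, 0.0, t2 - t1)
        y += r3 * np.maximum(t - t2, 0.0)
        return y

    # Hypothetical mean log(EXT) values measured every 12 h
    t = np.array([0, 12, 24, 36, 48, 60], dtype=float)
    log_ext = np.array([2.0, 2.9, 3.7, 4.0, 4.6, 5.3])

    p0 = (2.0, 0.07, 0.02, 0.05, 20.0, 40.0)   # initial guesses
    params, _ = curve_fit(piecewise3, t, log_ext, p0=p0)
    print(params)   # per-phase growth rates and estimated change points
    ```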

  6. A vapour bubble collapse model to describe the fragmentation of low-melting materials

    International Nuclear Information System (INIS)

    Benz, R.; Schober, P.

    1977-11-01

    By means of a model, the fragmentation of a hot metal melt caused by collapsing vapour bubbles is investigated. In particular, the paper develops the physical model ideas for calculating the contact temperature that establishes itself between the melt and the coolant, the waiting time until bubble nucleation occurs, and the maximum attainable vapour-bubble radius as a function of the coolant temperature. This is followed by a description of the computer program belonging to this model and of the results of an extensive parameter study. The study examined the influence of the melt and coolant temperatures, the melted mass, the nucleation-site density, the average maximum bubble radius, the duration of film breakdown and the heat-transfer coefficient. The calculated course of the fragmentation process meets expectations, whereas the duration of the process appears somewhat too long. The dependence of the surface enlargement on the subcooling of the water bath and on the initial temperature of the melt is not yet reproduced satisfactorily by the model. The reasons for this are the temperature increase of the water bath and the fact that the coupling between heat-flux density and nucleation-site density is not taken into consideration. Further improvement of the model is necessary and may bring the results closer to the experimental observations. (orig.) [de]
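
    The contact temperature such a model computes is presumably the classical result for two semi-infinite bodies brought into sudden contact, weighted by the thermal effusivities of the two materials. A minimal sketch, with illustrative property values rather than the paper's:

      from math import sqrt

      def effusivity(k, rho, cp):
          # Thermal effusivity e = sqrt(k * rho * cp);
          # k in W/(m K), rho in kg/m^3, cp in J/(kg K).
          return sqrt(k * rho * cp)

      def contact_temperature(t_melt, t_cool, e_melt, e_cool):
          # Interface temperature of two semi-infinite bodies in perfect contact.
          return (e_melt * t_melt + e_cool * t_cool) / (e_melt + e_cool)

      e_m = effusivity(k=30.0, rho=7000.0, cp=600.0)   # hypothetical molten metal
      e_c = effusivity(k=0.6, rho=960.0, cp=4200.0)    # water (illustrative values)
      print(f"contact temperature: {contact_temperature(1200.0, 300.0, e_m, e_c):.0f} K")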

  7. Cosmological models described by a mixture of van der Waals fluid and dark energy

    International Nuclear Information System (INIS)

    Kremer, G.M.

    2003-01-01

    The Universe is modeled as a binary mixture whose constituents are described by a van der Waals fluid and by a dark energy density. The dark energy density is considered either as quintessence or as the Chaplygin gas. The irreversible processes concerning the energy transfer between the van der Waals fluid and the gravitational field are taken into account. This model can simulate (a) an inflationary period where the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton, (b) an accelerated period where the acceleration is positive but it decreases and tends to zero whereas the energy density of the van der Waals fluid decays, (c) a decelerated period which corresponds to a matter dominated period with a non-negative pressure, and (d) a present accelerated period where the dark energy density outweighs the energy density of the van der Waals fluid

  8. The solar modulation of galactic cosmic rays as described by a time-dependent drift model

    International Nuclear Information System (INIS)

    Le Roux, J.A.

    1990-09-01

    The modulation process is understood to be an interaction between cosmic rays and the solar wind. The heliosphere and the observed modulation of cosmic rays in the heliosphere were reviewed, and the time-dependent nature of the long-term modulation of cosmic rays highlighted. A two-dimensional time-dependent drift model that describes the long-term modulation of cosmic rays is presented. Application of the time-dependent drift model during times of increased solar activity showed that drift should be reduced during such periods. Isolated Forbush decreases were also studied in an effort to explain some observed trends in the properties of the Forbush decrease as a function of radial distance. The magnitude of the Forbush decrease and its recovery time were therefore studied as a function of radial distance in the equatorial plane. 154 refs., 95 figs., 1 tab

  9. Double porosity model to describe both permeability change and dissolution processes

    International Nuclear Information System (INIS)

    Niibori, Yuichi; Usui, Hideo; Chida, Taiji

    2015-01-01

    Cement is a practical material for constructing the geological disposal system of radioactive wastes. The dynamic behavior of both the permeability change and the dissolution process caused by high-pH groundwater was explained using a double porosity model assuming that each packed particle consists of a sphere-shaped aggregation of smaller particles. This model assumes two kinds of porosities, between the particle clusters and between the particles, where the former porosity change mainly controls the permeability change of the bed, and the latter porosity change controls the diffusion of OH⁻ ions inducing the dissolution of silica. The fundamental equations consist of a diffusion equation in spherical coordinates for OH⁻ ions, including a first-order reaction term, and some equations describing the size changes of both the particles and the particle clusters with time. The change of the overall permeability of the packed bed is evaluated by the Kozeny-Carman equation and the calculated radii of the particle clusters. The calculated result describes well the experimental result of both the permeability change and the dissolution process. (author)
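
    The Kozeny-Carman step of such a model can be sketched directly: the bed permeability follows from the inter-cluster porosity and the computed cluster diameter. A minimal illustration (the constant 180 is the usual value for packed spheres; the numbers are invented, not the paper's):

      def kozeny_carman(porosity, d_cluster):
          # Permeability k [m^2] of a packed bed of spheres of diameter d [m].
          return d_cluster**2 * porosity**3 / (180.0 * (1.0 - porosity) ** 2)

      k_before = kozeny_carman(porosity=0.40, d_cluster=1.0e-3)
      k_after = kozeny_carman(porosity=0.30, d_cluster=0.9e-3)  # after dissolution
      print(f"permeability drops by a factor of {k_before / k_after:.1f}")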

  10. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Background Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
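
    For readers unfamiliar with this model class, the sketch below shows the generic form of a scoring-matrix predictor like those SMM produces: a peptide's score is the sum of per-position residue scores (optional pair-interaction terms are omitted). The matrix values are random placeholders, not SMM output.

      import numpy as np

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
      rng = np.random.default_rng(0)
      pssm = rng.normal(size=(9, 20))      # 9-mer positions x 20 residues

      def score(peptide):
          # Predicted score (e.g. binding): sum of matrix entries along the peptide.
          return sum(pssm[i, AMINO_ACIDS.index(aa)] for i, aa in enumerate(peptide))

      print(score("SIINFEKLA"))            # hypothetical 9-mer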

  11. A mathematical model for describing the mechanical behaviour of root canal instruments.

    Science.gov (United States)

    Zhang, E W; Cheung, G S P; Zheng, Y F

    2011-01-01

    The purpose of this study was to establish a general mathematical model for describing the mechanical behaviour of root canal instruments by combining a theoretical analytical approach with a numerical finite-element method. Mathematical formulas representing the longitudinal (taper, helical angle and pitch) and cross-sectional configurations and area, the bending and torsional inertia, the curvature of the boundary point and the geometry of the loading condition were derived. Torsional and bending stresses and the resultant deformation were expressed mathematically as a function of these geometric parameters, the modulus of elasticity of the material and the applied load. As illustrations, three brands of NiTi endodontic files of different cross-sectional configurations (ProTaper, Hero 642, and Mani NRT) were analysed under pure torsion and pure bending situations by entering the model into a finite-element analysis package (ANSYS). Numerical results confirmed that mathematical models are a feasible method to analyse the mechanical properties and predict the stress and deformation of root canal instruments during root canal preparation. Mathematical and numerical models can be a suitable way to examine mechanical behaviour as a criterion for instrument design and to predict the stress and strain experienced by endodontic instruments during root canal preparation. © 2010 International Endodontic Journal.

  12. Cosmological model with viscosity media (dark fluid) described by an effective equation of state

    International Nuclear Information System (INIS)

    Ren Jie; Meng Xinhe

    2006-01-01

    A generally parameterized equation of state (EOS) is investigated in the cosmological evolution with bulk viscosity media modelled as a dark fluid, which can be regarded as a unification of dark energy and dark matter. Compared with the case of the perfect fluid, this EOS possesses four additional parameters, which can be interpreted as the case of a non-perfect fluid with time-dependent viscosity or a model with variable cosmological constant. From this general EOS, a completely integrable dynamical equation for the scale factor is obtained, with its solution given explicitly. (i) In this parameterized model of cosmology, for a special choice of the parameters we can explain the late-time accelerating expansion of the universe from a new viewpoint. The early inflation, the median (relatively late time) deceleration, and the recent cosmic acceleration may be unified in a single equation. (ii) A generalized relation of the Hubble parameter scaling with the redshift is obtained for some cosmological interests. (iii) By using the SNe Ia data to fit the effective viscosity model we show that the case of matter described by p=0 plus effective viscosity contributions can fit the observational gold data at an acceptable level

  13. Species-free species distribution models describe macroecological properties of protected area networks.

    Science.gov (United States)

    Robinson, Jason L; Fordyce, James A

    2017-01-01

    Among the greatest challenges facing the conservation of plants and animal species in protected areas are threats from a rapidly changing climate. An altered climate creates both challenges and opportunities for improving the management of protected areas in networks. Increasingly, quantitative tools like species distribution modeling are used to assess the performance of protected areas and predict potential responses to changing climates for groups of species, within a predictive framework. At larger geographic domains and scales, protected area network units have spatial geoclimatic properties that can be described in the gap analysis typically used to measure or aggregate the geographic distributions of species (stacked species distribution models, or S-SDM). We extend the use of species distribution modeling techniques in order to model the climate envelope (or "footprint") of individual protected areas within a network of protected areas distributed across the 48 conterminous United States and managed by the US National Park System. In our approach we treat each protected area as the geographic range of a hypothetical endemic species, then use MaxEnt and 5 uncorrelated BioClim variables to model the geographic distribution of the climatic envelope associated with each protected area unit (modeling the geographic area of park units as the range of a species). We describe the individual and aggregated climate envelopes predicted by a large network of 163 protected areas and briefly illustrate how macroecological measures of geodiversity can be derived from our analysis of the landscape ecological context of protected areas. To estimate trajectories of change in the temporal distribution of climatic features within a protected area network, we projected the climate envelopes of protected areas in current conditions onto a dataset of predicted future climatic conditions. Our results suggest that the climate envelopes of some parks may be locally unique or have

  14. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks. I first describe how a neurocomputat...

  15. Methodological Bases for Describing Risks of the Enterprise Business Model in Integrated Reporting

    Directory of Open Access Journals (Sweden)

    Nesterenko Oksana O.

    2017-12-01

    The aim of the article is to substantiate the methodological bases for describing the business and accounting risks of an enterprise business model in integrated reporting for their timely detection and assessment, and to develop methods for their leveling or minimization and possible prevention. It is proposed to consider risks in the process of forming integrated reporting from two sides: first, risks that arise in the business model of an organization and should be disclosed in its integrated report; second, accounting risks of integrated reporting, which should be taken into account by members of the cross-sectoral working group and management personnel in the process of forming and promulgating integrated reporting. To develop an adequate accounting and analytical tool for disclosure of information about the risks of the business model and integrated reporting, and their leveling or minimization, a terminological analysis of the essence of entrepreneurial and accounting risks is carried out in the article. Entrepreneurial risk is defined as an objective-subjective economic category that characterizes the probability of negative or positive consequences of economic-social-ecological activity within the framework of the business model of an enterprise under uncertainty. Accounting risk is suggested to be understood as the probability of unfavorable consequences as a result of organizational and methodological errors in the integrated accounting system, which present a threat to the quality, accuracy and reliability of the reporting information on economic, social and environmental activities in integrated reporting, as well as a threat of inappropriate decision-making by stakeholders based on the integrated report. For the timely identification of business risks and maximum leveling of the influence of accounting risks on the process of formation and publication of integrated reporting, the study considers the place of entrepreneurial and accounting risks in

  16. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on learning seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) easy and intuitive creation of mathematical models using just standard mathematical notation; 2) simultaneous exploration of images, tables, graphs and object animations; 3) attribution of mathematical properties expressed in the models to animated objects; and finally 4) computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on learning seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
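
    The forward problem underlying such exercises is simple to state: the travel time of a (straight) ray is the sum, over the grid cells it crosses, of path length divided by cell velocity; tomography then inverts many such times for the velocities. A classroom-level sketch with an invented grid and ray:

      import numpy as np

      velocity = np.array([[3.0, 3.0, 4.0],
                           [3.0, 5.0, 4.0],
                           [3.0, 3.0, 4.0]])   # km/s, 3x3 cells of 1 km each

      def travel_time(cells, lengths):
          # cells: list of (row, col); lengths: path length in each cell (km).
          return sum(L / velocity[r, c] for (r, c), L in zip(cells, lengths))

      # A horizontal ray through the middle row of the grid:
      print(travel_time([(1, 0), (1, 1), (1, 2)], [1.0, 1.0, 1.0]), "s")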

  17. Model tool to describe chemical structures in XML format utilizing structural fragments and chemical ontology.

    Science.gov (United States)

    Sankar, Punnaivanam; Alain, Krief; Aghila, Gnanasekaran

    2010-05-24

    We have developed a model structure-editing tool, ChemEd, programmed in JAVA, which allows drawing chemical structures on a graphical user interface (GUI) by selecting appropriate structural fragments defined in a fragment library. The terms representing the structural fragments are organized in fragment ontology to provide a conceptual support. ChemEd describes the chemical structure in an XML document (ChemFul) with rich semantics explicitly encoding the details of the chemical bonding, the hybridization status, and the electron environment around each atom. The document can be further processed through suitable algorithms and with the support of external chemical ontologies to generate understandable reports about the functional groups present in the structure and their specific environment.

  18. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    General Article. Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low cost software packages and tools. They can serve as a useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  19. Light reflection models for computer graphics.

    Science.gov (United States)

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.
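
    The earliest class of models the article describes, direct lighting only, is captured by the classical Lambertian-diffuse plus Phong-specular combination. A minimal sketch (single unit-intensity light, no shadows or interreflection), not any particular renderer's code:

      import numpy as np

      def unit(v):
          v = np.asarray(v, dtype=float)
          return v / np.linalg.norm(v)

      def direct_light(n, l, v, kd, ks, shininess):
          # Lambertian diffuse plus Phong specular at one surface point;
          # n = surface normal, l = direction to light, v = direction to viewer.
          n, l, v = unit(n), unit(l), unit(v)
          r = 2.0 * np.dot(n, l) * n - l      # mirror reflection of l about n
          diffuse = kd * max(np.dot(n, l), 0.0)
          specular = ks * max(np.dot(r, v), 0.0) ** shininess
          return diffuse + specular

      print(direct_light(n=[0, 0, 1], l=[0, 1, 1], v=[1, 0, 1],
                         kd=0.7, ks=0.3, shininess=32))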

  20. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into space flight related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.

  1. Regression models describing Rosa hybrida response to day/night temperature and photosynthetic photon flux

    International Nuclear Information System (INIS)

    Hopper, D.A.; Hammer, P.A.

    1991-01-01

    A central composite rotatable design was used to estimate quadratic equations describing the relationship of irradiance, as measured by photosynthetic photon flux (PPF), and day (DT) and night (NT) temperatures to the growth and development of Rosa hybrida L. in controlled environments. Plants were subjected to 15 treatment combinations of PPF, DT, and NT according to the coding of the design matrix. Day and night length were each 12 hours. Environmental factor ranges were chosen to include conditions representative of winter and spring commercial greenhouse production environments in the midwestern United States. After an initial hard pinch, 11 plant growth characteristics were measured every 10 days and at flowering. Four plant characteristics were recorded to describe flower bud development. Response surface equations were displayed as three-dimensional plots, with DT and NT as the base axes and the plant character on the z-axis while PPF was held constant. Response surfaces illustrated the plant response to interactions of DT and NT, while comparisons between plots at different PPF showed the overall effect of PPF. Canonical analysis of all regression models revealed the stationary point and general shape of the response surface. All stationary points of the significant models were located outside the original design space, and all but one surface was a saddle shape. Both the plots and the analysis showed greater stem diameter, as well as higher fresh and dry weights of stems, leaves, and flower buds at flowering under combinations of low DT (≤ 17 °C) and low NT (≤ 14 °C). However, low DT and NT delayed both visible bud formation and development to flowering. Increased PPF increased overall flower stem quality by increasing stem diameter and the fresh and dry weights of all plant parts at flowering, and decreased the time until visible bud formation and flowering. These results summarize measured development at
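
    The second-order response-surface machinery used here is standard: for coded factors x1 = DT, x2 = NT, x3 = PPF, fit y = b0 + Σ bi·xi + Σ bii·xi² + Σ bij·xi·xj by least squares over the central composite runs. A sketch with an invented response standing in for the measured plant characters:

      import itertools
      import numpy as np

      alpha = 1.682                           # rotatable axial distance, 3 factors
      factorial = np.array(list(itertools.product((-1.0, 1.0), repeat=3)))
      axial = np.array([[a if i == j else 0.0 for j in range(3)]
                        for i in range(3) for a in (-alpha, alpha)])
      X = np.vstack([factorial, axial, np.zeros((1, 3))])   # 15 coded runs

      rng = np.random.default_rng(1)          # invented response, e.g. stem diameter
      y = (4.5 - 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.5 * X[:, 2]
           - 0.1 * X[:, 0] ** 2 + rng.normal(0.0, 0.05, len(X)))

      def quadratic_design(X):
          x1, x2, x3 = X.T
          return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                  x1 ** 2, x2 ** 2, x3 ** 2,
                                  x1 * x2, x1 * x3, x2 * x3])

      beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
      print(beta)   # canonical analysis then examines the quadratic part of beta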

  2. Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)

    2017-04-15

    The simulation of fuel rod behavior is a complex task that demands not only accurate models to describe the numerous phenomena occurring in the pellet, cladding and internal rod atmosphere but also an adequate interconnection between them. In recent years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has come forth. Heat removal occurs in a quite different way than during normal operation, and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation not suitable for inclusion in a fuel performance code. Moreover, a number of reliable and well-known codes exist that perform this task. Nevertheless, keeping in mind the purpose of building a code focused on fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides the fuel simulation with the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described and the results of different comparisons with experimental data and with thermal-hydraulic codes are offered. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.

  3. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    Science.gov (United States)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-09-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported as mixotrophic. This modelling study investigates the implications for trophic dynamics of including 3 contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton) and constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mol N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 mol N mol P⁻¹ (as occurred from the mid-1990s), APA allows a 3-32% increase of annual gross primary production (GPP). As a result of the higher GPP, the annual sedimentation increases, as does the bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. Under high irradiance, non-constitutive mixotrophy appreciably increases annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. In this ecosystem, non-constitutive mixotrophy is also observed to have an indirect stimulating effect on diatoms. Constitutive mixotrophy in nanoflagellates appears to have little influence on this ecosystem functioning. An important conclusion from this work is that contrasting forms of mixotrophy have different

  4. A unified model to describe the anisotropic viscoplastic behavior of Zircaloy-4 cladding tubes

    International Nuclear Information System (INIS)

    Delobelle, P.; Robinet, P.; Bouffioux, P.; Geyer, P.; Pichon, I. Le

    1996-01-01

    This paper presents the constitutive equations of a unified viscoplastic model and its validation with experimental data. The mechanical tests were carried out in a temperature range of 20 to 400 °C on both cold-worked stress-relieved and fully annealed Zircaloy-4 tubes. Although their geometry (14.3 by 1.2 mm) is different, the crystallographic texture was close to that expected in the cladding tubes. To characterize the anisotropy, mechanical tests were performed under both monotonic and cyclic uni- and bi-directional loadings, i.e., tension-compression, tension-torsion, and tension-internal pressure tests. The results obtained at ambient temperature and the independence of the ratio \(R_p = \varepsilon^{p}_{\theta\theta}/\varepsilon^{p}_{zz}\) with respect to temperature would seem to indicate that the set of anisotropy coefficients does not depend on temperature. Zircaloy-4 material also has a slight supplementary hardening during out-of-phase cyclic loading. The authors propose to extend the formulation of a unified viscoplastic model, developed and identified elsewhere for other initially isotropic materials, to the case of Zircaloy-4. Generally speaking, anisotropy is introduced through fourth order tensors affecting the flow directions, the linear kinematical hardening components, as well as the dynamic and static recoveries of the aforementioned hardening variables. The ability of the model to describe all the mechanical properties of the material is shown. The application of the model to simulate mechanical tests (tension, creep, and relaxation) performed on true CWSR Zircaloy-4 cladding tubes with low tin content is also presented

  5. Comparison of three nonlinear models to describe long-term tag shedding by lake trout

    Science.gov (United States)

    Fabrizio, Mary C.; Swanson, Bruce L.; Schram, Stephen T.; Hoff, Michael H.

    1996-01-01

    We estimated long-term tag-shedding rates for lake trout Salvelinus namaycush using two existing models and a model we developed to account for the observed permanence of some tags. Because tag design changed over the course of the study, we examined tag-shedding rates for three types of numbered anchor tags (Floy tags FD-67, FD-67C, and FD-68BC) and an unprinted anchor tag (FD-67F). Lake trout from the Gull Island Shoal region, Lake Superior, were double-tagged, and subsequent recaptures were monitored in annual surveys conducted from 1974 to 1992. We modeled tag-shedding rates, using time at liberty and probabilities of tag shedding estimated from fish released in 1974 and 1978–1983 and later recaptured. Long-term shedding of numbered anchor tags in lake trout was best described by a nonlinear model with two parameters: an instantaneous tag-shedding rate and a constant representing the proportion of tags that were never shed. Although our estimates of annual shedding rates varied with tag type (0.300 for FD-67, 0.441 for FD-67C, and 0.656 for FD-68BC), differences were not significant. About 36% of tags remained permanently affixed to the fish. Of the numbered tags that were shed (about 64%), two mechanisms contributed to tag loss: disintegration and dislodgment. Tags from about 11% of recaptured fish had disintegrated, but most tags were dislodged. Unprinted tags were shed at a significant but low rate immediately after release, but the long-term, annual shedding rate of these tags was only 0.013. Compared with unprinted tags, numbered tags dislodged at higher annual rates; we hypothesized that this was due to the greater frictional drag associated with the larger cross-sectional area of numbered tags.
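
    The best-supported model described above can be written down compactly: a fraction rho of tags is never shed, and the remainder are lost at instantaneous rate L, so retention is Q(t) = rho + (1 - rho)·exp(-L·t). A sketch of fitting it, with invented recapture proportions rather than the study's data:

      import numpy as np
      from scipy.optimize import curve_fit

      def retention(t, rho, L):
          # Probability a tag is still attached after t years at liberty.
          return rho + (1.0 - rho) * np.exp(-L * t)

      years = np.array([1, 2, 4, 6, 9, 12], dtype=float)     # hypothetical data
      observed = np.array([0.78, 0.66, 0.52, 0.45, 0.40, 0.37])

      (rho_hat, L_hat), _ = curve_fit(retention, years, observed,
                                      p0=[0.3, 0.4], bounds=([0, 0], [1, np.inf]))
      print(f"never-shed fraction: {rho_hat:.2f}, "
            f"instantaneous shedding rate: {L_hat:.2f}/yr")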

  6. Yeast for Mathematicians: A Ferment of Discovery and Model Competition to Describe Data.

    Science.gov (United States)

    Lewis, Matthew; Powell, James

    2017-02-01

    In addition to the memorization, algorithmic skills and vocabulary which are the default focus in many mathematics classrooms, professional mathematicians are expected to creatively apply known techniques, construct new mathematical approaches and communicate with and about mathematics. We propose that students can learn these professional, higher-level skills through Laboratory Experiences in Mathematical Biology which put students in the role of mathematics researcher creating mathematics to describe and understand biological data. Here we introduce a laboratory experience centered on yeast (Saccharomyces cerevisiae) growing in a small capped flask with a jar to collect carbon dioxide created during yeast growth and respiration. The lab requires no specialized equipment and can easily be run in the context of a college math class. Students collect data and develop mathematical models to explain the data. To help place instructors in the role of mentor/collaborator (as opposed to jury/judge), we facilitate the lab using model competition judged via Bayesian Information Criterion. This article includes details about the class activity conducted, student examples and pedagogical strategies for success.
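
    A sketch of the lab's model-competition step: fit two candidate growth models to (invented) CO2-volume data and rank them with BIC, using the Gaussian-error form BIC = n·ln(RSS/n) + k·ln(n); the lower BIC wins. The data and models below are illustrative, not the authors' materials.

      import numpy as np
      from scipy.optimize import curve_fit

      def exponential(t, v0, r):
          return v0 * np.exp(r * t)

      def logistic(t, K, v0, r):
          return K / (1.0 + (K / v0 - 1.0) * np.exp(-r * t))

      t = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)     # hours
      v = np.array([0.5, 0.9, 1.6, 2.6, 3.8, 4.8, 5.4, 5.7])  # mL CO2 (invented)

      def bic(model, p0):
          popt, _ = curve_fit(model, t, v, p0=p0, maxfev=10000)
          rss = np.sum((v - model(t, *popt)) ** 2)
          n, k = len(t), len(popt)
          return n * np.log(rss / n) + k * np.log(n)

      print("exponential BIC:", bic(exponential, [0.5, 0.5]))
      print("logistic BIC:   ", bic(logistic, [6.0, 0.5, 0.8]))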

  7. Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?

    Science.gov (United States)

    Gasbarro, Andrew

    2018-03-01

    In recent years, many investigations of confining Yang Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternate EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has χ²/d.o.f. = 0.5, compared to χ²/d.o.f. = 29.6 from fitting next-to-leading order chiral perturbation theory. When the 0⁺⁺ (σ) mass is included in the fit, χ²/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.

  8. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.

  9. Computational modelling in fluid mechanics

    International Nuclear Information System (INIS)

    Hauguel, A.

    1985-01-01

    Modelling the greatest part of environmental or industrial flow problems gives rise to very similar types of equations. The considerable increase in computing capacity over the last ten years consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented is now a tool complementary to experimental facilities for carrying out studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others [fr]

  10. Pharmacodynamic Model To Describe the Concentration-Dependent Selection of Cefotaxime-Resistant Escherichia coli

    Science.gov (United States)

    Olofsson, Sara K.; Geli, Patricia; Andersson, Dan I.; Cars, Otto

    2005-01-01

    Antibiotic dosing regimens may vary in their capacity to select mutants. Our hypothesis was that selection of a more resistant bacterial subpopulation would increase with the time within a selective window (SW), i.e., when drug concentrations fall between the MICs of two strains. An in vitro kinetic model was used to study the selection of two Escherichia coli strains with different susceptibilities to cefotaxime. The bacterial mixtures were exposed to cefotaxime for 24 h and SWs of 1, 2, 4, 8, and 12 h. A mathematical model was developed that described the selection of preexisting and newborn mutants and the post-MIC effect (PME) as functions of pharmacokinetic parameters. Our main conclusions were as follows: (i) the selection between preexisting mutants increased with the time within the SW; (ii) the emergence and selection of newborn mutants increased with the time within the SW (with a short time, only 4% of the preexisting mutants were replaced by newborn mutants, compared to the longest times, where 100% were replaced); and (iii) PME increased with the area under the concentration-time curve (AUC) and was slightly more pronounced with a long elimination half-life (T1/2) than with a short T1/2 situation, when AUC is fixed. We showed that, in a dynamic competition between strains with different levels of resistance, the appearance of newborn high-level resistant mutants from the parental strains and the PME can strongly affect the outcome of the selection and that pharmacodynamic models can be used to predict the outcome of resistance development. PMID:16304176
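
    The selective-window idea can be made concrete for single-dose exponential elimination, C(t) = C0·exp(-k·t): the time spent between the two strains' MICs depends only on the MIC ratio and the half-life. A sketch with illustrative numbers, not the study's values:

      from math import log

      def time_in_window(c0, mic_low, mic_high, half_life):
          # Hours during which mic_low < C(t) < mic_high after a dose c0,
          # assuming first-order elimination with the given half-life.
          k = log(2.0) / half_life
          t_enter = log(c0 / mic_high) / k if c0 > mic_high else 0.0
          t_exit = log(c0 / mic_low) / k if c0 > mic_low else 0.0
          return max(t_exit - t_enter, 0.0)

      print(time_in_window(c0=10.0, mic_low=0.05, mic_high=1.0, half_life=2.0), "h")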

  11. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  12. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  13. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos is one of the major scientific discoveries of our times. In fact many scientists ... But there are other natural phenomena that are not predictable though ... characteristics of chaos. ... The position and velocity are all that are needed to determine the motion of a ... a system of equations that modelled the earth's weather ...

  14. The generalized model of polypeptide chain describing the helix-coil transition in biopolymers

    International Nuclear Information System (INIS)

    Mamasakhlisov, E.S.; Badasyan, A.V.; Tsarukyan, A.V.; Grigoryan, A.V.; Morozov, V.F.

    2005-07-01

    In this paper we summarize some results of our theoretical investigations of the helix-coil transition in both single-strand (polypeptide) and two-strand (polynucleotide) macromolecules. The Hamiltonian of the Generalized Model of Polypeptide Chain (GMPC) is introduced to describe a system in which the conformations are correlated over some dimensional range Δ (it equals 3 for a polypeptide, because one H-bond fixes three pairs of rotations; for double-strand DNA it equals the chain rigidity, because loop formation is impossible on scales smaller than Δ). The Hamiltonian does not contain any parameter designed especially for the helix-coil transition and uses purely molecular microscopic parameters (the energy of hydrogen bond formation, the reduced partition function of a repeated unit, the number of repeated units fixed by one hydrogen bond, and the energies of interaction between the repeated units and the solvent molecules). To calculate averages we evaluate the partition function using the transfer-matrix approach. The GMPC allowed us to describe the influence of a number of factors affecting the transition on the basis of a unified microscopic approach. Thus we found that solvents change the transition temperature and interval in different ways, depending on the type of solvent and on the energy of solvent-macromolecule interaction; stacking on the background of H-bonding increases stability and decreases the cooperativity of melting. For heterogeneous DNA we could analytically derive the well-known formulae for the transition temperature and interval. In the framework of the GMPC we calculate and show the difference between two order parameters of the helix-coil transition: the helicity degree and the average fraction of repeated units in helical conformation. This article aims to review the results obtained over twenty years in the context of the GMPC. (author)
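
    As a minimal stand-in for the transfer-matrix machinery that the GMPC generalizes, the sketch below performs the classical two-state (Zimm-Bragg) computation of the helicity degree, with s the propagation and σ the nucleation parameter. It illustrates the method, not the GMPC Hamiltonian itself.

      import numpy as np

      def helicity(s, sigma, N):
          # Average helical fraction of an N-unit chain in the Zimm-Bragg model:
          # theta = (1/N) * d ln Z / d ln s, evaluated by finite differences.
          def logZ(s_):
              M = np.array([[s_, 1.0], [sigma * s_, 1.0]])   # transfer matrix
              v = np.linalg.matrix_power(M, N) @ np.array([1.0, 1.0])
              return np.log(v[1])                            # chain starts in coil
          ds = 1e-6
          return (logZ(s + ds) - logZ(s)) / (ds / s) / N

      for s in (0.8, 1.0, 1.2):
          print(f"s = {s}: helicity = {helicity(s, sigma=1e-3, N=100):.2f}")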

  15. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.

  16. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  17. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  18. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.
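
    The book's case study in miniature: solve u' = -a·u, u(0) = I, with the Forward Euler update u[n+1] = u[n](1 - a·Δt) and compare with the exact solution I·exp(-a·t). A sketch in the book's spirit, not its code:

      import numpy as np

      I, a, T, dt = 1.0, 2.0, 4.0, 0.1
      N = int(round(T / dt))
      u = np.empty(N + 1)
      u[0] = I
      for n in range(N):
          u[n + 1] = u[n] * (1.0 - a * dt)     # Forward Euler update

      t = np.linspace(0, T, N + 1)
      error = np.abs(u - I * np.exp(-a * t)).max()
      print(f"max error with dt={dt}: {error:.2e}")  # shrinks roughly linearly in dt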

  19. The Polyakov, Nambu and Jona-Lasinio model and its applications to describe the sub-nuclear particles

    International Nuclear Information System (INIS)

    Blanquier, E.

    2013-01-01

    To study high-energy nuclear physics and the associated phenomena, such as the quark-gluon plasma / hadronic matter phase transition, the Nambu and Jona-Lasinio model (NJL) appears as an interesting alternative to Quantum Chromodynamics, which is not solvable at the considered energies. Indeed, the NJL model allows the description of quark physics at finite temperatures and densities. Furthermore, in order to try to correct a limitation of the NJL model, i.e. the absence of confinement, a coupling of the quarks/antiquarks to a Polyakov loop was proposed, forming the PNJL model. The objective of this thesis is to explore the possibilities offered by the NJL and PNJL models to describe relevant sub-nuclear particles (quarks, mesons, diquarks and baryons), to study their interactions, and to proceed to a dynamical study involving these particles. After a recall of the useful tools, we modeled the u, d, s effective quarks and the mesons. Then, we described the baryons as quark-diquark bound states. A part of the work concerned the calculation of the cross-sections associated with the possible reactions involving these particles. Then, we incorporated these results into a computer code, in order to study the cooling of a quark/antiquark plasma and its hadronization. In this study, each particle evolves in a system in which the temperature and the densities are local parameters. We have two types of interactions: one due to the collisions, and the other a remote interaction, notably between quarks. Finally, we studied the properties of our approach: qualities, limitations, and possible evolutions. (author)

  20. Trust models in ubiquitous computing.

    Science.gov (United States)

    Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro

    2008-10-28

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  1. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54. Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article.

  2. Description of mathematical models and computer programs

    International Nuclear Information System (INIS)

    1977-01-01

    The paper gives a description of mathematical models and computer programs for analysing possible strategies for spent fuel management, with emphasis on economic analysis. The computer programs developed describe the material flows, facility construction schedules, capital investment schedules and operating costs for the facilities used in managing the spent fuel. The computer programs use a combination of simulation and optimization procedures for the economic analyses. Many of the fuel cycle steps (such as spent fuel discharges, storage at the reactor, and transport to the RFCC) are described in physical and economic terms through simulation modeling, while others (such as reprocessing plant size and commissioning schedules, interim storage facility commissioning schedules, etc.) are subjected to economic optimization procedures to determine the approximate lowest-cost plans from among the available feasible alternatives.

  3. Conceptual modeling of postmortem evaluation findings to describe dairy cow deaths.

    Science.gov (United States)

    McConnel, C S; Garry, F B; Hill, A E; Lombard, J E; Gould, D H

    2010-01-01

    Dairy cow mortality levels in the United States are excessive and increasing over time. To better define cause and effect and combat rising mortality, clearer definitions of the reasons that cows die need to be acquired through thorough necropsy-based postmortem evaluations. The current study focused on organizing information generated from postmortem evaluations into a monitoring system that is based on the fundamentals of conceptual modeling and that will potentially be translatable into on-farm relational databases. This observational study was conducted on 3 high-producing, commercial dairies in northern Colorado. Throughout the study period a thorough postmortem evaluation was performed by veterinarians on cows that died on each dairy. Postmortem data included necropsy findings, life-history features (e.g., birth date, lactation number, lactational and reproductive status), clinical history and treatments, and pertinent aspects of operational management that were subject to change and considered integral to the poor outcome. During this study, 174 postmortem evaluations were performed. Postmortem evaluation results were conceptually modeled to view each death within the context of the web of factors influencing the dairy and the cow. Categories were formulated describing mortality in terms of functional characteristics potentially amenable to easy performance evaluation, management oversight, and research. In total, 21 death categories with 7 category themes were created. Themes included specific disease processes with variable etiologies, failure of disease recognition or treatment, traumatic events, multifactorial failures linked to transition or negative energy balance issues, problems with feed management, miscellaneous events not amenable to prevention or treatment, and undetermined causes. Although postmortem evaluations provide the relevant information necessary for framing a cow's death, a restructuring of on-farm databases is needed to integrate this

  4. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular for problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  5. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  6. Predictions and implications of a poisson process model to describe corrosion of transuranic waste drums

    International Nuclear Information System (INIS)

    Lyon, B.F.; Holmes, J.A.; Wilbert, K.A.

    1995-01-01

    A risk assessment methodology is described in this paper to compare risks associated with immediate or near-term retrieval of transuranic (TRU) waste drums from bermed storage versus delayed retrieval. Assuming a Poisson process adequately describes corrosion, significant breaching of drums is expected to begin at ~15 and 24 yr for pitting and general corrosion, respectively. Because of this breaching, more risk will be incurred by delayed than by immediate retrieval.
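
    Under the homogeneous Poisson assumption, the probability that a drum is breached by time t is 1 - exp(-λt), where λ is the perforation rate per drum-year. A sketch with a hypothetical rate (the paper's fitted rates are not reproduced here):

      from math import exp

      def p_breached(lam, t_years):
          # P(at least one breach by t) under a homogeneous Poisson process.
          return 1.0 - exp(-lam * t_years)

      lam = 0.02                       # hypothetical breaches per drum-year
      for t in (15, 24, 40):
          print(f"t = {t:2d} yr: {p_breached(lam, t):.1%} of drums breached on average")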

  7. Development and application of an asymmetric deformation model to describe the fuel rod behaviour during LOCA

    International Nuclear Information System (INIS)

    Chakraborty, A.K.; Schubert, J.D.

    1983-01-01

    For the calculation of clad ballooning in single-rod and rod-bundle experiments, a model has been developed that considers the influence of azimuthal temperature gradients due to the existing eccentricity of the pellets. This model is based on the secondary creep model of Norton and on the concentric deformation model ending in cladding burst as proposed by F. Erbacher. The new model considers the azimuthal temperature differences along the cladding and the resulting differences in deformation. With this model, calculations of cladding burst deformations from single-rod and rod-bundle experiments are performed with good agreement.

  8. Integrated model of insulin and glucose kinetics describing both hepatic glucose and pancreatic insulin regulation

    DEFF Research Database (Denmark)

    Erlandsen, Mogens; Martinussen, Christoffer; Gravholt, Claus Højbjerg

    2018-01-01

    Background and objectives: Modeling of glucose kinetics has to a large extent been based on models with plasma insulin as a known forcing function. Furthermore, population-based statistical methods for parameter estimation in these models have mainly addressed random inter-individual variations and not intra-individual variations in the parameters. Here we present an integrated whole-body model of glucose and insulin kinetics which extends the well-known two-compartment glucose minimal model. The population-based estimation technique allows for quantification of both random inter- and intra-individual variation in selected parameters using simultaneous data series on glucose and insulin. Methods: We extend the two-compartment glucose model into a whole-body model for both glucose and insulin using a simple model for the pancreas compartment which includes feedback of glucose on both
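
    For orientation, the classical two-compartment glucose minimal model that this work extends treats plasma insulin as a forcing function; the whole-body model adds insulin kinetics and a pancreas compartment. A Bergman-type sketch with illustrative parameters, not the paper's model or values:

      import numpy as np
      from scipy.integrate import solve_ivp

      Gb, Ib = 5.0, 10.0                # basal glucose (mmol/L), insulin (mU/L)
      p1, p2, p3 = 0.03, 0.02, 1.0e-4   # illustrative minimal-model parameters

      def insulin(t):                   # hypothetical post-load insulin excursion
          return Ib + 50.0 * np.exp(-t / 30.0) * (t > 0)

      def minimal_model(t, y):
          G, X = y
          dG = -(p1 + X) * G + p1 * Gb          # remote insulin action X lowers G
          dX = -p2 * X + p3 * (insulin(t) - Ib)
          return [dG, dX]

      sol = solve_ivp(minimal_model, (0, 240), [12.0, 0.0], dense_output=True)
      print("glucose at 120 min:", sol.sol(120.0)[0])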

  9. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used in computer forensic analysis. This paper proposes and elaborates on a novel model for use in computer profiling: the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. Together these provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a computer...

  10. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions...
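
    As a flavor of the kind of equation-based modeling the book teaches, here is a minimal sketch using the classic M/M/1 queue, a standard example from queueing theory (not necessarily one drawn from the book itself):

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Classic M/M/1 queue formulas; valid only while utilization < 1."""
    rho = arrival_rate / service_rate  # utilization
    if rho >= 1.0:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return {
        "utilization": rho,
        "mean_jobs_in_system": rho / (1.0 - rho),                    # L
        "mean_response_time": 1.0 / (service_rate - arrival_rate),   # W = L/lambda
    }

# Example: 80 requests/s arriving at a server that completes 100 requests/s.
print(mm1_metrics(80.0, 100.0))
```

    Even this two-parameter model captures the characteristic blow-up of response time as utilization approaches 1, which is the kind of insight such analytical models are meant to deliver.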

  11. Selection of heat transfer model for describing short-pulse laser heating silica-based sensor

    International Nuclear Information System (INIS)

    Hao Xiangnan; Nie Jinsong; Li Hua; Bian Jintian

    2012-01-01

    The fundamental equations of the Fourier and non-Fourier heat transfer models were solved numerically with the finite difference method. The relative changes between the temperature curves of the two models were analyzed under laser irradiation with pulse widths of 10 ns, 1 ns, 100 ps and 10 ps, and the impact of different thermal relaxation times on the non-Fourier results was discussed. For pulses with widths of 100 ps or less irradiating silicon material, the surface temperature increases slowly and a carrier effect occurs, which the non-Fourier model reflects properly. For a general material, the carrier effect occurs when the pulse width is less than or equal to the thermal relaxation time of the material; in this case, the non-Fourier model should be used. (authors)
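
    For context, the two constitutive laws being compared are, in their standard textbook forms (the paper's notation may differ), Fourier's law and the single-phase-lagging (Cattaneo-Vernotte) law, which adds a thermal relaxation time tau:

    $$ \mathbf{q} = -k\,\nabla T \quad \text{(Fourier)} \qquad\qquad \tau\,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -k\,\nabla T \quad \text{(non-Fourier)} $$

    Combining the non-Fourier law with energy conservation turns the parabolic heat equation into a hyperbolic one, so thermal disturbances propagate at finite speed; this is the behavior that matters when the pulse width is comparable to tau.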

  12. Non-linear modelling to describe lactation curve in Gir crossbred cows

    Directory of Open Access Journals (Sweden)

    Yogesh C. Bangar

    2017-02-01

    Full Text Available Abstract. Background: The modelling of lactation curves provides guidelines for formulating farm managerial practices in dairy cows. The aim of the present study was to determine the suitable non-linear model which most accurately fitted the lactation curves of five lactations in 134 Gir crossbred cows reared at the Research-Cum-Development Project (RCDP) on Cattle farm, MPKV (Maharashtra). Four models, viz. the gamma-type function, quadratic model, mixed log function and Wilmink model, were fitted to each lactation separately and then compared on the basis of goodness-of-fit measures, viz. adjusted R2, root mean square error (RMSE), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC). Results: In general, the highest milk yield was observed in the fourth lactation, whereas it was lowest in the first. Among the models investigated, the mixed log function and the gamma-type function provided the best fit of the lactation curve for the first and the remaining lactations, respectively. The quadratic model gave the poorest fit in almost all lactations. Peak yield was highest in the fourth lactation and lowest in the first. Further, the first lactation showed the highest persistency but a relatively longer time to peak yield than the other lactations. Conclusion: Lactation curve modelling using the gamma-type function may be helpful in setting management strategies at the farm level; however, the models must be re-optimized regularly before being implemented, to enhance productivity in Gir crossbred cows.
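
    A minimal sketch of fitting the gamma-type (Wood) function y(t) = a t^b e^(-ct) mentioned above; the data and parameter values here are synthetic and purely illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood's gamma-type lactation curve: daily yield at day-in-milk t."""
    return a * t**b * np.exp(-c * t)

# Synthetic daily-yield data for illustration only.
rng = np.random.default_rng(0)
t = np.arange(5, 305, 10.0)
y = wood(t, 12.0, 0.25, 0.004) + rng.normal(0.0, 0.4, t.size)

(a, b, c), _ = curve_fit(wood, t, y, p0=[10.0, 0.2, 0.003])
peak_day = b / c  # the Wood curve peaks where dy/dt = 0, i.e. at t = b/c
print(f"a = {a:.2f}, b = {b:.3f}, c = {c:.4f}, peak at ~{peak_day:.0f} days")
```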

  13. Fractional single-phase-lagging heat conduction model for describing anomalous diffusion

    Directory of Open Access Journals (Sweden)

    T.N. Mishra

    2016-03-01

    Full Text Available The fractional single-phase-lagging (FSPL) heat conduction model is obtained by combining a scalar time-fractional conservation equation with the single-phase-lagging (SPL) heat conduction model. Based on the FSPL model, anomalous diffusion within a finite thin film is investigated. The effect of different parameters on the solution is examined, and the asymptotic behavior of the FSPL model is studied. The analytical solution is obtained using the Laplace transform method, and the whole analysis is presented in dimensionless form. Numerical examples of particular interest are studied and discussed in detail.
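
    For readers unfamiliar with the fractional operator involved, time-fractional models of this kind typically use the Caputo derivative of order 0 < alpha < 1 (the standard definition; the paper's exact formulation is not reproduced here):

    $$ {}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\,ds $$

    which reduces to the ordinary first derivative as alpha tends to 1 and is what makes the resulting diffusion anomalous (sub-diffusive) for alpha < 1.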

  14. Steady shear rate rheology of suspensions, as described by the giant floc model

    NARCIS (Netherlands)

    Stein, H.N.; Laven, J.

    2001-01-01

    The breakdown of a particle network by shear is described as the development of shear planes: a region able to withstand low shear stresses may break down under a larger stress; thus, with increasing shear stress and shear rate, the mutual distance (A) between successive shear planes decreases.

  15. Using a Model to Describe Students' Inductive Reasoning in Problem Solving

    Science.gov (United States)

    Canadas, Maria C.; Castro, Encarnacion; Castro, Enrique

    2009-01-01

    Introduction: We present some aspects of a wider investigation (Canadas, 2007), whose main objective is to describe and characterize the inductive reasoning used by Spanish students in years 9 and 10 when they work on problems involving linear and quadratic sequences. Method: We produced a test composed of six problems with different…

  16. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas of pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene-algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced, to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial model computability is presented with algebraic trees realizing intuitionistic type...

  17. Computational algebraic geometry of epidemic models

    Science.gov (United States)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational algebraic geometry is applied to the analysis of various epidemic models for schistosomiasis and dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple; explicitly, the analysis is performed using Groebner bases, Hilbert dimension and Hilbert polynomials, computational tools that are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, in the Groebner basis, in the Hilbert dimension, and in the Hilbert polynomials. It is hoped that the results obtained here will prove useful for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
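
    As a minimal sketch of how a basic reproductive number can be derived symbolically (using a plain SIR model as a stand-in; the paper's schistosomiasis and dengue models are more elaborate, and the authors work in Maple rather than Python):

```python
import sympy as sp

beta, gamma = sp.symbols("beta gamma", positive=True)

# Next-generation matrix method for SIR: near the disease-free equilibrium,
# new infections enter at rate beta*I and leave the infected class at gamma*I.
F = sp.Matrix([[beta]])   # new-infection terms
V = sp.Matrix([[gamma]])  # transition (removal) terms
K = F * V.inv()           # next-generation matrix

# R0 is the spectral radius of K; K is 1x1 here, so it is the single eigenvalue.
R0 = list(K.eigenvals().keys())[0]
print(R0)  # prints beta/gamma
```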

  18. Getting computer models to communicate

    International Nuclear Information System (INIS)

    Caremoli, Ch.; Erhard, P.

    1999-01-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes, such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)

  19. Computational Modeling in Liver Surgery

    Directory of Open Access Journals (Sweden)

    Bruno Christ

    2017-11-01

    Full Text Available The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery.

  20. EMPIRICAL MODELS FOR DESCRIBING FIRE BEHAVIOR IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Full Text Available Modeling forest fire behavior is an important task that can assist in fire prevention and suppression operations. However, according to previous studies, the fire behavior models in common use worldwide do not correctly estimate fire behavior in Brazilian commercial hybrid eucalypt plantations. Therefore, this study aims to build new empirical models to predict the fire rate of spread, flame length and fuel consumption for such vegetation. To meet these objectives, 105 laboratory experimental burns were performed, in which the main fuel characteristics and weather variables that influence fire behavior were controlled and/or measured. Dependent and independent variables were fitted through multiple regression analysis. The proposed fire rate of spread model is based on wind speed, fuel bed bulk density and 1-h dead fuel moisture content (r2 = 0.86); the flame length model is based on fuel bed depth, 1-h dead fuel moisture content and wind speed (r2 = 0.72); and the proposed fuel consumption model has the 1-h dead fuel moisture content, fuel bed bulk density and 1-h dead dry fuel load as independent variables (r2 = 0.80). These models were used to develop a new fire behavior software package, the "Eucalyptus Fire Safety System".
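
    A minimal sketch of the multiple-regression step described above. The predictor names follow the abstract, but the data and coefficients below are synthetic and illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 105  # number of experimental burns in the study

# Illustrative predictors: wind speed, fuel bed bulk density,
# and 1-h dead fuel moisture content (units arbitrary here).
wind = rng.uniform(0.0, 4.0, n)
bulk_density = rng.uniform(10.0, 40.0, n)
moisture = rng.uniform(5.0, 20.0, n)

# Synthetic rate of spread generated from made-up coefficients plus noise.
ros = (0.5 + 0.8 * wind - 0.02 * bulk_density - 0.03 * moisture
       + rng.normal(0.0, 0.1, n))

# Ordinary least squares with an intercept column in the design matrix.
X = np.column_stack([np.ones(n), wind, bulk_density, moisture])
coef, *_ = np.linalg.lstsq(X, ros, rcond=None)
print("intercept and coefficients:", np.round(coef, 3))
```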

  1. A one-dimensional model to describe flow localization in viscoplastic slender bars subjected to super critical impact velocities

    Science.gov (United States)

    Vaz-Romero, A.; Rodríguez-Martínez, J. A.

    2018-01-01

    In this paper we investigate flow localization in viscoplastic slender bars subjected to dynamic tension. We explore loading rates above the critical impact velocity, where the wave initiated at the impacted end by the applied velocity triggers the localization of plastic deformation. The problem has been addressed using two kinds of numerical simulations: (1) one-dimensional finite difference calculations and (2) axisymmetric finite element computations. The latter calculations have been used to validate the capacity of the finite difference model to describe plastic flow localization at high impact velocities. The finite difference model, which stands out for its simplicity, provides insights into the roles played by the strain rate and temperature sensitivities of the material in the process of dynamic flow localization. Specifically, we have shown that viscosity can stabilize the material behavior to the point of preventing the appearance of the critical impact velocity. This is a key outcome of our investigation which, to the best of the authors' knowledge, has not been previously reported in the literature.

  2. A simple geometrical model describing shapes of soap films suspended on two rings

    Science.gov (United States)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple model based on conical frusta, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using this frusta-based approach we reproduced the well-known results for catenoid surfaces with and without a central disk, and we present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and the known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
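
    For reference, the exact minimal surface spanning two coaxial rings, against which a frusta model can be checked, is the catenoid (a standard result of the calculus of variations):

    $$ r(z) = c\,\cosh\!\left(\frac{z - z_0}{c}\right) $$

    where the constants c and z_0 are fixed by requiring the surface to meet both rings.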

  3. How well do basic models describe the turbidity currents coming down Monterey and Congo Canyon?

    Science.gov (United States)

    Cartigny, M.; Simmons, S.; Heerema, C.; Xu, J. P.; Azpiroz, M.; Clare, M. A.; Cooper, C.; Gales, J. A.; Maier, K. L.; Parsons, D. R.; Paull, C. K.; Sumner, E. J.; Talling, P.

    2017-12-01

    Turbidity currents rival rivers in their global capacity to transport sediment and organic carbon. Furthermore, turbidity currents break submarine cables that now carry >95% of global data traffic. Accurate turbidity current models are thus needed to quantify their transport capacity and to predict the forces exerted on seafloor structures. Despite this need, existing numerical models are typically calibrated only against scaled-down laboratory measurements, owing to the paucity of direct measurements of field-scale turbidity currents. This lack of calibration leaves much uncertainty in the validity of existing models. Here we use the most detailed observations of turbidity currents yet acquired to validate one of the most fundamental models proposed for turbidity currents, the modified Chézy model. The direct measurements on which the validation is based come from two sites that feature distinctly different flow modes and grain sizes. The first is the multi-institution Coordinated Canyon Experiment (CCE) in Monterey Canyon, California, where an array of six moorings along the canyon axis captured at least 15 flow events lasting up to hours. The second is the deep-sea Congo Canyon, where 10 finer-grained flows, each lasting several days, were measured by a single mooring. The moorings captured depth-resolved velocity and suspended sediment concentration at high resolution. The Chézy model has been very useful in river studies over the past 200 years, as it provides a rapid estimate of how flow velocity varies with changes in river level and energy slope. Chézy-type models assume that the gravitational force driving the flow equals the friction at the river bed, and modified Chézy models have been proposed for turbidity currents. However, the absence of detailed measurements of friction and sediment concentration within full-scale turbidity currents has forced modellers to make rough assumptions for these parameters. Here...
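
    For orientation, a commonly quoted modified Chézy balance for a turbidity current equates the downslope excess-density driving force with friction, giving a layer-averaged velocity of roughly the following form (a generic textbook version, not necessarily the exact closure used by these authors):

    $$ U \approx \sqrt{\frac{R\,g\,C\,h\,S}{c_D}} $$

    where R is the submerged specific gravity of the sediment, g the gravitational acceleration, C the depth-averaged volumetric sediment concentration, h the flow thickness, S the slope, and c_D a friction coefficient.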

  4. Transport in semiconductor nanowire superlattices described by coupled quantum mechanical and kinetic models.

    Science.gov (United States)

    Alvaro, M; Bonilla, L L; Carretero, M; Melnik, R V N; Prabhakar, S

    2013-08-21

    In this paper we develop a kinetic model for the analysis of semiconductor superlattices, accounting for quantum effects. The model consists of a Boltzmann-Poisson type system of equations with simplified Bhatnagar-Gross-Krook collisions, obtained from the general time-dependent Schrödinger-Poisson model using Wigner functions. This system for superlattice transport is supplemented by the quantum mechanical part of the model based on the Ben-Daniel-Duke form of the Schrödinger equation for a cylindrical superlattice of finite radius. The resulting energy spectrum is used to characterize the Fermi-Dirac distribution that appears in the Bhatnagar-Gross-Krook collision, thereby coupling the quantum mechanical and kinetic parts of the model. The kinetic model uses the dispersion relation obtained by the generalized Kronig-Penney method, and allows us to estimate radii of quantum wire superlattices that have the same miniband widths as in experiments. It also allows us to determine more accurately the time-dependent characteristics of superlattices, in particular their current density. Results, for several experimentally grown superlattices, are discussed in the context of self-sustained coherent oscillations of the current density which are important in an increasing range of current and potential applications.

  5. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond those obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria, consisting of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short-term outcomes, moderating and mediating factors, and long-term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified as possibly resulting from these interventions were changed physician or patient knowledge, beliefs or attitudes, as well as changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long-term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details the evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  6. An empirical model to describe performance degradation for warranty abuse detection in portable electronics

    International Nuclear Information System (INIS)

    Oh, Hyunseok; Choi, Seunghyuk; Kim, Keunsu; Youn, Byeng D.; Pecht, Michael

    2015-01-01

    Portable electronics makers have introduced liquid damage indicators (LDIs) into their products to detect warranty abuse caused by water damage. However, under certain conditions these indicators can exhibit inconsistencies in detecting liquid damage, and this study is motivated by the resulting doubts about the reliability of LDIs in portable electronics. In this paper, first, a life-test scheme is devised for LDIs in conjunction with a robust color classification rule. Second, a degradation model is proposed that considers the two physical mechanisms at work in LDIs: (1) the phase change from vapor to water and (2) water transport in the porous paper. Finally, the degradation model is validated with additional tests using actual smartphone sets subjected to thermal cycling between -15 °C and 25 °C at a relative humidity of 95%. By employing the innovative life-testing scheme and the novel performance degradation model, the performance of LDIs for a particular application can be assessed quickly and accurately. - Highlights: • Devise an efficient life-testing scheme for a warranty abuse detector in portable electronics. • Develop a performance degradation model for the warranty abuse detector used in portable electronics. • Validate the performance degradation model with life tests of actual smartphone sets. • Help make decisions on warranty service in portable electronics manufacturing

  7. A model, describing the influence of water management alternatives on dike stability

    Directory of Open Access Journals (Sweden)

    J. W. M. Lambert

    2015-11-01

    Full Text Available Awareness is rising that the economic effects of land subsidence are large. Nevertheless, quantifying these economic losses is difficult and, as far as is known, has not yet been done in a sophisticated way. Also, to be able to decide about future strategies, for example to avoid or decrease subsidence, it is necessary to know the financial consequences of measures and possible solutions. As a first step towards quantifying these economic effects, a MODFLOW-SCR (coupled MODFLOW-Settlement) model is coupled with the model DAM. Based on the local stratigraphy, the shape and composition of the existing dike or levee, the level of the surface water and the surface level, the macro-stability of the dike is calculated, and, if the dike does not meet the required stability, adaptations are proposed. The model makes it possible to separate the effects caused by sea-level rise from the effects of subsidence. Coupling the DAM model with an economic model to calculate the costs of these adaptations is under development.

  8. Integrated compartmental model for describing the transport of solute in a fractured porous medium. [FRACPORT

    Energy Technology Data Exchange (ETDEWEB)

    DeAngelis, D.L.; Yeh, G.T.; Huff, D.D.

    1984-10-01

    This report documents a model, FRACPORT, that simulates the transport of a solute through a fractured porous matrix. The model should be useful in analyzing the possible transport of radionuclides from shallow-land burial sites in humid environments; its use is restricted to transport through saturated zones. The report first discusses the general modeling approach used, which is based on the Integrated Compartmental Method, and then presents the basic equations of solute transport. The model, which assumes a known water velocity field, solves these equations on two different time scales: one related to rapid transport of solute along fractures and the other related to slower transport through the porous matrix. FRACPORT is validated by application to a simple example of fractured porous medium transport that has previously been analyzed by other methods. Its utility is then demonstrated in analyzing more complex cases of pulses of solute into a fractured matrix. The report serves as a user's guide to FRACPORT. A detailed description of data input, along with a listing of input for a sample problem, is provided. 16 references, 18 figures, 3 tables.

  9. Modified VMD model with correct analytic properties for describing electromagnetic structure of He4 nucleus

    International Nuclear Information System (INIS)

    Dubnicka, S.; Lucan, L.

    1988-12-01

    A new phenomenological model for the electromagnetic (e.m.) form factor (ff) of the He-4 nucleus is presented. It is based on a modification of the vector-meson-dominance (VMD) model, well proven in the e.m. interactions of hadrons, through the incorporation of the correct He-4 ff analytic properties, nonzero vector-meson widths and the right power asymptotic behaviour predicted by the quark model. It reproduces the existing experimental information on the He-4 e.m. ff in the space-like region quite well. Furthermore, the couplings of all well-established isoscalar vector mesons with J^PC = 1^-- to the He-4 nucleus are evaluated as a result of the analysis, and the time-like behaviour of the He-4 e.m. ff is predicted. As a consequence of the latter, the total cross section of the e+ e- → He-4 anti-He-4 process is calculated for the first time. (author). 17 refs, 3 figs

  10. Benchmarking of numerical models describing the dispersion of radionuclides in the Arctic Seas

    DEFF Research Database (Denmark)

    Scott, E.M.; Gurbutt, P.; Harms, I.

    1997-01-01

    As part of the International Arctic Seas Assessment Project (IASAP) of the International Atomic Energy Agency (IAEA), a working group was created to model the dispersal and transfer of radionuclides released from radioactive waste disposed of in the Kara Sea. The objectives of this group are: (1) development of realistic and reliable assessment models for the dispersal of radioactive contaminants both within, and from, the Arctic Ocean; and (2) evaluation of the contributions of different transfer mechanisms to contaminant dispersal and hence, ultimately, to the risks to human health and the environment...

  11. A gauge model describing N relativistic particles bound by linear forces

    International Nuclear Information System (INIS)

    Filippov, A.T.

    1988-01-01

    A relativistic model of N particles bound by linear forces is obtained by applying the gauging procedure to the linear canonical symmetries of a simple (rudimentary) nonrelativistic N-particle Lagrangian extended to relativistic phase space. The new (gauged) Lagrangian is formally Poincare invariant; the Hamiltonian is a linear combination of first-class constraints which are closed with respect to Poisson brackets and generate the localized canonical symmetries. The gauge potentials appear as the Lagrange multipliers of the constraints. Gauge fixing and quantization of the model are also briefly discussed. 11 refs

  12. q-deformed Einstein's model to describe specific heat of solid

    Science.gov (United States)

    Guha, Atanu; Das, Prasanta Kumar

    2018-04-01

    Realistic phenomena can be described more appropriately using a generalized canonical ensemble with the proper parameter sets involved. We have generalized Einstein's theory of the specific heat of solids within Tsallis statistics, where temperature fluctuation is introduced into the theory via the fluctuation parameter q. At low temperature, the Einstein curve of the specific heat in the nonextensive Tsallis scenario lies exactly on the experimental data points, and consequently this q-modified Einstein curve is found to overlap with the one predicted by Debye. Considering only the temperature fluctuation effect (even without considering the triggering of more than one mode of vibration), we found that the CV vs. T curve is as good as that obtained by considering the different modes of vibration suggested by Debye. Generalizing Einstein's theory in Tsallis statistics, we found that a unique value of the Einstein temperature θE, together with a temperature-dependent deformation parameter q(T), can describe the phenomenon of the specific heat of solids well; i.e., the theory is equivalent to Debye's theory with a temperature-dependent θD.
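
    For reference, the undeformed Einstein heat capacity that the paper generalizes is (standard form, with the Einstein temperature θE = ħω/kB and N the number of atoms):

    $$ C_V = 3Nk_B \left(\frac{\theta_E}{T}\right)^{2} \frac{e^{\theta_E/T}}{\left(e^{\theta_E/T}-1\right)^{2}} $$

    This expression falls off exponentially at low T, whereas experiment (and the Debye model) follow a T^3 law; according to the abstract, the q-deformation is what recovers the observed low-temperature behavior.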

  13. Using Rouse-Fowler model to describe radiation-induced electrical conductivity of nanocomposite materials

    Science.gov (United States)

    Dyuryagina, N. S.; Yalovets, A. P.

    2017-05-01

    Using the Rouse-Fowler (RF) model, this work studies the radiation-induced electrical conductivity of a polymer nanocomposite material with spherical nanoparticles as a function of the intensity and exposure time of gamma rays and of the concentration and size of the nanoparticles. The research determined the energy distribution of the localized states induced by the nanoparticles. The studies were conducted on polymethylmethacrylate (PMMA) with CdS nanoparticles.

  14. Kinetic model describing the UV/H2O2 photodegradation of phenol from water

    Directory of Open Access Journals (Sweden)

    Rubio-Clemente Ainhoa

    2017-01-01

    Full Text Available A kinetic model for phenol transformation through the UV/H2O2 system was developed and validated. The model includes pollutant decomposition by direct photolysis and by HO•, HO2• and O2•- oxidation. The HO• scavenging effects of CO3(2-), HCO3(-), SO4(2-) and Cl(-) were also considered, as well as the pH changes as the process proceeds. Additionally, the detrimental action of organic matter and reaction intermediates in shielding UV and quenching HO• was incorporated. The model was observed to accurately predict phenol abatement using different H2O2/phenol mass ratios (495, 228 and 125), with an optimal H2O2/phenol ratio of 125 leading to phenol removal higher than 95% after 40 min of treatment, where the main oxidizing species was HO•. The developed model could be relevant for calculating the optimal level of H2O2 for efficiently degrading the pollutant of interest, allowing savings in cost and time.
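
    A toy pseudo-first-order reduction of such a kinetic model, for intuition only: if the radical pool is quasi-steady, phenol decays as d[phenol]/dt = -k_eff [phenol]. The effective rate constant below is made up, not a value from the paper; the full model tracks many radical and scavenger species explicitly.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_eff = 0.08  # 1/min, illustrative effective rate constant only

def rhs(t, y):
    # Pseudo-first-order phenol decay under a quasi-steady HO* concentration.
    return [-k_eff * y[0]]

sol = solve_ivp(rhs, (0.0, 40.0), [1.0], dense_output=True)
removal = 1.0 - sol.sol(40.0)[0]
print(f"phenol removal after 40 min: {removal:.0%}")  # ~96% with this k_eff
```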

  15. CAN A NANOFLARE MODEL OF EXTREME-ULTRAVIOLET IRRADIANCES DESCRIBE THE HEATING OF THE SOLAR CORONA?

    Energy Technology Data Exchange (ETDEWEB)

    Tajfirouze, E.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of)

    2012-01-10

    Nanoflares, the basic units of impulsive energy release, may produce much of the solar background emission. Extrapolation of the energy frequency distribution of observed microflares, which follows a power law, to lower energies can give an estimate of the importance of nanoflares for heating the solar corona. If the power-law index is greater than 2, then the nanoflare contribution is dominant. We model a time series of extreme-ultraviolet emission radiance as random flares with a power-law exponent of the flare event distribution. The model is based on three key parameters: the flare rate, the flare duration, and the power-law exponent of the flare intensity frequency distribution. We use this model to simulate emission-line radiance detected at 171 Å, observed by the Solar Terrestrial Relations Observatory/Extreme-Ultraviolet Imager and the Solar Dynamics Observatory/Atmospheric Imaging Assembly. The observed light curves are matched with simulated light curves using an artificial neural network, and the parameter values are determined across the active region, quiet Sun, and coronal hole. The damping rate of nanoflares is compared with the radiative-loss cooling time. The effects of background emission, data cadence, and network sensitivity on the key parameters of the model are studied. Most of the observed light curves have a power-law exponent, α, greater than the critical value 2. At these sites, nanoflare heating could be significant.
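
    The critical value of 2 comes from the total-power integral over the flare energy distribution, a standard argument in the solar physics literature (sketched here rather than quoted from the paper):

    $$ P_{\mathrm{tot}} \propto \int_{E_{\min}}^{E_{\max}} E\,\frac{dN}{dE}\,dE, \qquad \frac{dN}{dE} \propto E^{-\alpha} $$

    The integrand scales as E^(1-α), so for α > 2 the integral is dominated by its lower limit and the smallest events (nanoflares) carry most of the energy, whereas for α < 2 the large flares dominate.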

  16. Exponential law as a more compatible model to describe orbits of planetary systems

    Directory of Open Access Journals (Sweden)

    M Saeedi

    2012-12-01

    Full Text Available According to the Titius-Bode law, the orbits of the planets in the solar system obey a geometric progression. Many investigations have been launched to improve this law. In this paper, we apply square and exponential models to the planets of the solar system, to the moons of the planets, and to some extrasolar systems, and compare the models with each other.
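
    A minimal sketch of the exponential model in question: fitting ln(a_n) = ln(a) + b*n to the planets' semi-major axes. The axis values in AU are standard; the paper's data sets and fitted coefficients may differ.

```python
import numpy as np

# Semi-major axes of the eight planets, in astronomical units.
a_au = np.array([0.39, 0.72, 1.00, 1.52, 5.20, 9.58, 19.2, 30.1])
n = np.arange(1, len(a_au) + 1)

# The exponential law a_n = a * exp(b * n) is linear in log space.
b, ln_a = np.polyfit(n, np.log(a_au), 1)
predicted = np.exp(ln_a + b * n)
print("fitted a_n = %.3f * exp(%.3f n)" % (np.exp(ln_a), b))
print("relative errors:", np.round((predicted - a_au) / a_au, 2))
```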

  17. Application of the International Water Association activated sludge models to describe aerobic sludge digestion.

    Science.gov (United States)

    Ghorbani, M; Eskicioglu, C

    2011-12-01

    Batch and semi-continuous flow aerobic digesters were used to stabilize thickened waste-activated sludge at different initial conditions and mean solids retention times. Under dynamic conditions, total suspended solids, volatile suspended solids (VSS) and total and particulate chemical oxygen demand (COD and PCOD) were monitored in the batch reactors and in the effluent from the semi-continuous flow reactors. Activated Sludge Model (ASM) no. 1 and ASM no. 3 were applied to the measured data (calibration data set) to evaluate the consistency and performance of the models at different flow regimes for digester COD and VSS modelling. The results indicated that both ASM1 and ASM3 predicted digester COD, VSS and PCOD concentrations well (R2, adjusted R2 ≥ 0.93). Parameter estimation concluded that, compared to ASM1, the ASM3 parameters were more consistent across different batch and semi-continuous flow runs with different operating conditions. Model validation on a data set independent of the calibration data successfully predicted digester COD (R2 = 0.88) and VSS (R2 = 0.94) concentrations with ASM3, while ASM1 overestimated both reactor COD (R2 = 0.74) and VSS (R2 = 0.79) concentrations after 15 days of aerobic batch digestion.

  18. Predictive model to describe water migration in cellular solid foods during storage

    NARCIS (Netherlands)

    Voogt, J.A.; Hirte, A.; Meinders, M.B.J.

    2011-01-01

    Background: Water migration in cellular solid foods during storage causes loss of crispness. To improve crispness retention, physical understanding of this process is needed. Mathematical models are suitable tools to gain this physical knowledge. Results: Water migration in cellular solid foods...

  19. Atomic-orbital expansion model for describing ion-atom collisions at intermediate and low energies

    International Nuclear Information System (INIS)

    Lin, C.D.; Fritsch, W.

    1983-01-01

    In the description of inelastic processes in ion-atom collisions at moderate energies, the semiclassical close-coupling method is well established as the standard method. Ever since the pioneering work on H+ + H in the early 1960s, the standard procedure has been to expand the electronic wavefunction in terms of molecular orbitals (MO) or atomic orbitals (AO) for describing collisions at low or intermediate velocities, respectively. It has been recognized since the early days that traveling orbitals are needed in the expansions in order to represent the asymptotic states of the collisions correctly. While the adoption of such traveling orbitals presents no conceptual difficulties for expansions using atomic orbitals, the situation for molecular orbitals is less clear. In recent years, various forms of traveling MOs have been proposed, but conflicting results for several well-studied systems have been reported.

  1. A physical model describing the interaction of nuclear transport receptors with FG nucleoporin domain assemblies.

    Science.gov (United States)

    Zahn, Raphael; Osmanović, Dino; Ehret, Severin; Araya Callis, Carolina; Frey, Steffen; Stewart, Murray; You, Changjiang; Görlich, Dirk; Hoogenboom, Bart W; Richter, Ralf P

    2016-04-08

    The permeability barrier of nuclear pore complexes (NPCs) controls bulk nucleocytoplasmic exchange. It consists of nucleoporin domains rich in phenylalanine-glycine motifs (FG domains). As a bottom-up nanoscale model for the permeability barrier, we have used planar films produced with three different end-grafted FG domains, and quantitatively analyzed the binding of two different nuclear transport receptors (NTRs), NTF2 and Importin β, together with the concomitant film thickness changes. NTR binding caused only moderate changes in film thickness; the binding isotherms showed negative cooperativity and could all be mapped onto a single master curve. This universal NTR binding behavior - a key element for the transport selectivity of the NPC - was quantitatively reproduced by a physical model that treats FG domains as regular, flexible polymers, and NTRs as spherical colloids with a homogeneous surface, ignoring the detailed arrangement of interaction sites along FG domains and on the NTR surface.

  2. A stochastic Markov chain model to describe lung cancer growth and metastasis.

    Directory of Open Access Journals (Sweden)

    Paul K Newton

    Full Text Available A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold. Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.

  3. Formulation and integration of constitutive models describing large deformations in thermoplasticity and thermoviscoplasticity

    International Nuclear Information System (INIS)

    Jansohn, W.

    1997-10-01

    This report deals with the formulation and numerical integration of constitutive models in the framework of finite deformation thermomechanics. Based on the concept of dual variables, plasticity and viscoplasticity models exhibiting nonlinear kinematic hardening as well as nonlinear isotropic hardening rules are presented. Care is taken that the evolution equations governing the hardening response fulfill the intrinsic dissipation inequality in every admissible process. In view of the development of an efficient numerical integration procedure, simplified versions of these constitutive models are proposed: the thermoelastic strains are assumed to be small and a simplified kinematic hardening rule is considered. Additionally, in view of an implementation into the ABAQUS finite element code, the elasticity law is approximated by a hypoelasticity law. For the simplified constitutive models, an implicit time-integration algorithm is developed. First, in order to obtain a numerically objective integration scheme, use is made of the Hughes-Winget algorithm. In the resulting system of ordinary differential equations, three differential operators representing different physical effects can be distinguished. The structure of this system of differential equations allows the application of an operator-split scheme, which leads to an efficient integration scheme for the constitutive equations. By linearizing the integration algorithm, the consistent tangent modulus is derived. In this way, the quadratic convergence of Newton's method used to solve the basic finite element equations (i.e. the finite element discretization of the governing thermomechanical field equations) is preserved. The resulting integration scheme is implemented as a user subroutine UMAT in ABAQUS. The properties of the applied algorithm are first examined by test calculations on a single element under tension-compression loading. For demonstrating the capabilities of the constitutive theory...

  4. A stochastic Markov chain model to describe lung cancer growth and metastasis.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter

    2012-01-01

    A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
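
    A minimal sketch of the core computation described above, with a toy 3-site transition matrix standing in for the paper's 50-site ensemble (the entries below are illustrative, not values from the autopsy data):

```python
import numpy as np

# Toy row-stochastic transition matrix over 3 sites (lung, site A, site B).
P = np.array([
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.2, 0.3, 0.5],
])

# Steady-state distribution: the left eigenvector pi with pi P = pi, found
# here by iterating the chain from a distribution concentrated in the lung.
pi = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    pi = pi @ P
pi /= pi.sum()
print(np.round(pi, 4))
```

    The paper's search procedure runs in the opposite direction: it adjusts the entries of P until this long-time distribution matches the metastatic distribution observed in the autopsy data.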

  5. Phenomenological Model Describing the Formation of Peeling Defects on Hot-Rolled Duplex Stainless Steel 2205

    Science.gov (United States)

    Yong-jun, Zhang; Hui, Zhang; Jing-tao, Han

    2017-05-01

    The chemical composition, morphology, and microstructure of peeling defects formed on the surface of sheets of steel 2205 during hot rolling are studied, with the microstructure of the surface analyzed using scanning electron and light microscopy. The affected zones are shown to contain nonmetallic inclusions of the Al2O3 and CaO-SiO2-Al2O3-MgO types in the form of streak precipitates and to have an unfavorable austenite content, which causes a decrease in the ductility of the area. The results obtained are used to derive a five-stage phenomenological model of the formation of such defects.

  6. Application of a non-equilibrium reaction model for describing horizontal well performance in foamy oil

    Energy Technology Data Exchange (ETDEWEB)

    Luigi, A.; Saputelli, B.; Carlas, M.; Canache, P.; Lopez, E. [DPVS Exploracion y Produccion (Venezuela)

    1998-12-31

    This study was designed to determine the ranges of activation energy and frequency factor for the chemical reactions in heavy oils of the Orinoco Belt in Venezuela, in order to account for the kinetics of the physical changes that occur in the morphology of the gas-oil dispersion. A non-equilibrium reaction model was used to model the foamy oil behaviour observed at the SDZ-182 horizontal well in the Zuata field. Results showed that the activation energy for the first reaction ranged from 0 to 0.01 BTU/lb-mol and the frequency factor from 0.001 to 1000 1/day. For the second reaction, the activation energy was 50x10(3) BTU/lb-mol and the frequency factor 2.75x10(12) 1/day. The second reaction was highly sensitive to modifications in activation energy and frequency factor, whereas the first reaction was insensitive to variations in both. In the case of the activation energy, the results showed that the high sensitivity of this parameter reflects the impact that temperature has on the representation of foamy oil behaviour. 8 refs., 2 tabs., 6 figs.

  7. SPATIAL MODELLING FOR DESCRIBING SPATIAL VARIABILITY OF SOIL PHYSICAL PROPERTIES IN EASTERN CROATIA

    Directory of Open Access Journals (Sweden)

    Igor Bogunović

    2016-06-01

    Full Text Available The objectives of this study were to characterize the field-scale spatial variability and to test several interpolation methods to identify the best spatial predictor of penetration resistance (PR), bulk density (BD) and gravimetric water content (GWC) in a silty loam soil in Eastern Croatia. The measurements were made on a 25 x 25-m grid, which created 40 individual grid cells; soil properties were measured at the center of each grid cell at depths of 0-10 cm and 10-20 cm. Results demonstrated that PR, GWC and BD displayed strong spatial dependence at 0-10 cm, while there was moderate to weak spatial dependence of PR, BD and GWC at a depth of 10-20 cm. Semi-variogram analysis suggests that future sampling intervals for the investigated parameters can be increased to 35 m in order to reduce research costs. Additionally, the interpolation models recorded similar root mean square values with high predictive accuracy. The results suggest that the investigated properties do not share a uniform interpolation method, implying the need for spatial modelling in the evaluation of these soil properties in Eastern Croatia.

  8. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome; usually it is only if the numerical solutions "look odd" that we start delving deeper into the mathematics. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, to point out some of the dangers and risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of...
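
    For reference, the Drucker-Prager yield condition discussed above is usually written in terms of stress invariants (standard form; the presenters' exact regularization is not given in the abstract):

    $$ \sqrt{J_2} = A + B\,I_1 $$

    where I_1 is the first invariant of the stress tensor, J_2 the second invariant of its deviatoric part, and A and B are material constants related to the cohesion and friction angle.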

  9. Describing model of empowering managers by applying structural equation modeling: A case study of universities in Ardabil

    Directory of Open Access Journals (Sweden)

    Maryam Ghahremani Germi

    2015-06-01

    Full Text Available Empowerment is still on the agenda as a management concept and has become a widely used management term over the last decade or so. The purpose of this research was to describe a model of empowering managers, by applying structural equation modeling (SEM), at the universities of Ardabil. Two hundred and twenty managers of Ardabil universities, including chancellors, managers, and vice presidents for education, research, and studies, participated in this study. Clear and challenging goals, evaluation of function, access to resources, and rewarding were investigated. The results indicated that the SEM designed for empowering managers at the universities reflects a good level of fit, meaning that the conceptual model was appropriate for the population under investigation. Among the variables, access to resources, with a factor loading of 88 percent, was the most influential; evaluation of function, with a factor loading of 51 percent, was recognized as having the least effect. The average-rating results show that evaluation of function and access to resources, with coefficients of 2.62, ranked first and, accordingly, had a great impact on managers' empowerment. The results of the analysis provided compelling evidence that the model of empowering managers was suitable for the universities of Ardabil.

  10. Computer models for optimizing radiation therapy

    International Nuclear Information System (INIS)

    Duechting, W.

    1998-01-01

    The aim of this contribution is to outline how methods of systems analysis, control theory and modelling can be applied to simulate normal and malignant cell growth and to optimize cancer treatment, for instance radiation therapy. Based on biological observations and cell kinetic data, several types of models have been developed describing the growth of tumor spheroids and the cell renewal of normal tissue. The irradiation model is represented by the so-called linear-quadratic model, which describes the survival fraction as a function of the dose. Based thereon, numerous simulation runs for different treatment schemes can be performed, making it possible to study the radiation effect on tumor and normal tissue separately. Finally, this method enables a computer-assisted recommendation for an optimal patient-specific treatment schedule prior to clinical therapy. (orig.) [de]
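
    The linear-quadratic model mentioned here has a compact closed form, S(D) = exp(-(αD + βD²)); a minimal sketch follows, with α and β values that are purely illustrative rather than tissue-specific recommendations:

```python
import math

def lq_survival(dose_gy: float, alpha: float, beta: float) -> float:
    """Linear-quadratic model: surviving fraction after a single dose D (Gy)."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy**2))

# Illustrative parameters only (alpha in 1/Gy, beta in 1/Gy^2).
alpha, beta = 0.3, 0.03
for d in (1.0, 2.0, 5.0):
    print(f"D = {d} Gy -> surviving fraction {lq_survival(d, alpha, beta):.3f}")
```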

  11. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    Science.gov (United States)

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued-use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and of adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, the 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012 = 21.83, P<.001) ... digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive...

  12. A nonlinear beam model to describe the postbuckling of wide neo-Hookean beams

    Science.gov (United States)

    Lubbers, Luuk A.; van Hecke, Martin; Coulais, Corentin

    2017-09-01

    Wide beams can exhibit subcritical buckling, i.e. the slope of the force-displacement curve can become negative in the postbuckling regime. In this paper, we capture this intriguing behaviour by constructing a 1D nonlinear beam model, in which the central ingredient is the nonlinearity of the stress-strain relation of the beam's constitutive material. First, we present experimental and numerical evidence of a transition to subcritical buckling for wide neo-Hookean hyperelastic beams when their width-to-length ratio exceeds a critical value of 12%. Second, we construct an effective 1D energy density by combining the Mindlin-Reissner kinematics with a nonlinearity in the stress-strain relation. Finally, we establish and solve the governing beam equations to analytically determine the slope of the force-displacement curve in the postbuckling regime. We find, without any adjustable parameters, excellent agreement between the 1D theory, experiments and simulations. Our work extends the understanding of the postbuckling of structures made of wide elastic beams and opens up avenues for the reverse-engineering of instabilities in soft materials and metamaterials.

  13. The modulation of galactic cosmic rays as described by a three-dimensional drift model

    International Nuclear Information System (INIS)

    Potgieter, M.S.

    1984-01-01

    An outline of the present state of knowledge about the effect of drift on the modulation of galactic cosmic rays is given, and various observations related to the reversal of the solar magnetic field polarity are discussed. Comprehensive numerical solutions of the steady-state cosmic-ray transport equation in an axially symmetric three-dimensional heliosphere, including drift, are presented. This is an extension of the continuing effort of the past six years to understand the effect and importance of drift on the transport of galactic cosmic rays in the heliosphere. A flat neutral sheet coinciding with the equatorial plane is assumed, and a general method of calculating the drift velocity in the neutral sheet, including that used previously by other authors, is presented. The effects of changing various modulation parameters on the drift solutions are illustrated in detail, and the real significance of drift is illustrated by using Gaussian input spectra on the modulation boundary. A carefully selected set of modulation parameters is used to illustrate to what extent a drift model can explain prominent observational features. It is concluded that drift is important in the process of cosmic-ray transport and must as such be considered in all modulation studies, but that it is not as overwhelmingly dominant as previously anticipated.

  14. Multilevel regression models describing regional patterns of invertebrate and algal responses to urbanization across the USA

    Science.gov (United States)

    Cuffney, T.F.; Kashuba, R.; Qian, S.S.; Alameddine, I.; Cha, Y.K.; Lee, B.; Coles, J.F.; McMahon, G.

    2011-01-01

    Multilevel hierarchical regression was used to examine regional patterns in the responses of benthic macroinvertebrates and algae to urbanization across 9 metropolitan areas of the conterminous USA. Linear regressions established that responses (intercepts and slopes) to urbanization of invertebrates and algae varied among metropolitan areas. Multilevel hierarchical regression models were able to explain these differences on the basis of region-scale predictors. Regional differences in the type of land cover (agriculture or forest) being converted to urban and climatic factors (precipitation and air temperature) accounted for the differences in the response of macroinvertebrates to urbanization based on ordination scores, total richness, Ephemeroptera, Plecoptera, Trichoptera richness, and average tolerance. Regional differences in climate and antecedent agriculture also accounted for differences in the responses of salt-tolerant diatoms, but differences in the responses of other diatom metrics (% eutraphenic, % sensitive, and % silt tolerant) were best explained by regional differences in soils (mean % clay soils). The effects of urbanization were most readily detected in regions where forest lands were being converted to urban land because agricultural development significantly degraded assemblages before urbanization and made detection of urban effects difficult. The effects of climatic factors (temperature, precipitation) on background conditions (biogeographic differences) and rates of response to urbanization were most apparent after accounting for the effects of agricultural development. The effects of climate and land cover on responses to urbanization provide strong evidence that monitoring, mitigation, and restoration efforts must be tailored for specific regions and that attainment goals (background conditions) may not be possible in regions with high levels of prior disturbance (e.g., agricultural development).
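
    As an illustration of the approach (not the authors' code), a multilevel model with metro-area random intercepts and urbanization slopes, plus a region-scale predictor that can explain slope differences, might be sketched in Python with statsmodels; the data file and all column names are hypothetical:

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical site-level data: invertebrate response (EPT richness),
      # urbanization score, region-scale precipitation, metro-area label.
      df = pd.read_csv("sites.csv")  # columns: ept, urban, precip, metro

      # Random intercept and urban slope per metro area; the urban:precip
      # interaction lets a region-scale predictor explain slope differences.
      model = smf.mixedlm("ept ~ urban * precip", df,
                          groups=df["metro"], re_formula="~urban")
      print(model.fit().summary())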

  15. A holistic conceptual framework model to describe medication adherence in, and guide interventions for, diabetes mellitus.

    Science.gov (United States)

    Jaam, Myriam; Awaisu, Ahmed; Mohamed Ibrahim, Mohamed Izham; Kheir, Nadir

    2018-04-01

    Nonadherence to medications in patients with diabetes, which results in poor treatment outcomes and increased healthcare costs, is commonly reported globally. Factors associated with medication adherence have also been widely studied. However, a clear and comprehensive, disease-specific conceptual framework model that captures all possible factors has not been established. This study aimed to develop a conceptual framework that addresses the complex network of barriers to medication adherence in patients with diabetes. Fourteen databases and grey literature sources were systematically searched for systematic reviews reporting barriers to medication adherence in patients with diabetes. A thematic approach was used to categorize all identified barriers from the reviews and to create a matrix representing the complex network and relations of the different barriers. Eighteen systematic reviews were identified and used for the development of the conceptual framework. Overall, six major themes emerged: patient-, medication-, disease-, provider-, system-, and societal-related factors. Each of these themes was further classified into different subcategories. Most interactions were identified within the patient-related factors, which interact not only with other themes but also within the same theme. Patients' demographics as well as cultural beliefs were the most notable factors in terms of interactions with other categories and themes. The intricate network and interaction of factors identified between different themes and within individual themes indicate the complexity of the problem of adherence. This framework will potentially enhance the understanding of the complex relations between different barriers to medication adherence in diabetes and will facilitate the design of more effective interventions. Future interventions for enhancing medication adherence should look at the overall factors and target multiple themes of barriers to improve patient adherence.

  16. Computational modeling of concrete flow

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic

    2007-01-01

    particle flow, and numerical techniques allowing the modeling of particles suspended in a fluid. The general concept behind each family of techniques is described. Pros and cons for each technique are given along with examples and references to applications to fresh cementitious materials....

  17. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...

  18. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  19. The deterministic computational modelling of radioactivity

    International Nuclear Information System (INIS)

    Damasceno, Ralf M.; Barros, Ricardo C.

    2009-01-01

    This paper describes a computational application (software) that models simple radioactive decay, the decay to stable nuclei, and directly coupled decay chains of up to thirteen radioactive decays, together with an internal data bank holding the decay constants of the various known decays. This considerably facilitates use of the program by people who are not connected to the nuclear area or have no specialist knowledge of it. The paper presents numerical results for typical model problems.
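
    The chain-decay computation such a program performs amounts to integrating the Bateman equations. A minimal sketch for a hypothetical three-member chain A -> B -> C with illustrative decay constants (the actual program handles chains of up to thirteen decays and supplies constants from its data bank):

      import numpy as np
      from scipy.integrate import solve_ivp

      lam = np.array([1e-2, 5e-3, 0.0])   # decay constants (1/s); C is stable

      def chain(t, N):
          dN = -lam * N                   # each nuclide decays away...
          dN[1:] += lam[:-1] * N[:-1]     # ...and feeds its daughter
          return dN

      sol = solve_ivp(chain, (0.0, 1000.0), [1e6, 0.0, 0.0])
      print(sol.y[:, -1])                 # populations after 1000 s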

  20. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system, and environmental characteristics, using the identified objectives of computing, which can be used on any platform and any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  1. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India, during December 17-19, 2015. The book presents innovations in the broad research areas of computational modeling, computational intelligence, and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing design, analysis, and modeling of the aforementioned key areas.

  2. Semimechanistic model describing gastric emptying and glucose absorption in healthy subjects and patients with type 2 diabetes

    DEFF Research Database (Denmark)

    Alskär, Oskar; Bagger, Jonatan I; Røge, Rikke M.

    2016-01-01

    The integrated glucose-insulin (IGI) model is a previously published semimechanistic model that describes plasma glucose and insulin concentrations after glucose challenges. The aim of this work was to use knowledge of physiology to improve the IGI model's description of glucose absorption and gastric emptying after tests with varying glucose doses. The developed model's performance was compared to empirical models. To develop our model, data from oral and intravenous glucose challenges in patients with type 2 diabetes and healthy control subjects were used together with present knowledge ... glucose absorption was superior to linear absorption regardless of the gastric emptying model applied. The semiphysiological model developed performed better than previously published empirical models and allows better understanding of the mechanisms underlying glucose absorption. In conclusion, our new model provides a better description and improves the understanding of dynamic glucose tests involving oral glucose.

  3. A standardised graphic method for describing data privacy frameworks in primary care research using a flexible zone model.

    NARCIS (Netherlands)

    Kuchinke, W.; Ohmann, C.; Verheij, R.A.; Veen, E.B. van; Arvanitis, T.N.; Taweel, A.; Delaney, B.C.

    2014-01-01

    Purpose: To develop a model describing core concepts and principles of data flow, data privacy and confidentiality, in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data privacy frameworks for research done with patient data.

  4. A Comparison of Models Describing Heat Transfer in the Primary Cooling Zone of a Continuous Casting Machine

    Directory of Open Access Journals (Sweden)

    Miłkowska-Piszczek K.

    2015-04-01

    This paper presents the findings of research concerning the determination of thermal boundary conditions for the steel continuous casting process within the primary cooling zone. A cast slab with dimensions of 1100 mm × 220 mm was analysed, and models described in the literature were compared with the authors' model. The presented models were verified against an industrial database. The research problem was solved with the finite element method using the ProCAST software package.

  5. Computer models for economic and silvicultural decisions

    Science.gov (United States)

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decision-making in managing forest ecosystems. We now have computer models that support forest management decisions by predicting the changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  6. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
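
    To make the MILP framing concrete, here is a minimal sketch in the spirit of one case-study flavour (resource allocation), using the PuLP modelling library; the instance data are invented for illustration and are not taken from the book:

      from pulp import LpProblem, LpVariable, LpMaximize, lpSum

      tasks, tiles = range(4), range(2)
      profit = [[3, 2], [5, 4], [2, 6], [4, 1]]   # profit[task][tile] (assumed)
      cost = [2, 3, 1, 2]                          # resource use per task
      cap = [4, 4]                                 # capacity per tile

      prob = LpProblem("resource_allocation", LpMaximize)
      x = {(i, j): LpVariable(f"x_{i}_{j}", cat="Binary")
           for i in tasks for j in tiles}
      prob += lpSum(profit[i][j] * x[i, j] for i in tasks for j in tiles)
      for i in tasks:                              # each task placed at most once
          prob += lpSum(x[i, j] for j in tiles) <= 1
      for j in tiles:                              # respect tile capacity
          prob += lpSum(cost[i] * x[i, j] for i in tasks) <= cap[j]

      prob.solve()
      print([(i, j) for (i, j), v in x.items() if v.value() == 1])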

  7. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
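
    For orientation, one standard form of the MNSE coupling linear velocity u, angular velocity w and pressure p is sketched below in LaTeX; the coefficient notation follows the micropolar-fluids literature and may differ from the dissertation's:

      % Micropolar Navier-Stokes equations (one standard form);
      % nu, nu_r, c_a, c_d, c_0 are viscosity coefficients, j the microinertia.
      \begin{aligned}
      \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
        - (\nu + \nu_r)\Delta\mathbf{u} + \nabla p
        &= 2\nu_r\,\nabla\times\mathbf{w} + \mathbf{f},
        \qquad \nabla\cdot\mathbf{u} = 0,\\
      j\,(\partial_t \mathbf{w} + (\mathbf{u}\cdot\nabla)\mathbf{w})
        - (c_a + c_d)\Delta\mathbf{w}
        - (c_0 + c_d - c_a)\nabla(\nabla\cdot\mathbf{w})
        + 4\nu_r\,\mathbf{w}
        &= 2\nu_r\,\nabla\times\mathbf{u} + \mathbf{g}.
      \end{aligned}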

  8. BLOW-3A. A theoretical model to describe transient two-phase flow conditions in LMFBR coolant channels

    International Nuclear Information System (INIS)

    Bottoni, M.; Struwe, D.

    1982-12-01

    The computer programme BLOW-3A describes sodium boiling phenomena in subassemblies of fast breeder reactors as well as in in-pile or out-of-pile experiments simulating different failure conditions. This report presents a complete documentation of the code from three main viewpoints: the theoretical foundations of the programme are first described, with particular reference to the most recent developments; the structure of the programme is then explained in all the detail necessary for the user to become rapidly acquainted with it; finally, several examples of the programme's validation are discussed, enabling the reader to acquire a full picture of the possible applications of the code and, at the same time, to know its validity range. (orig.) [de

  9. Toward a computational model of hemostasis

    Science.gov (United States)

    Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith

    2017-11-01

    Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depend on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally validated, computational model of extravascular clot formation (hemostasis), in which blood flowing through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous, in vitro, microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.

  10. Computational modeling of a forward lunge

    DEFF Research Database (Denmark)

    Alkjær, Tine; Wieland, Maja Rose; Andersen, Michael Skipper

    2012-01-01

    ... during forward lunging. Thus, the purpose of the present study was to establish a musculoskeletal model of the forward lunge to computationally investigate the complete mechanical force equilibrium of the tibia during the movement to examine the loading pattern of the cruciate ligaments. A healthy female was selected from a group of healthy subjects who all performed a forward lunge on a force platform, targeting a knee flexion angle of 90°. Skin-markers were placed on anatomical landmarks on the subject and the movement was recorded by five video cameras. The three-dimensional kinematic data describing the forward lunge movement were extracted and used to develop a biomechanical model of the lunge movement. The model comprised two legs including femur, crus, rigid foot segments and the pelvis. Each leg had 35 independent muscle units, which were recruited according to a minimum fatigue criterion ...

  11. Validation of a Simulation Model Describing the Glucose-Insulin-Glucagon Pharmacodynamics in Patients with Type 1 Diabetes

    DEFF Research Database (Denmark)

    Wendt, Sabrina Lyngbye; Ranjan, Ajenthen; Møller, Jan Kloppenborg

    Currently, no consensus exists on a model describing endogenous glucose production (EGP) as a function of glucagon concentrations. Reliable simulations to determine the glucagon dose preventing or treating hypoglycemia or to tune a dual-hormone artificial pancreas control algorithm need a validat...

  12. Biological Effectiveness and Application of Heavy Ions in Radiation Therapy Described by a Physical and Biological Model

    DEFF Research Database (Denmark)

    Olsen, Kjeld J.; Hansen, Johnny W.

    ... is inadequately described by an RBE-factor, whereas the complete formulation of the probability of survival must be used, as survival depends on both radiation quality and dose. The theoretical model of track structure can be used in dose-effect calculations for neutron-, high-LET, and low-LET radiation applied simultaneously in therapy.

  13. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only input information on CCN required by these parameterizations is a few values of the CCN spectrum (as given by a CCN counter, for example). This is more convenient than conventional parameterizations, which need quantities derived from the full CCN spectrum, such as C and k in the equation N = C S^k, or the breadth, total number, and median radius. The new parameterizations' predictions of the initial cloud droplet size distribution for the bin method were verified using the aforementioned hybrid microphysical model. The newly developed parameterizations will save computing time and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. The parameterizations are useful not only for the bin method in regional cloud-resolving models but also for a two-moment bulk microphysical model and
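
    The conventional activation spectrum N = C S^k mentioned above (Twomey's form) is simple to evaluate; the new parameterizations replace its fitted C and k with a few directly measured points of the CCN spectrum. A minimal sketch with illustrative constants:

      # Conventional CCN activation spectrum N = C * S**k (Twomey's form);
      # S is supersaturation in percent, C and k values are illustrative only.
      def activated_ccn(S, C=100.0, k=0.5):
          """Number of activated CCN (cm^-3) at supersaturation S (%)."""
          return C * S ** k

      for S in (0.1, 0.2, 0.5, 1.0):
          print(f"S = {S:4.1f} %  ->  N = {activated_ccn(S):6.1f} cm^-3")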

  14. The emerging role of cloud computing in molecular modelling.

    Science.gov (United States)

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways.

  15. A standardised graphic method for describing data privacy frameworks in primary care research using a flexible zone model.

    Science.gov (United States)

    Kuchinke, Wolfgang; Ohmann, Christian; Verheij, Robert A; van Veen, Evert-Ben; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C

    2014-12-01

    To develop a model describing core concepts and principles of data flow, data privacy and confidentiality, in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data privacy frameworks for research done with patient data. Based on an exploration of EU legal requirements for data protection and privacy, data access policies, and existing privacy frameworks of research projects, basic concepts and common processes were extracted, described and incorporated into a model with a formal graphical representation and a standardised notation. The Unified Modelling Language (UML) notation was enriched with workflow symbols and custom symbols to enable the representation of extended data flow requirements, data privacy and data security requirements, and privacy enhancing techniques (PET), and to allow privacy threat analysis for research scenarios. Our model is built upon the concept of three privacy zones (Care Zone, Non-care Zone and Research Zone) containing databases and data transformation operators, such as data linkers and privacy filters. Using these model components, a risk gradient for moving data from a zone of high risk for patient identification to a zone of low risk can be described. The model was applied to the analysis of data flows in several general clinical research use cases and two research scenarios from the TRANSFoRm project (e.g., finding patients for clinical research and linkage of databases). The model was validated by representing research done with the NIVEL Primary Care Database in the Netherlands. The model allows analysis of data privacy and confidentiality issues for research with patient data in a structured way and provides a framework to specify a privacy-compliant data flow, to communicate privacy requirements, and to identify weak points for an adequate implementation of data privacy.

  16. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

    MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned. That volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  17. KANDY - a numerical model to describe phenomena which may occur in a heated and voided fuel element of an LMFBR

    International Nuclear Information System (INIS)

    Thurnay, K.

    1984-02-01

    KANDY is a model developed to describe the essential destruction phenomena of the fuel elements of an LMFBR. The fuel element is assumed to be voided, with heat generation still going on. The main process to be modeled is the melting/bursting/evaporating of parts of the fuel pins and the subsequent dislocation of these materials in the coolant channel. The work presented summarizes the assumptions constituting the model, develops the corresponding equations of motion, and describes the procedure turning these into a system of difference equations ready for coding. As a final part, results of a test-case calculation with the KANDY code are presented and interpreted. (orig.) [de

  18. A new model describing the curves for repair of both DNA double-strand breaks and chromosome damage

    International Nuclear Information System (INIS)

    Foray, N.; Badie, C.; Alsbeih, G.; Malaise, E.P.; Fertil, B.

    1996-01-01

    A review of reports dealing with fits of the data for repair of DNA double-strand breaks (DSBs) and excess chromosome fragments (ECFs) shows that several models are used to fit the repair curves. Since DSBs and ECFs are correlated, it is worth developing a model describing both phenomena. The curve-fitting models used most extensively, the two repair half-times model for DSBs and the monoexponential plus residual model for ECFs, appear to be too inflexible to describe the repair curves for both DSBs and ECFs. We have therefore developed a new concept based on a variable repair half-time. According to this concept, the repair curve is continuously bending and dependent on time, and probably reflects a continuous spectrum of damage repairability. The fits of the curves for DSB repair to the variable repair half-time and the variable repair half-time plus residual models were compared to those obtained with the two half-times plus residual and two half-times models. Similarly, the fits of the curves for ECF repair to the variable repair half-time and variable half-time plus residual models were compared to that obtained with the monoexponential plus residual model. The quality of fit and the dependence of adjustable parameters on the portion of the curve fitted were used as comparison criteria. We found that: (a) it is useful to postulate the existence of a residual term for unrepairable lesions, regardless of the model adopted; (b) with the two cell lines tested (a normal and a hypersensitive one), data for both DSBs and ECFs are best fitted to the variable repair half-time plus residual model, whatever the repair time range. 47 refs., 3 figs., 3 tabs
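
    The two conventional curve-fitting models named above are easy to state and fit. A minimal sketch with hypothetical rejoining data (fraction of damage remaining versus repair time); it does not reproduce the authors' variable half-time model:

      import numpy as np
      from scipy.optimize import curve_fit

      def mono_plus_residual(t, A, T, R):
          """Monoexponential with half-time T plus unrepairable residual R."""
          return A * np.exp(-np.log(2) * t / T) + R

      def two_halftimes_plus_residual(t, A1, T1, A2, T2, R):
          """Two repair half-times plus residual."""
          return (A1 * np.exp(-np.log(2) * t / T1)
                  + A2 * np.exp(-np.log(2) * t / T2) + R)

      t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])       # hours
      y = np.array([1.0, 0.72, 0.55, 0.38, 0.24, 0.15, 0.08])  # assumed data

      p1, _ = curve_fit(mono_plus_residual, t, y, p0=[0.9, 1.0, 0.1])
      p2, _ = curve_fit(two_halftimes_plus_residual, t, y,
                        p0=[0.5, 0.3, 0.4, 3.0, 0.1], maxfev=10000)
      print("mono + residual:", p1)
      print("two half-times + residual:", p2)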

  19. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market is much bigger than the community of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  20. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  1. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    International Nuclear Information System (INIS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-01-01

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in the problems of adjusting parametric models of observables depending on kinematic variables is presented. The capabilities for displaying large numbers of experimental data points and the models describing them are shown with examples of data and models of observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, with the parameters of the best description of the data, are schematically shown. The DaMoScope codes are freely available.

  2. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Science.gov (United States)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in the problems of adjusting parametric models of observables depending on kinematic variables is presented. The capabilities for displaying large numbers of experimental data points and the models describing them are shown with examples of data and models of observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, with the parameters of the best description of the data, are schematically shown. The DaMoScope codes are freely available.

  3. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using the dynamic atlas of experimental data and mathematical models of their description in the problems of adjusting parametric models of observables depending on kinematic variables is presented. The capabilities for displaying large numbers of experimental data points and the models describing them are shown with examples of data and models of observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, with the parameters of the best description of the data, are schematically shown. The DaMoScope codes are freely available.

  4. Computer modeling of the gyrocon

    International Nuclear Information System (INIS)

    Tallerico, P.J.; Rankin, J.E.

    1979-01-01

    A gyrocon computer model is discussed in which the electron beam is followed from the gun output to the collector region. The initial beam may be selected either as a uniform circular beam or may be taken from the output of an electron gun simulated by the program of William Herrmannsfeldt. The fully relativistic equations of motion are then integrated numerically to follow the beam successively through a drift tunnel, a cylindrical rf beam deflection cavity, a combination drift space and magnetic bender region, and an output rf cavity. The parameters for each region are variable input data from a control file. The program calculates power losses in the cavity wall, power required by beam loading, power transferred from the beam to the output cavity fields, and electronic and overall efficiency. Space-charge effects are approximated if selected. Graphical displays of beam motions are produced. We discuss the Los Alamos Scientific Laboratory (LASL) prototype design as an example of code usage. The design shows a gyrocon of about two-thirds megawatt output at 450 MHz with up to 86% overall efficiency.
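
    The core numerical task described here is integrating the fully relativistic equation of motion dp/dt = q(E + v x B) with v = p/(gamma m). A minimal sketch for an electron in a uniform magnetic field (the fields, initial momentum, and time span are illustrative, not the gyrocon's cavity fields):

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.constants import e, m_e, c

      B = np.array([0.0, 0.0, 0.1])            # tesla (assumed)
      E = np.zeros(3)                           # no electric field here

      def rhs(t, p):
          gamma = np.sqrt(1.0 + np.dot(p, p) / (m_e * c) ** 2)
          v = p / (gamma * m_e)
          return -e * (E + np.cross(v, B))      # electron charge is -e

      sol = solve_ivp(rhs, (0.0, 1e-9), [1e-22, 0.0, 0.0], max_step=1e-12)
      print("final momentum (kg m/s):", sol.y[:, -1])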

  5. A New Model for Describing the Rheological Behavior of Heavy and Extra Heavy Crude Oils in the Presence of Nanoparticles

    Directory of Open Access Journals (Sweden)

    Esteban A. Taborda

    2017-12-01

    The present work proposes, for the first time, a mathematical model for describing the rheological behavior of heavy and extra-heavy crude oils in the presence of nanoparticles. This model results from the combination of two existing mathematical models. The first applies to the rheology of pseudoplastic substances, i.e., the Herschel-Bulkley model. The second was previously developed by our research group to model the rheology of suspensions, namely the modified Pal and Rhodes model. The proposed model is applied to heavy and extra-heavy crude oils in the presence of nanoparticles, considering the effects of nanoparticle concentration and surface chemical nature, temperature, and crude oil type. All the experimental data evaluated were fitted with compelling goodness of fit, and the physical parameters in the model correlate well with variations in viscosity. The new model is shear-rate dependent and opens new possibilities for phenomenologically understanding viscosity reduction in heavy crudes upon adding solid nanoparticles, favoring scale-up in enhanced oil recovery (EOR) and/or improved oil recovery (IOR) processes.
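
    The Herschel-Bulkley ingredient of the proposed model is a standard constitutive law: shear stress tau = tau0 + K * gamma_dot**n. A minimal sketch with illustrative parameter values (the full model additionally couples this with the modified Pal and Rhodes suspension term, which is not reproduced here):

      # Herschel-Bulkley law: yield stress tau0, consistency K, flow index n.
      # Parameter values are illustrative, not fitted to any crude oil.
      def herschel_bulkley(gamma_dot, tau0=20.0, K=5.0, n=0.8):
          """Shear stress (Pa) at shear rate gamma_dot (1/s)."""
          return tau0 + K * gamma_dot ** n

      # Apparent viscosity tau/gamma_dot falls with shear rate: the
      # shear-thinning behaviour typical of heavy crude oils.
      for g in (0.1, 1.0, 10.0, 100.0):
          tau = herschel_bulkley(g)
          print(f"{g:6.1f} 1/s: tau = {tau:7.1f} Pa, mu = {tau / g:8.2f} Pa.s")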

  6. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  7. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  8. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  9. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  10. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  11. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Based on the facts that a computer can be infected by infected or exposed computers, and that some computers in susceptible or exposed status can acquire immunity through antivirus protection, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of computer viruses on the internet, is determined. Second, this model has a virus-free equilibrium P0, at which the infected part of the computers disappears and the virus dies out; P0 is a globally asymptotically stable equilibrium if R0 < 1. If R0 > 1, this model has a unique viral equilibrium P*, at which the infection persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
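
    A minimal sketch of an SEIR-type virus model in the spirit of the one above, with susceptible (S), exposed (E), infected (I) and recovered/immune (R) computers; the antivirus term alpha removes machines from both S and E, as the abstract describes. All rate constants are illustrative:

      from scipy.integrate import solve_ivp

      beta, sigma, gamma, alpha = 0.5, 0.2, 0.1, 0.05   # assumed rates

      def seir(t, y):
          S, E, I, R = y
          N = S + E + I + R
          dS = -beta * S * (E + I) / N - alpha * S      # infection + antivirus
          dE = beta * S * (E + I) / N - sigma * E - alpha * E
          dI = sigma * E - gamma * I
          dR = gamma * I + alpha * (S + E)              # cured or immunized
          return [dS, dE, dI, dR]

      sol = solve_ivp(seir, (0.0, 200.0), [0.99, 0.0, 0.01, 0.0])
      print("final infected fraction:", sol.y[2, -1])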

  12. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    Energy Technology Data Exchange (ETDEWEB)

    Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-20

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel IO to 32 bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can be currently run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  13. Evaluation and Validation of a TCAT Model to Describe Non-Dilute Flow and Species Transport in Porous Media

    Science.gov (United States)

    Weigand, T. M.; Harrison, E.; Miller, C. T.

    2017-12-01

    A thermodynamically constrained averaging theory (TCAT) model has been developed to simulate non-dilute flow and species transport in porous media. This model has the advantages of a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; the explicit inclusion of dissipative terms that arise from spatial gradients in pressure and chemical activity; and the ability to describe both high and low concentration displacement. The TCAT model has previously been shown to provide excellent agreement for a set of laboratory data and outperformed existing macroscale models that have been used for non-dilute flow and transport. The examined experimental dataset consisted of stable brine displacements for a large range of fluid properties. This dataset, however, only examined one type of porous medium and used a fixed flow rate for all experiments. In this work, the TCAT model is applied to a dataset that consists of two different porous media types, constant head and flow rate conditions, varying resident fluid concentrations, and internal probes that measured the pressure and salt mass fraction. Parameter estimation is performed on a subset of the experimental data for the TCAT model as well as other existing non-dilute flow and transport models. The optimized parameters are then used for forward simulations and the accuracy of the models is compared.

  14. Users' guide to system dynamics model describing Coho salmon survival in Olema Creek, Point Reyes National Seashore, Marin County, California

    Science.gov (United States)

    Woodward, Andrea; Torregrosa, Alicia; Madej, Mary Ann; Reichmuth, Michael; Fong, Darren

    2014-01-01

    The system dynamics model described in this report is the result of a collaboration between U.S. Geological Survey (USGS) scientists and National Park Service (NPS) San Francisco Bay Area Network (SFAN) staff, whose goal was to develop a methodology to integrate inventory and monitoring data to better understand ecosystem dynamics and trends using salmon in Olema Creek, Marin County, California, as an example case. The SFAN began monitoring multiple life stages of coho salmon (Oncorhynchus kisutch) in Olema Creek during 2003 (Carlisle and others, 2013), building on previous monitoring of spawning fish and redds. They initiated water-quality and habitat monitoring, and had access to flow and weather data from other sources. This system dynamics model of the freshwater portion of the coho salmon life cycle in Olema Creek integrated 8 years of existing monitoring data, literature values, and expert opinion to investigate potential factors limiting survival and production, identify data gaps, and improve monitoring and restoration prescriptions. A system dynamics model is particularly effective when (1) data are insufficient in time series length and/or measured parameters for a statistical or mechanistic model, and (2) the model must be easily accessible by users who are not modelers. These characteristics helped us meet the following overarching goals for this model: (1) summarize and synthesize NPS monitoring data with data and information from other sources to describe factors and processes affecting freshwater survival of coho salmon in Olema Creek; and (2) provide a model that can be easily manipulated to experiment with alternative values of model parameters and novel scenarios of environmental drivers. Although the model describes the ecological dynamics of Olema Creek, these dynamics are structurally similar to those of numerous other coastal streams along the California coast that also contain anadromous fish populations. The model developed for Olema can be used, at least as a

  15. An analytical mechanical model to describe the response of NiTi rotary endodontic files in a curved root canal

    International Nuclear Information System (INIS)

    Leroy, Agnès Marie Françoise; Bahia, Maria Guiomar de Azevedo; Ehrlacher, Alain; Buono, Vicente Tadeu Lopes

    2012-01-01

    Aim: To build a mathematical model describing the mechanical behavior of NiTi rotary files while they are rotating in a root canal. Methodology: The file was treated as a beam undergoing large transformations. The instrument was assumed to be rotating steadily in the root canal, and the geometry of the canal was considered a known parameter of the problem. The formalism of large-transformation mechanics then allowed the calculation of the Green–Lagrange strain field in the file. The nonlinear mechanical behavior of NiTi was modeled as a continuous piecewise-linear function, assuming that the material did not reach plastic deformation. Criteria locating the changes of behavior of NiTi were established, and the stress field in the file and the external loads applied to it were calculated. The unknown torsion variable was deduced from the equilibrium equation system using a Coulomb contact law, which solved the problem over a cycle of rotation. Results: To verify that the model describes reality well, three-point bending experiments were performed on superelastic NiTi wires and their results compared with the theoretical ones. The model closely matched the empirical results over the range of bending angles of interest. Conclusions: Knowing the geometry of the root canal, one is now able to write the equations of the strain and stress fields in the endodontic instrument and to quantify the impact of each macroscopic parameter of the problem on its response. This should be useful for predicting failure of the files under rotating bending fatigue and for optimizing their geometry. - Highlights: ► A mechanical model of the behavior of a NiTi endodontic instrument was developed. ► The model was validated with results of three-point bending tests on NiTi wires. ► The model is appropriate for the optimization of instruments' geometry.

  16. Evaluation of a cross contamination model describing transfer of Salmonella spp. and Listeria monocytogenes during grinding of pork and beef.

    Science.gov (United States)

    Møller, C O A; Sant'Ana, A S; Hansen, S K H; Nauta, M J; Silva, L P; Alvarenga, V O; Maffei, D; Silva, F F P; Lopes, J T; Franco, B D G M; Aabo, S; Hansen, T B

    2016-06-02

    In a previous study, a model was developed to describe the transfer and survival of Salmonella during grinding of pork (Møller, C.O.A., Nauta, M.J., Christensen, B.B., Dalgaard, P., Hansen, T.B., 2012. Modelling transfer of Salmonella typhimurium DT104 during simulation of grinding of pork. Journal of Applied Microbiology 112 (1), 90-98). The robustness of this model is now evaluated by studying its performance in predicting the transfer and survival of Salmonella spp. and Listeria monocytogenes during grinding of different types of meat (pork and beef), using two different grinders and different sizes and numbers of pieces of meat to be ground. A total of 19 grinding trials were collected. Acceptable Simulation Zone (ASZ), visual inspection of the data, Quantitative Microbiological Risk Assessment (QMRA), and Total Transfer Potential (TTP) were used as approaches to evaluate model performance and to assess the quality of the cross contamination model predictions. Using the ASZ approach and requiring that 70% of the observed counts be inside a defined acceptable zone of ±0.5 log10 CFU per portion, it was found that the cross contamination parameters suggested by Møller et al. (2012) were not able to describe all 19 trials. However, for each of the collected grinding trials, the transfer event was well described when fitted to the model structure proposed by Møller et al. (2012). Parameter estimates obtained by fitting observed trials performed under different conditions, such as size and number of pieces of meat to be ground, may not be applicable to cross contamination in dissimilar processing. Nevertheless, the risk estimates, as well as the TTP, revealed that the risk of disease may be reduced when the grinding of meat is performed in a grinder made of stainless steel (for all surfaces in contact with the meat), using a well-sharpened knife, and holding at room temperatures lower than 4°C.
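
    The ASZ criterion used above reduces to a simple computation: the fraction of observed counts lying within ±0.5 log10 CFU of the model prediction, with 70% as the acceptability threshold. A minimal sketch with invented numbers:

      import numpy as np

      def asz_fraction(observed_log, predicted_log, half_width=0.5):
          """Fraction of observations within +/- half_width of predictions."""
          diff = np.abs(np.asarray(observed_log) - np.asarray(predicted_log))
          return float((diff <= half_width).mean())

      obs = [2.1, 2.6, 3.0, 3.8, 4.1]    # hypothetical log10 CFU per portion
      pred = [2.3, 2.4, 3.4, 3.6, 4.9]
      frac = asz_fraction(obs, pred)
      print(f"{frac:.0%} inside ASZ ->",
            "acceptable" if frac >= 0.7 else "rejected")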

  17. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever-increasing impact of new media and the underlying complexity of computed function, in the broad sense of ICT, that has spread vertiginously since Mark Weiser coined the term 'pervasive', e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. ... the mid-20th century of a paradoxical distinction/complicity between the technical organisation of computed function and the human Being, in the sense of creative action upon such function. This paradoxical distinction/complicity promotes a chiastic (Merleau-Ponty) relationship of extension of one ... Whereas Weiser's original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown's (1997) terms, 'invisible ...

  18. Structure, function, and behaviour of computational models in systems biology.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

    Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions their full meaning is not yet formally specified and only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  19. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to modeling tree crown development: experimental (i.e. regression-based), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The assumption common to all of them is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs
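
    The fractal measure invoked here is conventionally the box-counting dimension, quoted for reference (a standard definition, not one specific to this record):

      D = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}

    where N(ε) is the number of boxes of side ε needed to cover the crown; a value of D between 2 and 3 makes precise the statement that a crown combines the properties of two- and three-dimensional bodies.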

  20. Climate Ocean Modeling on Parallel Computers

    Science.gov (United States)

    Wang, P.; Cheng, B. N.; Chao, Y.

    1998-01-01

    Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.

  1. Computational Intelligence. Mortality Models for the Actuary

    NARCIS (Netherlands)

    Willemse, W.J.

    2001-01-01

    This thesis applies computational intelligence to the field of actuarial (insurance) science. In particular, this thesis deals with life insurance where mortality modelling is important. Actuaries use ancient models (mortality laws) from the nineteenth century, for example Gompertz' and Makeham's

  2. Evaluation of a cross contamination model describing transfer of salmonella spp. and listeria monocytogenes during grinding of pork and beef

    DEFF Research Database (Denmark)

    Møller, Cleide Oliveira de Almeida; Hansen, Tina Beck; Aabo, Søren

    2015-01-01

    Introduction: The cross contamination model (Møller et al. 2012) was evaluated to investigate its capability of describing transfer of Salmonella spp. and Listeria monocytogenes during grinding of pork and beef of varying sizes (50 – 324 g) and numbers of pieces to be ground (10 – 100), in two … processing. QMRA risk estimates and TTP both revealed that risk attribution from grinding was mainly influenced by sharpness of grinder knife > specific grinder > grinding temperature, whereas the specific pathogen was of minor importance.

  3. Biological effectiveness and application of heavy ions in radiation therapy described by a physical and biological model

    International Nuclear Information System (INIS)

    Olsen, K.J.; Hansen, J.W.

    1982-12-01

    A description is given of the physical basis for applying track structure theory to determine the effectiveness of heavy-ion irradiation of single- and multi-hit target systems. It is shown that, when the theory is applied to biological systems, the effectiveness of heavy-ion irradiation is inadequately described by an RBE factor alone; the complete formulation of the probability of survival must be used instead, as survival depends on both radiation quality and dose. The theoretical model of track structure can be used in dose-effect calculations for neutron, high-LET and low-LET radiation applied simultaneously in therapy. (author)

  4. Applying a physical continuum model to describe the broadband X-ray spectra of accreting pulsars at high luminosity

    Science.gov (United States)

    Pottschmidt, Katja; Hemphill, Paul B.; Wolff, Michael T.; Cheatham, Diana M.; Iwakiri, Wataru; Gottlieb, Amy M.; Falkner, Sebastian; Ballhausen, Ralf; Fuerst, Felix; Kuehnel, Matthias; Ferrigno, Carlo; Becker, Peter A.; Wood, Kent S.; Wilms, Joern

    2018-01-01

    A new window for better understanding the accretion onto strongly magnetized neutron stars in X-ray binaries is opening. In these systems the accreted material follows the magnetic field lines as it approaches the neutron star, forming accretion columns above the magnetic poles. The plasma falls toward the neutron star surface at near-relativistic speeds, losing energy by emitting X-rays. The X-ray spectral continua are commonly described using phenomenological models, i.e., power laws with different types of curved cut-offs at higher energies. Here we consider high luminosity pulsars. In these systems the mass transfer rate is high enough that the accreting plasma is thought to be decelerated in a radiation-dominated radiative shock in the accretion columns. While the theory of the emission from such shocks had already been developed by 2007, a model for direct comparison with X-ray continuum spectra in xspec or isis has only recently become available. Characteristic parameters of this model are the accretion column radius and the plasma temperature, among others. Here we analyze the broadband X-ray spectra of the accreting pulsars Centaurus X-3 and 4U 1626-67 obtained with NuSTAR. We present results from traditional empirical modeling as well as successfully apply the radiation-dominated radiative shock model. We also take the opportunity to compare to similar recent analyses of both sources using these and other observations.

  5. Applications of computer modeling to fusion research

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.

  6. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  7. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  8. A Novel Dynamic Model Describing the Spread of the MERS-CoV and the Expression of Dipeptidyl Peptidase 4

    Directory of Open Access Journals (Sweden)

    Siming Tang

    2017-01-01

    The Middle East respiratory syndrome (MERS) coronavirus, a newly identified pathogen, causes severe pneumonia in humans. MERS is caused by a coronavirus known as MERS-CoV, which attacks the respiratory system. The recently defined receptor for MERS-CoV, dipeptidyl peptidase 4 (DPP4), is generally expressed in endothelial and epithelial cells and has been shown to be present on cultured human nonciliated bronchiolar epithelium cells. In this paper, a novel class of four-dimensional dynamic models describing the infection of MERS-CoV is given, and the global stability of the equilibria of the model is discussed. Our results show that the spread of MERS-CoV can also be controlled by decreasing the expression rate of DPP4.
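
    For orientation only, a four-compartment within-host model of this general kind (uninfected cells T, infected cells I, free virus V, receptor availability D) might couple the infection rate to DPP4 expression; the system below is an illustrative construction, not the one from the paper:

      \dot{T} = \lambda - \beta D T V - d_T T
      \dot{I} = \beta D T V - d_I I
      \dot{V} = p I - c V
      \dot{D} = \rho - \beta D T V - d_D D

    in which lowering the DPP4 expression rate ρ weakens the effective infection term βDTV, consistent with the control strategy stated in the abstract.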

  9. A mathematical model describing the glycemic response of diabetic patients to meal and i.v. infusion of insulin.

    Science.gov (United States)

    Fabietti, P G; Calabrese, G; Iorio, M; Bistoni, S; Brunetti, P; Sarti, E; Benedetti, M M

    2001-10-01

    Nine type 1 diabetic patients were studied for 24 hours. During this period they were given three calibrated meals. Glycemia was feedback-controlled by means of an artificial pancreas. The blood glucose concentration and the insulin infusion rate were measured every minute. The experimental data referring to each of the three meals were used to estimate the parameters of a mathematical model suitable for describing the glycemic response of diabetic patients to meals and to the i.v. infusion of exogenous insulin. The estimates showed a marked dispersion of the parameters, both interindividual and intraindividual. Nevertheless, the models thus obtained seem to be usable for the synthesis of a feedback controller, especially in view of creating a portable artificial pancreas, which now seems possible owing to the realization (so far experimental) of sufficiently reliable glucose concentration sensors.
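
    The best-known compartmental description of this kind is Bergman's minimal model of glucose-insulin kinetics, quoted here purely as a reference point, since the record does not state which equations the authors adopted:

      \dot{G}(t) = -p_1 [G(t) - G_b] - X(t) G(t) + R_a(t)
      \dot{X}(t) = -p_2 X(t) + p_3 [I(t) - I_b]

    where G is plasma glucose, X the remote insulin action, I plasma insulin, G_b and I_b basal levels, R_a(t) the rate of glucose appearance from a meal, and p_1, p_2, p_3 the patient-specific parameters of the kind estimated per meal in the study.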

  10. An analytical mechanical model to describe the response of NiTi rotary endodontic files in a curved root canal.

    Science.gov (United States)

    Leroy, Agnès Marie Françoise; Bahia, Maria Guiomar de Azevedo; Ehrlacher, Alain; Buono, Vicente Tadeu Lopes

    2012-08-01

    To build a mathematical model describing the mechanical behavior of NiTi rotary files while they are rotating in a root canal. The file was treated as a beam undergoing large transformations. The instrument was assumed to be rotating steadily in the root canal, and the geometry of the canal was considered a known parameter of the problem. The formulae of large-transformation mechanics then allowed the calculation of the Green-Lagrange strain field in the file. The non-linear mechanical behavior of NiTi was modeled as a continuous piecewise linear function, assuming that the material did not reach plastic deformation. Criteria locating the changes of behavior of NiTi were established, and the tension field in the file and the external efforts applied to it were calculated. The unknown torsion variable was deduced from the equilibrium equation system using a Coulomb contact law, which solved the problem over a cycle of rotation. To verify that the model described reality well, three-point bending experiments were performed on superelastic NiTi wires, whose results were compared to the theoretical ones. The model closely tracked the empirical results in the range of bending angles of interest. Knowing the geometry of the root canal, one is now able to write the equations of the strain and stress fields in the endodontic instrument, and to quantify the impact of each macroscopic parameter of the problem on its response. This should be useful for predicting failure of the files under rotating bending fatigue, and for optimizing the geometry of the files. Copyright © 2012 Elsevier B.V. All rights reserved.
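
    A continuous piecewise-linear superelastic law of the kind described can be stated very compactly. The sketch below (Python; the moduli and plateau bounds are illustrative values, not the ones fitted in the paper) returns the loading stress as a function of strain:

      import numpy as np

      def niti_stress(strain, e_aust=60e3, e_plateau=2e3, e_mart=25e3,
                      eps_sim=0.012, eps_fin=0.065):
          """Continuous piecewise-linear loading curve for superelastic NiTi (MPa).

          Three branches: austenite elasticity, stress-induced-martensite
          plateau, martensite elasticity; continuity holds at both break points.
          """
          s1 = e_aust * eps_sim                      # stress at plateau onset
          s2 = s1 + e_plateau * (eps_fin - eps_sim)  # stress at plateau end
          strain = np.asarray(strain, dtype=float)
          return np.where(strain <= eps_sim, e_aust * strain,
                 np.where(strain <= eps_fin, s1 + e_plateau * (strain - eps_sim),
                          s2 + e_mart * (strain - eps_fin)))

      print(niti_stress([0.005, 0.03, 0.08]))  # one value per branch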

  11. An analytical mechanical model to describe the response of NiTi rotary endodontic files in a curved root canal

    Energy Technology Data Exchange (ETDEWEB)

    Leroy, Agnes Marie Francoise [Department of Metallurgical and Materials Engineering, School of Engineering, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil); Department of Mechanical and Materials Engineering, Ecole des Ponts Paristech (ENPC), Champs-sur-Marne (France); Bahia, Maria Guiomar de Azevedo [Department of Restorative Dentistry, Faculty of Dentistry, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil); Ehrlacher, Alain [Department of Mechanical and Materials Engineering, Ecole des Ponts Paristech (ENPC), Champs-sur-Marne (France); Buono, Vicente Tadeu Lopes, E-mail: vbuono@demet.ufmg.br [Department of Metallurgical and Materials Engineering, School of Engineering, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)

    2012-08-01

    Aim: To build a mathematical model describing the mechanical behavior of NiTi rotary files while they are rotating in a root canal. Methodology: The file was treated as a beam undergoing large transformations. The instrument was assumed to be rotating steadily in the root canal, and the geometry of the canal was considered a known parameter of the problem. The formulae of large-transformation mechanics then allowed the calculation of the Green-Lagrange strain field in the file. The non-linear mechanical behavior of NiTi was modeled as a continuous piecewise linear function, assuming that the material did not reach plastic deformation. Criteria locating the changes of behavior of NiTi were established, and the tension field in the file and the external efforts applied to it were calculated. The unknown torsion variable was deduced from the equilibrium equation system using a Coulomb contact law, which solved the problem over a cycle of rotation. Results: To verify that the model described reality well, three-point bending experiments were performed on superelastic NiTi wires, whose results were compared to the theoretical ones. The model closely tracked the empirical results in the range of bending angles of interest. Conclusions: Knowing the geometry of the root canal, one is now able to write the equations of the strain and stress fields in the endodontic instrument, and to quantify the impact of each macroscopic parameter of the problem on its response. This should be useful for predicting failure of the files under rotating bending fatigue, and for optimizing the geometry of the files. Highlights: • A mechanical model of the behavior of a NiTi endodontic instrument was developed. • The model was validated with results of three-point bending tests on NiTi wires. • The model is appropriate for the optimization of instruments' geometry.

  12. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    where x increases from zero to N, the saturation value. … Box 1: Matrix Methods … such as Laplace transforms and non-linear differential equations with … the atomic bomb project in the US in the early … his work on game theory and computers.

  13. A two-phase model to describe the dissolution of ZrO2 by molten Zr

    International Nuclear Information System (INIS)

    Belloni, J.; Fichot, F.; Goyeau, B.; Gobin, D.; Quintard, M.

    2007-01-01

    In case of a hypothetical severe accident in a nuclear Pressurized Water Reactor (PWR), the fuel elements in the core may reach very high temperatures (more than 2000 K). UO2 (uranium dioxide) pellets are enclosed by a cladding mainly composed of Zircaloy (Zr). If the temperature became higher than 2100 K (melting temperature of Zr), the UO2 pellets would be in contact with molten Zr, resulting in the dissolution and liquefaction of UO2 at a lower temperature than its melting point (3100 K). Several experimental and numerical investigations have led to a better understanding of this phenomenon, but a comprehensive and consistent model is still missing. The goal of this paper is to propose a two-phase macroscopic model describing the dissolution of a solid alloy by a liquid. The model is limited to binary alloys and it is applied to the particular case of the dissolution of ZrO2 by liquid Zr, for which experimental data are available (Hofmann et al., 1999). The model was established by using a volume averaging method. Numerical simulations are compared to experimental results and show a good agreement. (authors)

  14. QSAR models for describing the toxicological effects of ILs against Staphylococcus aureus based on norm indexes.

    Science.gov (United States)

    He, Wensi; Yan, Fangyou; Jia, Qingzhu; Xia, Shuqian; Wang, Qiang

    2018-03-01

    The hazardous potential of ionic liquids (ILs) is becoming an issue of great concern due to their important role in many industrial fields as green agents. A mathematical model for the toxicological effects of ILs is useful for the risk assessment and design of environmentally benign ILs. The objective of this work is to develop QSAR models to describe the minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) of ILs against Staphylococcus aureus (S. aureus). A total of 169 and 101 ILs with MICs and MBCs, respectively, are used to obtain multiple linear regression models based on matrix norm indexes. The norm indexes used in this work were proposed by our research group and are applied here for the first time to estimate the antibacterial toxicity of these ILs against S. aureus. These two models precisely and reliably calculated the IL toxicities, with a squared correlation coefficient (R^2) of 0.919 and a standard error of estimate (SE) of 0.341 (in log units of mM) for pMIC, and an R^2 of 0.913 and SE of 0.282 for pMBC. Copyright © 2017 Elsevier Ltd. All rights reserved.
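
    The modeling machinery itself is ordinary multiple linear regression on per-IL descriptors; a minimal sketch (Python, with random placeholder descriptors standing in for the paper's norm indexes) shows how statistics of the kind quoted above are obtained:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(169, 4))            # placeholder norm-index descriptors
      w_true = np.array([0.8, -0.5, 0.3, 0.1])
      y = X @ w_true + rng.normal(scale=0.3, size=169)   # synthetic pMIC values

      # Ordinary least squares with an intercept column.
      A = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      y_hat = A @ coef

      r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      se = np.sqrt(np.sum((y - y_hat) ** 2) / (len(y) - A.shape[1]))
      print(f"R^2 = {r2:.3f}, SE = {se:.3f}")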

  15. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  16. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  17. Social media modeling and computing

    CERN Document Server

    Hoi, Steven CH; Boll, Susanne; Xu, Dong; Jin, Rong; King, Irwin

    2011-01-01

    Presents contributions from an international selection of preeminent experts in the field. Discusses topics in social-media content analysis, and examines social-media system design and analysis. Describes emerging applications of social media.

  18. Three new species of Carychium O.F. Müller, 1773 from the Southeastern USA, Belize and Panama are described using computer tomography (CT) (Eupulmonata, Ellobioidea, Carychiidae).

    Science.gov (United States)

    Jochum, Adrienne; Weigand, Alexander M; Bochud, Estee; Inäbnit, Thomas; Dörge, Dorian D; Ruthensteiner, Bernhard; Favre, Adrien; Martels, Gunhild; Kampschulte, Marian

    2017-01-01

    Three new species of the genus Carychium O.F. Müller, 1773, Carychium hardiei Jochum & Weigand, sp. n., Carychium belizeense Jochum & Weigand, sp. n. and Carychium zarzaae Jochum & Weigand, sp. n. are described from the Southeastern United States, Belize and Panama, respectively. In two consecutive molecular phylogenetic studies of worldwide members of Carychiidae, the North and Central American morphospecies Carychium mexicanum Pilsbry, 1891 and Carychium costaricanum E. von Martens, 1898 were found to consist of several evolutionary lineages. Although the related lineages were found to be molecularly distinct from the two nominal species, the consequential morphological and taxonomic assessment of these lineages is still lacking. In the present paper, the shells of these uncovered Carychium lineages are assessed by comparing them with those of related species, using computer tomography for the first time for this genus. The interior diagnostic characters are emphasized, such as columellar configuration in conjunction with the columellar lamella and their relationship in context of the entire shell. These taxa are morphologically described and formally assigned their own names.

  19. Three new species of Carychium O.F. Müller, 1773 from the Southeastern USA, Belize and Panama are described using computer tomography (CT) (Eupulmonata, Ellobioidea, Carychiidae)

    Directory of Open Access Journals (Sweden)

    Adrienne Jochum

    2017-05-01

    Three new species of the genus Carychium O.F. Müller, 1773, Carychium hardiei Jochum & Weigand, sp. n., Carychium belizeense Jochum & Weigand, sp. n. and Carychium zarzaae Jochum & Weigand, sp. n. are described from the Southeastern United States, Belize and Panama, respectively. In two consecutive molecular phylogenetic studies of worldwide members of Carychiidae, the North and Central American morphospecies Carychium mexicanum Pilsbry, 1891 and Carychium costaricanum E. von Martens, 1898 were found to consist of several evolutionary lineages. Although the related lineages were found to be molecularly distinct from the two nominal species, the consequential morphological and taxonomic assessment of these lineages is still lacking. In the present paper, the shells of these uncovered Carychium lineages are assessed by comparing them with those of related species, using computer tomography for the first time for this genus. The interior diagnostic characters are emphasized, such as columellar configuration in conjunction with the columellar lamella and their relationship in context of the entire shell. These taxa are morphologically described and formally assigned their own names.

  20. Numerical Modeling Describing the Effects of Heterogeneous Distributions of Asperities on the Quasi-static Evolution of Frictional Slip

    Science.gov (United States)

    Selvadurai, P. A.; Parker, J. M.; Glaser, S. D.

    2017-12-01

    A better understanding of how slip accumulates along faults, and of its relation to the breakdown of shear stress, is beneficial to many engineering disciplines, such as hydraulic fracturing and understanding induced seismicity, among others. Asperities forming along a preexisting fault resist the relative motion of the two sides of the interface and occur due to the interaction of the surface topographies. Here, we employ a finite element model to simulate circular partial-slip asperities along a nominally flat frictional interface. The shear behavior of our partial-slip asperity model closely matched the theory described by Cattaneo. The asperity model was employed to simulate a small section of an experimental fault formed between two bodies of polymethyl methacrylate, which consisted of multiple asperities whose locations and sizes were directly measured using a pressure-sensitive film. The quasi-static shear behavior of the interface was modeled for cyclical loading conditions, and the frictional dissipation (hysteresis) was normal-stress dependent. We further our understanding by synthetically modeling lognormal size distributions of asperities randomly distributed in space. The synthetic distributions conserved the real contact area and aspects of the size distributions of the experimental case, allowing us to compare the constitutive behaviors based solely on spacing effects. The traction-slip behavior of the experimental interface appears to be considerably affected by spatial clustering of asperities that was not present in the randomly spaced, synthetic asperity distributions. Estimates of bulk interfacial shear stiffness were determined from the constitutive traction-slip behavior and were comparable to theoretical estimates for multi-contact interfaces with non-interacting asperities.

  1. Indirect estimation of the Convective Lognormal Transfer function model parameters for describing solute transport in unsaturated and undisturbed soil.

    Science.gov (United States)

    Mohammadi, Mohammad Hossein; Vanclooster, Marnik

    2012-05-01

    Solute transport in partially saturated soils is largely affected by the fluid velocity distribution and the pore size distribution within the solute transport domain. Hence, it is possible to describe the solute transport process in terms of the pore size distribution of the soil, and indirectly in terms of the soil hydraulic properties. In this paper, we present a conceptual approach that allows predicting the parameters of the Convective Lognormal Transfer (CLT) model from knowledge of soil moisture and the Soil Moisture Characteristic (SMC), parameterized by means of the closed-form model of Kosugi (1996). It is assumed that in partially saturated conditions the air-filled pore volume acts as an inert solid phase, allowing the use of the pragmatic approach of Arya et al. (1999) to estimate solute travel time statistics from the saturation degree and the SMC parameters. The approach is evaluated using a set of partially saturated transport experiments presented by Mohammadi and Vanclooster (2011). Experimental results showed that the mean solute travel time, μ(t), increases proportionally with depth (travel distance) and decreases with flow rate. The variance of solute travel time, σ²(t), first decreases with flow rate up to 0.4-0.6 Ks and subsequently increases. For all tested BTCs, solute transport predicted with μ(t) estimated from the conceptual model performed much better than predictions with μ(t) and σ²(t) estimated from calibration of solute transport at shallow soil depths. The use of μ(t) estimated from the conceptual model therefore increases the robustness of the CLT model in predicting solute transport in heterogeneous soils at larger depths. In view of the fact that reasonable indirect estimates of the SMC can be made from basic soil properties using pedotransfer functions, the presented approach may be useful for predicting solute transport at field or watershed scales. Copyright © 2012 Elsevier B.V. All rights reserved.
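
    For context, the CLT model treats the travel time t of solute to depth z as lognormally distributed; the standard form of its transfer function (a textbook expression, not one peculiar to this study) is:

      f(t; z) = \frac{1}{t \sigma \sqrt{2\pi}} \exp\left( -\frac{[\ln t - \mu(z)]^2}{2\sigma^2} \right)

    so the approach described above amounts to predicting the travel-time statistics μ(t) and σ²(t) from the saturation degree and the Kosugi SMC parameters instead of calibrating them against breakthrough curves.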

  2. Models of parallel computation :a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art of parallel computational model research is reviewed. We introduce various models that were developed during the past decades. According to their target architecture features, especially memory organization, we classify these parallel computational models into three generations, and the models and their characteristics are discussed on the basis of this three-generation classification. We believe that with the ever increasing speed gap between the CPU and memory systems, incorporating non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated, so describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  3. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  4. A Categorisation of Cloud Computing Business Models

    OpenAIRE

    Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David

    2010-01-01

    This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...

  5. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  6. Creation of 'Ukrytie' objects computer model

    International Nuclear Information System (INIS)

    Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.

    1999-01-01

    A partial computer model of the 'Ukrytie' object was created with the use of geoinformation technologies. The computer model makes it possible to provide information support for the works related to the stabilization of the 'Ukrytie' object and its conversion into an ecologically safe system, for analyzing, forecasting and controlling the processes occurring in the 'Ukrytie' object. Elements and structures of the 'Ukrytie' object were designed and input into the model.

  7. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  8. Modelling of turbulent hydrocarbon combustion. Test of different reactor concepts for describing the interactions between turbulence and chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, C; Kremer, H [Ruhr-Universitaet Bochum, Lehrstuhl fuer Energieanlagentechnik, Bochum (Germany); Kilpinen, P; Hupa, M [Aabo Akademi, Turku (Finland). Combustion Chemistry Research Group

    1998-12-31

    The detailed modelling of turbulent reactive flows with CFD codes is a major challenge in combustion science. One method of combining highly developed turbulence models and detailed chemistry in CFD codes is the application of reactor-based turbulence-chemistry interaction models. In this work the influence of different reactor concepts on methane and NOx chemistry in turbulent reactive flows was investigated. Besides the classical reactor approaches, a plug flow reactor (PFR) and a perfectly stirred reactor (PSR), the Eddy Dissipation Combustion Model (EDX) and the Eddy Dissipation Concept (EDC) were included. Based on a detailed reaction scheme and a simplified 2-step mechanism, studies were performed in a simplified computational grid consisting of 5 cells. The investigations cover a temperature range from 1273 K to 1673 K and consider fuel-rich and fuel-lean gas mixtures as well as turbulent and highly turbulent flow conditions. All test cases investigated in this study showed a strong influence of the reactor residence time on the species conversion processes. Due to this characteristic, strong deviations were found for the species trends resulting from the different reactor approaches. However, this influence was concentrated in the 'near burner region', and after 4-5 cells hardly any deviation or residence time dependence could be found. The importance of the residence time dependence increased when the species conversion was accelerated, as is the case for overstoichiometric combustion conditions and increased temperatures. The study focused furthermore on the fine structure in the EDC. Unlike in the classical approach, this part of the cell was modelled as a PFR instead of a PSR. For high temperature conditions there was hardly any difference between the two reactor types. However, decreasing the temperature led to obvious deviations. Finally, the effect of the selective species transport between the cells on the conversion process was investigated.

  9. Modelling of turbulent hydrocarbon combustion. Test of different reactor concepts for describing the interactions between turbulence and chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, C.; Kremer, H. [Ruhr-Universitaet Bochum, Lehrstuhl fuer Energieanlagentechnik, Bochum (Germany); Kilpinen, P.; Hupa, M. [Aabo Akademi, Turku (Finland). Combustion Chemistry Research Group

    1997-12-31

    The detailed modelling of turbulent reactive flows with CFD codes is a major challenge in combustion science. One method of combining highly developed turbulence models and detailed chemistry in CFD codes is the application of reactor-based turbulence-chemistry interaction models. In this work the influence of different reactor concepts on methane and NOx chemistry in turbulent reactive flows was investigated. Besides the classical reactor approaches, a plug flow reactor (PFR) and a perfectly stirred reactor (PSR), the Eddy Dissipation Combustion Model (EDX) and the Eddy Dissipation Concept (EDC) were included. Based on a detailed reaction scheme and a simplified 2-step mechanism, studies were performed in a simplified computational grid consisting of 5 cells. The investigations cover a temperature range from 1273 K to 1673 K and consider fuel-rich and fuel-lean gas mixtures as well as turbulent and highly turbulent flow conditions. All test cases investigated in this study showed a strong influence of the reactor residence time on the species conversion processes. Due to this characteristic, strong deviations were found for the species trends resulting from the different reactor approaches. However, this influence was concentrated in the 'near burner region', and after 4-5 cells hardly any deviation or residence time dependence could be found. The importance of the residence time dependence increased when the species conversion was accelerated, as is the case for overstoichiometric combustion conditions and increased temperatures. The study focused furthermore on the fine structure in the EDC. Unlike in the classical approach, this part of the cell was modelled as a PFR instead of a PSR. For high temperature conditions there was hardly any difference between the two reactor types. However, decreasing the temperature led to obvious deviations. Finally, the effect of the selective species transport between the cells on the conversion process was investigated.
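
    The PSR/PFR distinction tested in these two records can be made concrete with a toy first-order reaction A → products at identical mean residence time; a brief sketch (Python, with an illustrative rate constant unrelated to the combustion chemistry above) shows how the two reactor idealizations diverge:

      import numpy as np

      k = 2.0     # illustrative first-order rate constant, 1/s
      tau = 1.0   # mean residence time, s

      # Plug flow reactor (PFR): batch-like exponential decay along the reactor.
      x_pfr = 1.0 - np.exp(-k * tau)

      # Perfectly stirred reactor (PSR/CSTR): algebraic balance at the outlet state.
      x_psr = k * tau / (1.0 + k * tau)

      print(f"conversion  PFR: {x_pfr:.3f}   PSR: {x_psr:.3f}")
      # The PFR converts more for positive-order kinetics, which is one reason
      # the choice of reactor concept inside each CFD cell matters.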

  10. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. In addition to their nanoscopic scale, a further advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher-level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic-scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  11. Computational modeling of intraocular gas dynamics

    International Nuclear Information System (INIS)

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-01-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted the intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, pure or diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: more bubble expansion and a smaller retinal tear gave a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle when using pure SF6 is 1.4 times larger than when using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency. (paper)

  12. Gaze patterns reveal how texts are remembered: A mental model of what was described is favored over the text itself

    DEFF Research Database (Denmark)

    Traub, Franziska; Johansson, Roger; Holmqvist, Kenneth

    28 participants read and recalled three texts: (1) a scene description congruent with the spatial layout of the text; (2) a scene description incongruent with the spatial layout of the text; and (3) a control text without any spatial scene content. Recollection was performed orally while gazing at a blank screen. Results demonstrate that participants' gaze patterns during recall more closely reflect the spatial layout of the described scene than the physical locations of the text. We conclude that participants formed a mental model that represents the content of what was described, i.e., visuospatial information of the scene, which then guided the retrieval process. During their retellings, they moved their eyes across the blank screen as if they saw the scene in front of them. Whereas previous studies on the involvement of eye movements in mental imagery tasks…

  13. Marquardt's Phi mask: pitfalls of relying on fashion models and the golden ratio to describe a beautiful face.

    Science.gov (United States)

    Holland, E

    2008-03-01

    Stephen Marquardt has derived a mask from the golden ratio that he claims represents the "ideal" facial archetype. Many have found his mask convincing, including cosmetic surgeons. However, Marquardt's mask is associated with numerous problems. The method used to examine goodness of fit with the proportions in the mask is faulty. The mask is ill-suited for non-European populations, especially sub-Saharan Africans and East Asians. The mask also appears to approximate the face shape of masculinized European women. Given that the general public strongly and overwhelmingly prefers above-average facial femininity in women, white women seeking aesthetic facial surgery would be ill-advised to aim toward a better fit with Marquardt's mask. This article aims to show the proper way of assessing goodness of fit with Marquardt's mask, to address the shape of the mask as it pertains to masculinity-femininity, and to discuss the broader issue of an objective assessment of facial attractiveness. Generalized Procrustes analysis is used to show how goodness of fit with Marquardt's mask can be assessed. Thin-plate spline analysis is used to illustrate visually how sample faces, including northwestern European averages, differ from Marquardt's mask. Marquardt's mask best describes the facial proportions of masculinized white women as seen in fashion models. Marquardt's mask does not appear to describe the "ideal" face shape even for white women, because its proportions are inconsistent with the optimal preferences of most people, especially with regard to femininity.

  14. Can model Hamiltonians describe the electron–electron interaction in π-conjugated systems?: PAH and graphene

    International Nuclear Information System (INIS)

    Chiappe, G; Louis, E; San-Fabián, E; Vergés, J A

    2015-01-01

    Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean field theories work poorly. This review will concentrate on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation, which, although not the formalism chosen in the original proposal of the former, is much clearer. We will not attempt to be comprehensive; rather, our objective is to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, they cannot in most cases be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems, besides density functional theory (DFT) approaches. We proceed by discussing to what extent one may exactly solve model Hamiltonians, and the Lanczos approach. We shall describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, that has the novelty of using as the seed of the Lanczos process a mean field (Hartree–Fock) determinant (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when including long-range interactions, how crucial is it to include in the Hamiltonian the electronic charge that compensates the ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)? …
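
    In second-quantised form the two Hamiltonians compared in this record are (standard textbook expressions):

      H_{Hubbard} = -t \sum_{\langle i,j \rangle, \sigma} ( c^{\dagger}_{i\sigma} c_{j\sigma} + h.c. ) + U \sum_i n_{i\uparrow} n_{i\downarrow}
      H_{PPP} = H_{Hubbard} + \frac{1}{2} \sum_{i \neq j} V_{ij} (n_i - 1)(n_j - 1)

    so question (ii) above asks when the long-range tail V_ij can be folded into an effective on-site interaction U.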

  15. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows us to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the above-mentioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  16. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    Directory of Open Access Journals (Sweden)

    Koen Degeling

    2017-12-01

    Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods: Two approaches, (1) using non-parametric bootstrapping and (2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Results: Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Conclusions: Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.

  17. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    Science.gov (United States)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
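
    A compact way to see the two approaches side by side is to refit a parametric time-to-event distribution on bootstrap resamples versus drawing its parameters from a multivariate Normal; the sketch below (Python; a lognormal fit on synthetic data, not the study's model) conveys the mechanics:

      import numpy as np

      rng = np.random.default_rng(1)
      times = rng.lognormal(mean=1.0, sigma=0.5, size=25)   # synthetic patient times

      def fit_lognormal(sample):
          """MLE of (mu, sigma) for a lognormal on positive survival times."""
          logs = np.log(sample)
          return logs.mean(), logs.std(ddof=0)

      # Approach 1: non-parametric bootstrap of the patient-level data.
      boot = np.array([fit_lognormal(rng.choice(times, size=len(times), replace=True))
                       for _ in range(1000)])

      # Approach 2: multivariate Normal around the point estimate, here using
      # the bootstrap covariance of the parameter estimates. For small n this
      # can yield infeasible draws (e.g. negative sigma), echoing the
      # sensitivity to extreme values reported above.
      mu_hat = fit_lognormal(times)
      mvn = rng.multivariate_normal(mu_hat, np.cov(boot.T), size=1000)

      for name, draws in [("bootstrap", boot), ("MVN", mvn)]:
          mean_t = np.exp(draws[:, 0] + draws[:, 1] ** 2 / 2)   # lognormal mean
          print(name, "mean time-to-event 95% CI:",
                np.round(np.percentile(mean_t, [2.5, 97.5]), 2))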

  18. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. When targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.

  19. Application of inverse modeling technique to describe hydrogeochemical processes responsible to spatial distribution of groundwater quality along flowpath

    Directory of Open Access Journals (Sweden)

    Tjahyo NugrohoAdji

    2013-07-01

    The results show that, firstly, the aquifer within the research area can be grouped into several aquifer systems (i.e. denudational hill, colluvial plain, alluvial plain, and beach ridges) from recharge to discharge, which generally have potential groundwater resources in terms of the depth and fluctuation of the groundwater table. Secondly, flownet analysis gives three flowpaths that are plausible to model in order to describe their hydrogeochemical reactions. Thirdly, the Saturation Indices (SI) analysis shows a positive correlation between mineral occurrence and composition and the value of SI from recharge to discharge. In addition, the mass balance model indicates that dissolution and precipitation of aquifer minerals dominantly change the chemical composition along the flowpath; the rate of mass transfer between two wells varies and depends on the proportions of the aquifer minerals. Lastly, an interesting characteristic of the mass balance reactions is that, in every reaction, the mineral present in the smallest amount (in mmol/litre) is always the first to react completely.

  20. Models describing mackerel (Scomber scombrus early life growth in the North and Northwest of the Iberian Peninsula in 2000

    Directory of Open Access Journals (Sweden)

    Begoña Villamor

    2004-12-01

    Mackerel (Scomber scombrus) in early life stages were captured in 2000 in the north and northwest of the Iberian Peninsula (ICES Divisions VIIIc and IXa North). Daily rings on their otolith sagittae were identified. Otoliths from 377 larvae and post-larvae caught in April and May 2000, ranging in length from 2.3 to 23.7 mm LS (standard length) and in age from 7 to 38 days after hatching, were analysed. Additionally, 68 otoliths from juveniles and pre-recruits caught between July and October 2000, with a length range of 121-202 mm LS and ages between 65 and 186 days after hatching, were analysed. Gompertz and logistic growth models were fitted to the pooled length-at-age data of the larvae-postlarvae and juveniles-pre-recruits. As length at hatch is assumed in the literature to be 3.0 mm, the models were applied in two ways: not forced to pass through L0 = 3.0 mm, and forced to pass through L0 = 3.0 mm. The unforced logistic growth curve appeared to be the most suitable for describing growth during the first year of life of mackerel (L∞ = 191.6 mm; K = 0.070; t0 = 66.7 d).
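
    The two candidate curves are standard; written with the parameterisation implied by the record (L∞ asymptotic length, K growth rate, t0 inflection age):

      L_{Gompertz}(t) = L_\infty \exp[ -e^{-K(t - t_0)} ]
      L_{Logistic}(t) = \frac{L_\infty}{1 + e^{-K(t - t_0)}}

    with the 'forced' variants constraining L(0) = 3.0 mm, the assumed length at hatch.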

  1. Computational challenges in modeling gene regulatory events.

    Science.gov (United States)

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an example-based account of the current computational challenges in molecular biology.

  2. Fuel element performance computer modelling

    International Nuclear Information System (INIS)

    Locke, D.H.

    1978-01-01

    The meeting was attended by 88 participants from 17 countries. Altogether 47 papers were presented. The majority of the presentations contained a description of the equations and solutions used to describe and evaluate some of the physical processes taking place in water reactor fuel pins under irradiation. At the same time, particular attention was paid to the ''benchmarking'' of the codes, wherein solutions arrived at for particular experiments are compared with the results of the experiments

  3. An extended model based on the modified Nernst-Planck equation for describing transdermal iontophoresis of weak electrolytes.

    Science.gov (United States)

    Imanidis, Georgios; Luetolf, Peter

    2006-07-01

    An extended model for iontophoretic enhancement of transdermal drug permeation under constant voltage is described based on the previously modified Nernst-Planck equation, which included the effect of convective solvent flow. This model resulted in an analytical expression for the enhancement factor as a function of applied voltage, convective flow velocity due to electroosmosis, ratio of lipid to aqueous pathway passive permeability, and weighted average net ionic valence of the permeant in the aqueous epidermis domain. The shift of pH in the epidermis compared to bulk caused by the electrical double layer at the lipid-aqueous domain interface was evaluated using the Poisson-Boltzmann equation. This was solved numerically for representative surface charge densities and yielded pH differences between bulk and epidermal aqueous domain between 0.05 and 0.4 pH units. The developed model was used to analyze the experimental enhancement of an amphoteric weak electrolyte measured in vitro using human cadaver epidermis and a voltage of 250 mV at different pH values. Parameter values characterizing the involved factors were determined that yielded the experimental enhancement factors and passive permeability coefficients at all pH values. The model provided a very good agreement between experimental and calculated enhancement and passive permeability. The deduced parameters showed (i) that the pH shift in the aqueous permeation pathway had a notable effect on the ionic valence and the partitioning of the drug in this domain for a high surface charge density and depending on the pK(a) and pI of the drug in relation to the bulk pH; (ii) the magnitude and the direction of convective transport due to electroosmosis typically reflected the density and sign, respectively, of surface charge of the tissue and its effect on enhancement was substantial for bulk pH values differing from the pI of epidermal tissue; (iii) the aqueous pathway predominantly determined passive
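
    For reference, a Nernst-Planck flux extended with a convective (electroosmotic) term is conventionally written as

        J_i = -D_i \frac{\partial c_i}{\partial x} - \frac{z_i F D_i c_i}{R T} \frac{\partial \phi}{\partial x} + v\,c_i

    where D_i is the diffusion coefficient, z_i the ionic valence, φ the electric potential and v the convective solvent velocity; the paper's exact formulation, with its weighted average net valence, may differ in detail.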

  4. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  5. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  6. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large-scale computer simulations are presented...

  7. Computer Aided Multi-Data Fusion Dismount Modeling

    Science.gov (United States)

    2012-03-22

    dependent on a particular environmental condition. They are costly, cumbersome, and involve dedicated software practices and particular knowledge to operate...allow manipulation of 2D matrices, like Microsoft Excel or Libre Office. The second alternative is to modify an already created model (MEM). The model...software. Therefore, with the described computer aided multi-data dismount model the researcher will be able to attach signatures to any desired

  8. Computational and Organotypic Modeling of Microcephaly ...

    Science.gov (United States)

    Microcephaly is associated with reduced cortical surface area and ventricular dilations. Many genetic and environmental factors precipitate this malformation, including prenatal alcohol exposure and maternal Zika infection. This complexity motivates the engineering of computational and experimental models to probe the underlying molecular targets, cellular consequences, and biological processes. We describe an Adverse Outcome Pathway (AOP) framework for microcephaly derived from literature on all gene-, chemical-, or viral effects and brain development. Overlap with NTDs is likely, although the AOP connections identified here focused on microcephaly as the adverse outcome. A query of the Mammalian Phenotype Browser database for 'microcephaly' (MP:0000433) returned 85 gene associations; several function in microtubule assembly and the centrosome cycle regulated by microcephalin (MCPH1), a gene for primary microcephaly in humans. The developing ventricular zone is the likely target. In this zone, neuroprogenitor cells (NPCs) self-replicate during the 1st trimester, setting brain size, followed by neural differentiation of the neocortex. Recent studies with human NPCs confirmed infectivity with Zika virions invoking critical cell loss (apoptosis) of precursor NPCs; similar findings have been shown with fetal alcohol or methylmercury exposure in rodent studies, leading to mathematical models of NPC dynamics in size determination of the ventricular zone. A key event

  9. Computer modeling of the Cabriolet Event

    International Nuclear Information System (INIS)

    Kamegai, M.

    1979-01-01

    Computer modeling techniques are described for calculating the results of underground nuclear explosions at depths shallow enough to produce cratering. The techniques are applied to the Cabriolet Event, a well-documented nuclear excavation experiment, and the calculations give good agreement with the experimental results. It is concluded that, given data obtainable by outside observers, these modeling techniques are capable of verifying the yield and depth of underground nuclear cratering explosions, and that they could thus be useful in monitoring another country's compliance with treaty agreements on nuclear testing limitations. Several important facts emerge from the study: (1) seismic energy is produced by only a fraction of the nuclear yield, a fraction depending strongly on the depth of shot and the mechanical properties of the surrounding rock; (2) temperature of the vented gas can be predicted accurately only if good equations of state are available for the rock in the detonation zone; and (3) temperature of the vented gas is strongly dependent on the cooling effect, before venting, of mixing with melted rock in the expanding cavity and, to a lesser extent, on the cooling effect of water in the rock

  10. A physically-based analytical model to describe effective excess charge for streaming potential generation in saturated porous media

    Science.gov (United States)

    Jougnot, D.; Guarracino, L.

    2016-12-01

    The self-potential (SP) method is considered by most researchers the only geophysical method that is directly sensitive to groundwater flow. One source of SP signals, the so-called streaming potential, results from the presence of an electrical double layer at the mineral-pore water interface. When water flows through the pore space, it gives rise to a streaming current and a resulting measurable electrical voltage. Different approaches have been proposed to predict streaming potentials in porous media. One approach is based on the excess charge which is effectively dragged in the medium by the water flow. Following a recent theoretical framework, we developed a physically-based analytical model to predict the effective excess charge in saturated porous media. In this study, the porous medium is described by a bundle of capillary tubes with a fractal pore-size distribution. First, an analytical relationship is derived to determine the effective excess charge for a single capillary tube as a function of the pore water salinity. Then, this relationship is used to obtain both exact and approximated expressions for the effective excess charge at the Representative Elementary Volume (REV) scale. The resulting analytical relationship allows the determination of the effective excess charge as a function of pore water salinity, fractal dimension and hydraulic parameters like porosity and permeability, which are also obtained at the REV scale. This new model has been successfully tested against data from the literature from different sources. One of the main findings of this study is that it provides a mechanistic explanation for the empirical dependence between the effective excess charge and the permeability that has been found by various researchers. The proposed petrophysical relationship also contributes to understanding the role of porosity and water salinity in the effective excess charge and will help to push further the use of streaming potentials to monitor groundwater flow.
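
    In the effective-excess-charge formulation referred to above, the streaming current density is commonly written (standard in this literature, not quoted from the record) as

        j_s = \hat{Q}_v\, u

    where \hat{Q}_v is the effective excess charge per unit pore volume and u the Darcy velocity, which is what links a prediction of \hat{Q}_v from salinity and hydraulic parameters directly to the measurable SP signal.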

  11. Computational Modeling of Culture's Consequences

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, T.

    2010-01-01

    This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture,

  12. Computational aspects of premixing modelling

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, D.F. [Sydney Univ., NSW (Australia). Dept. of Chemical Engineering; Witt, P.J.

    1998-01-01

    In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations, which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests, a MAGICO test and to investigate various sensitivities of the solution. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)

  13. Mechanisms for improving mass transfer in food with ultrasound technology: Describing the phenomena in two model cases.

    Science.gov (United States)

    Miano, Alberto Claudio; Ibarz, Albert; Augusto, Pedro Esteves Duarte

    2016-03-01

    The aim of this work was to demonstrate how ultrasound mechanisms (direct and indirect effects) improve the mass transfer phenomena in food processing, and in which part of the process they are most effective. Two model cases were evaluated: the hydration of sorghum grain (with two water activities) and the influx of a pigment into melon cylinders. Different treatments enabled us to evaluate and discriminate both the direct (inertial flow and "sponge effect") and indirect effects (micro channel formation), alternating pre-treatments and treatments using an ultrasonic bath (20 kHz frequency and 28 W/L volumetric power) and a traditional water-bath. It was demonstrated that both effects of ultrasound technology are more effective in food with higher water activity, with micro channels forming only in moist food. Moreover, micro channel formation could also be observed using agar gel cylinders, verifying their random formation due to cavitation. The direct effects were shown to be important in mass transfer enhancement not only in moist food but also in dry food, being aided by the micro channels formed and the porosity of the food. In conclusion, the improvements in mass transfer due to direct and indirect effects were discriminated and described for the first time. It was proven that both phenomena are important for mass transfer in moist foods, while only the direct effects are important for dry foods. Based on these results, better processing using ultrasound technology can be obtained. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Computational modeling and engineering in pediatric and congenital heart disease.

    Science.gov (United States)

    Marsden, Alison L; Feinstein, Jeffrey A

    2015-10-01

    Recent methodological advances in computational simulations are enabling increasingly realistic simulations of hemodynamics and physiology, driving increased clinical utility. We review recent developments in the use of computational simulations in pediatric and congenital heart disease, describe the clinical impact of modeling in single-ventricle patients, and provide an overview of emerging areas. Multiscale modeling combining patient-specific hemodynamics with reduced order (i.e., mathematically and computationally simplified) circulatory models has become the de facto standard for modeling local hemodynamics and 'global' circulatory physiology. We review recent advances that have enabled faster solutions, discuss new methods (e.g., fluid-structure interaction and uncertainty quantification), which lend realism both computationally and clinically to results, highlight novel computationally derived surgical methods for single-ventricle patients, and discuss areas in which modeling has begun to exert its influence including Kawasaki disease, fetal circulation, tetralogy of Fallot (and the pulmonary tree), and circulatory support. Computational modeling is emerging as a crucial tool for clinical decision-making and evaluation of novel surgical methods and interventions in pediatric cardiology and beyond. Continued development of modeling methods, with an eye towards clinical needs, will enable clinical adoption in a wide range of pediatric and congenital heart diseases.

  15. Computer Modeling of Direct Metal Laser Sintering

    Science.gov (United States)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS, to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.

  16. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    The paper analyses the Minority Game and focuses on analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides the embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.
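
    The game underlying all these variants is easy to state and simulate. The following Python sketch implements the canonical Minority Game rather than the UAREI formalism itself; all parameter values are arbitrary.

        import random

        N, MEMORY, STRATEGIES, ROUNDS = 101, 3, 2, 200  # N odd ensures a strict minority

        def random_strategy():
            # one response (0/1) for each of the 2**MEMORY possible histories
            return [random.randint(0, 1) for _ in range(2 ** MEMORY)]

        agents = [{"strategies": [random_strategy() for _ in range(STRATEGIES)],
                   "scores": [0] * STRATEGIES} for _ in range(N)]
        history = random.getrandbits(MEMORY)  # recent outcomes encoded as an integer
        attendance = []

        for _ in range(ROUNDS):
            choices = []
            for a in agents:
                best = max(range(STRATEGIES), key=lambda s: a["scores"][s])
                choices.append(a["strategies"][best][history])
            minority = 0 if sum(choices) > N / 2 else 1
            attendance.append(sum(choices))
            for a in agents:  # reward every strategy that would have chosen the minority side
                for s in range(STRATEGIES):
                    if a["strategies"][s][history] == minority:
                        a["scores"][s] += 1
            history = ((history << 1) | minority) % (2 ** MEMORY)

        print("mean attendance:", sum(attendance) / ROUNDS)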

  17. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    In recent years, we have seen a significant number of new technological ideas appearing in the literature discussing the future of education. For example, e-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in the educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution's physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a "social cloud", which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  18. Computational modeling of epiphany learning.

    Science.gov (United States)

    Chen, Wei James; Krajbich, Ian

    2017-05-02

    Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
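
    A toy version of such a latent evidence accumulation process, assuming a simple drift-diffusion-style rule with invented parameters (the actual EL model specification is richer), is sketched below.

        import random

        drift, noise, threshold = 0.15, 1.0, 5.0  # all values invented
        evidence, trial = 0.0, 0

        # noisy evidence for the optimal strategy accumulates across trials;
        # the agent "commits" (has an epiphany) once a threshold is crossed
        while abs(evidence) < threshold and trial < 500:
            evidence += drift + random.gauss(0.0, noise)
            trial += 1

        print(f"committed after {trial} trials, optimal={evidence >= threshold}")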

  19. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  20. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a commented simple calculation is presented

  1. Computer modelling as a tool for understanding language evolution

    NARCIS (Netherlands)

    de Boer, Bart; Gontier, N; VanBendegem, JP; Aerts, D

    2006-01-01

    This paper describes the uses of computer models in studying the evolution of language. Language is a complex dynamic system that can be studied at the level of the individual and at the level of the population. Much of the dynamics of language evolution and language change occur because of the

  2. Computer model for noise in the dc Squid

    International Nuclear Information System (INIS)

    Tesche, C.D.; Clarke, J.

    1976-08-01

    A computer model for the dc SQUID is described which predicts signal and noise as a function of various SQUID parameters. Differential equations for the voltage across the SQUID including the Johnson noise in the shunted junctions are integrated stepwise in time

  3. Validation of Computer Models for Homeland Security Purposes

    International Nuclear Information System (INIS)

    Schweppe, John E.; Ely, James; Kouzes, Richard T.; McConn, Ronald J.; Pagh, Richard T.; Robinson, Sean M.; Siciliano, Edward R.; Borgardt, James D.; Bender, Sarah E.; Earnhart, Alison H.

    2005-01-01

    At Pacific Northwest National Laboratory, we are developing computer models of radiation portal monitors for screening vehicles and cargo. Detailed models of the radiation detection equipment, vehicles, cargo containers, cargos, and radioactive sources have been created. These are used to determine the optimal configuration of detectors and the best alarm algorithms for the detection of items of interest while minimizing nuisance alarms due to the presence of legitimate radioactive material in the commerce stream. Most of the modeling is done with the Monte Carlo code MCNP to describe the transport of gammas and neutrons from extended sources through large, irregularly shaped absorbers to large detectors. A fundamental prerequisite is the validation of the computational models against field measurements. We describe the first step of this validation process, the comparison of the models to measurements with bare static sources

  4. One-dimensional Fermi accelerator model with moving wall described by a nonlinear van der Pol oscillator.

    Science.gov (United States)

    Botari, Tiago; Leonel, Edson D

    2013-01-01

    A modification of the one-dimensional Fermi accelerator model is considered in this work. The dynamics of a classical particle of mass m, confined to bounce elastically between two rigid walls, one of which is described by a nonlinear van der Pol type oscillator while the other is fixed and works as a reinjection mechanism of the particle for the next collision, is described by the use of a two-dimensional nonlinear mapping. Two cases are considered: (i) the situation where the particle has negligible mass compared to the mass of the moving wall and does not affect its motion; and (ii) the case where collisions of the particle do affect the movement of the moving wall. For case (i) the phase space is of mixed type, leading us to observe a scaling of the average velocity as a function of the parameter (χ) controlling the nonlinearity of the moving wall. For large χ, diffusion in the velocity is observed, leading to the conclusion that Fermi acceleration is taking place. On the other hand, for case (ii), the motion of the moving wall is affected by collisions with the particle. However, due to the properties of the van der Pol oscillator, the moving wall relaxes again to a limit cycle. Such motion absorbs part of the energy of the particle, leading to a suppression of the unlimited energy gain observed in case (i). The phase space shows a set of attractors of different periods whose basins of attraction have a complicated organization.
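
    The moving-wall dynamics can be reproduced numerically. A minimal sketch, integrating the van der Pol equation x'' - χ(1 - x²)x' + x = 0 with scipy (χ and initial conditions chosen arbitrarily):

        import numpy as np
        from scipy.integrate import solve_ivp

        chi = 2.0  # nonlinearity parameter of the moving wall

        def van_der_pol(t, y):
            x, v = y
            return [v, chi * (1 - x ** 2) * v - x]

        sol = solve_ivp(van_der_pol, (0.0, 50.0), [0.5, 0.0], max_step=0.01)
        print(sol.y[0][-5:])  # the trajectory settles onto the limit cycle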

  5. Computational models of airway branching morphogenesis.

    Science.gov (United States)

    Varner, Victor D; Nelson, Celeste M

    2017-07-01

    The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Computational multiscale modeling of intergranular cracking

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2011-01-01

    A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach is demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.

  7. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range off software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  8. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  9. Generating Computational Models for Serious Gaming

    NARCIS (Netherlands)

    Westera, Wim

    2018-01-01

    Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of

  10. Geometric and computer-aided spline hob modeling

    Science.gov (United States)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, YU A.

    2018-03-01

    The paper considers the acquisition of a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion parameters of the machine tool system for cutting edge positioning and orientation. The computer-aided study is performed with the use of CAD and on the basis of 3D modeling methods. Vector representation of cutting edge geometry is accepted as the principal method of spline hob mathematical model development. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of a spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application presents the opportunity to use the acquired model for milling process imitation. An example of evaluation, analytical representation and computer modeling of the proposed geometric model is reviewed. In the mentioned example, a calculation of key spline hob parameters assuring the capability of hobbing a spline shaft of standard design is performed. The polygonal and solid spline hob 3D models are acquired by the use of imitational computer modeling.

  11. Security Management Model in Cloud Computing Environment

    OpenAIRE

    Ahmadpanah, Seyed Hossein

    2016-01-01

    In the cloud computing environment, the number of cloud virtual machines (VMs) keeps growing, so VM security and management face giant challenges. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic-deployment-based VM security management model covering state migration and scheduling, and studies the virtual machine security architecture, based on AHP (Analytic Hierarchy Process) virtual machine de...

  12. Ewe: a computer model for ultrasonic inspection

    International Nuclear Information System (INIS)

    Douglas, S.R.; Chaplin, K.R.

    1991-11-01

    The computer program EWE simulates the propagation of elastic waves in solids and liquids. It has been applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues

  13. Basic definitions for discrete modeling of computer worms epidemics

    Directory of Open Access Journals (Sweden)

    Pedro Guevara López

    2015-01-01

    The information technologies have evolved in such a way that communication between computers or hosts has become common, so much so that the worldwide organization (governments and corporations) depends on it; what could happen if these computers stopped working for a long time would be catastrophic. Unfortunately, networks are attacked by malware such as viruses and worms that could collapse the system. This has served as motivation for the formal study of computer worms and epidemics to develop strategies for prevention and protection; this is why in this paper, before analyzing epidemiological models, a set of formal definitions based on set theory and functions is proposed for describing 21 concepts used in the study of worms. These definitions provide a basis for future qualitative research on the behavior of computer worms, and quantitative research for the study of their epidemiological models.

  14. Quantum Vertex Model for Reversible Classical Computing

    Science.gov (United States)

    Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng

    We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic with one direction corresponding to computational time, and with transverse boundaries storing the computation's input and output. The model displays no finite temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.

  15. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
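
    As a flavor of the simplest of these approaches, the sketch below Monte Carlo-simulates an emergency department as a single-server M/M/1 queue; the rates are invented, and a realistic ED model would add triage classes, multiple servers and time-varying arrivals.

        import random

        def simulate_ed(arrival_rate=4.0, service_rate=5.0, n_patients=100000, seed=1):
            # exponential interarrival and treatment times (rates per hour)
            random.seed(seed)
            clock = server_free_at = total_wait = 0.0
            for _ in range(n_patients):
                clock += random.expovariate(arrival_rate)   # next arrival
                start = max(clock, server_free_at)          # wait if the server is busy
                total_wait += start - clock
                server_free_at = start + random.expovariate(service_rate)
            return total_wait / n_patients

        # theory for M/M/1: Wq = rho / (mu - lambda) = 0.8 / 1.0 = 0.8 h
        print(f"mean wait: {simulate_ed():.3f} h")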

  16. Empirical model based on Weibull distribution describing the destruction kinetics of natural microbiota in pineapple (Ananas comosus L.) puree during high-pressure processing.

    Science.gov (United States)

    Chakraborty, Snehasis; Rao, Pavuluri Srinivasa; Mishra, Hari Niwas

    2015-10-15

    High pressure inactivation of natural microbiota viz. aerobic mesophiles (AM), psychrotrophs (PC), yeasts and molds (YM), total coliforms (TC) and lactic acid bacteria (LAB) in pineapple puree was studied within the experimental domain of 0.1-600 MPa and 30-50 °C with a treatment time of up to 20 min. A complete destruction of yeasts and molds was obtained at 500 MPa/50 °C/15 min, whereas no counts were detected for TC and LAB at 300 MPa/30 °C/15 min. A maximum of two log cycle reductions was obtained for YM during pulse pressurization at the severe process intensity of 600 MPa/50 °C/20 min. The Weibull model clearly described the non-linearity of the survival curves during the isobaric period. The tailing effect, as confirmed by the shape parameter (β) of the survival curve, was obtained in the case of YM (β < 1), whereas a shoulder (β > 1) was observed for the other microbial groups. Analogous to thermal death kinetics, the activation energy (Ea, kJ·mol(-1)) and the activation volume (Va, mL·mol(-1)) values were computed further to describe the temperature and pressure dependencies of the scale parameter (δ, min), respectively. A higher δ value was obtained for each microbe at a lower temperature, and it decreased with an increase in pressure. A secondary kinetic model was developed describing the inactivation rate (k, min(-1)) as a function of pressure (P, MPa) and temperature (T, K), including the dependencies of Ea and Va on P and T, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
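
    The primary model referred to above is the Weibull survival function, log10(N/N0) = -(t/δ)^β, with scale δ (min) and shape β. A minimal fitting sketch with scipy, on invented survival data:

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_log_survival(t, delta, beta):
            # log10(N/N0) = -(t / delta)**beta
            return -((t / delta) ** beta)

        t = np.array([0.5, 2.0, 5.0, 10.0, 15.0, 20.0])          # treatment time, min
        log_s = np.array([-0.2, -0.8, -1.4, -1.9, -2.2, -2.4])   # log10(N/N0), fabricated

        (delta, beta), _ = curve_fit(weibull_log_survival, t, log_s,
                                     p0=(5.0, 1.0), bounds=((0.01, 0.01), (100.0, 10.0)))
        print(f"delta = {delta:.2f} min, beta = {beta:.2f}")  # beta < 1 indicates tailing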

  17. Item response theory and structural equation modelling for ordinal data: Describing the relationship between KIDSCREEN and Life-H.

    Science.gov (United States)

    Titman, Andrew C; Lancaster, Gillian A; Colver, Allan F

    2016-10-01

    Both item response theory and structural equation models are useful in the analysis of ordered categorical responses from health assessment questionnaires. We highlight the advantages and disadvantages of the item response theory and structural equation modelling approaches to modelling ordinal data from within a community health setting. Using data from the SPARCLE project focussing on children with cerebral palsy, this paper investigates the relationship between two ordinal rating scales: the KIDSCREEN, which measures quality of life, and Life-H, which measures participation. Practical issues relating to fitting models, such as non-positive definite observed or fitted correlation matrices, and approaches to assessing model fit are discussed. Item response theory models allow properties such as the conditional independence of particular domains of a measurement instrument to be assessed. When, as with the SPARCLE data, the latent traits are multidimensional, structural equation models generally provide a much more convenient modelling framework. © The Author(s) 2013.

  18. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Background: Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results: The workshop, "ESF Exploratory Workshop on Computational Disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion: During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  19. Linear mixed-effects models to describe individual tree crown width for China-fir in Fujian Province, southeast China.

    Science.gov (United States)

    Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu

    2015-01-01

    A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. Ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
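
    A one-level random-intercept LME of this kind can be sketched with statsmodels; the covariate (diameter at breast height), the plot structure and all data below are fabricated, and the paper additionally fits variance functions and correlation structures that this sketch omits.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 300
        df = pd.DataFrame({
            "plot": rng.integers(0, 20, n),    # 20 hypothetical sample plots
            "dbh": rng.uniform(5.0, 40.0, n),  # diameter at breast height, cm
        })
        plot_effect = rng.normal(0.0, 0.4, 20)[df["plot"]]  # random intercept per plot
        df["crown_width"] = 0.8 + 0.12 * df["dbh"] + plot_effect + rng.normal(0.0, 0.3, n)

        # fixed effect of dbh, random intercept for plot
        result = smf.mixedlm("crown_width ~ dbh", df, groups=df["plot"]).fit()
        print(result.summary())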

  20. Goals and Values in School: A Model Developed for Describing, Evaluating and Changing the Social Climate of Learning Environments

    Science.gov (United States)

    Allodi, Mara Westling

    2010-01-01

    This paper defines a broad model of the psychosocial climate in educational settings. The model was developed from a general theory of learning environments, a theory of human values, and empirical studies of children's evaluations of their schools. The contents of the model are creativity, stimulation, achievement, self-efficacy,…

  1. Extension of the GroIMP modelling platform to allow easy specification of differential equations describing biological processes within plant models

    NARCIS (Netherlands)

    Hemmerling, R.; Evers, J.B.; Smolenova, K.; Buck-Sorlin, G.H.; Kurth, W.

    2013-01-01

    In simulation models of plant development, physiological processes taking place in plants are typically described in terms of ODEs (Ordinary Differential Equations). On the one hand, those processes drive the development of the plant structure and on the other hand, the developed structure again

  2. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  3. Hybrid computer modelling in plasma physics

    International Nuclear Information System (INIS)

    Hromadka, J; Ibehej, T; Hrach, R

    2016-01-01

    Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low temperature argon plasma of different pressures by means of particle and fluid computer models. We discuss the differences in results obtained by these methods and try to propose a way to improve the results of fluid models in the low pressure area. There is a possibility to employ the Chapman-Enskog method to find appropriate closure relations for the fluid equations in the case when the particle distribution function is not Maxwellian. We try to follow this way to enhance the fluid model and to use it further in a hybrid plasma model. (paper)

  4. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  5. Let Documents Talk to Each Other: A Computer Model for Connection of Short Documents.

    Science.gov (United States)

    Chen, Z.

    1993-01-01

    Discusses the integration of scientific texts through the connection of documents and describes a computer model that can connect short documents. Information retrieval and artificial intelligence are discussed; a prototype system of the model is explained; and the model is compared to other computer models. (17 references) (LRW)

  6. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, PhD students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.

  7. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical interval for computer replacement when the computing demand and the cost and performance of computers are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented only by a central processing unit (CPU) and all the computing demand is to be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is admitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all the demand. Model 4 is the same as model 3, but excess demand is admitted and may be processed in another center. (1) The computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems, etc., are also described. (author)
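
    The flavor of model 1 can be captured in a few lines: choose the replacement interval that minimizes the average annual cost when the machine bought at each replacement must cover the monotonically growing demand until the next replacement. All figures below are invented for illustration.

        def annual_cost(T, demand0=100.0, growth=1.2, fixed_cost=500.0, unit_price=2.0):
            # capacity must match the demand reached at the end of the interval T (years)
            capacity = demand0 * growth ** T
            return (fixed_cost + unit_price * capacity) / T

        best = min(range(1, 11), key=annual_cost)
        print(f"optimal interval: {best} years, cost {annual_cost(best):.1f}/year")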

  8. Challenges involved in the development of models describing the running hot of engines; Herausforderungen bei der Entwicklung von Motorwarmlaufmodellen

    Energy Technology Data Exchange (ETDEWEB)

    Unterguggenberger, Peter; Salbrechter, Sebastian; Jauk, Thomas; Wimmer, Andreas [Technische Univ. Graz (Austria). Inst. fuer Verbrennungskraftmaschinen und Thermodynamik (IVT)

    2012-11-01

    Currently, all potential must be tapped in order to reach the increasingly tighter CO2 limits for vehicles. From the variety of possible options for reducing fuel consumption, the contribution of improved heat management should not be ignored, since increased friction during warm-up results in greater fuel consumption. Engine warm-up models that calculate thermal behavior and fuel consumption are a relatively inexpensive alternative to empirical measures. In order to achieve satisfactory simulation results, exact modeling of the thermal behavior as well as of the friction conditions is necessary. This paper identifies the demands placed on the individual submodels based on the requirements for precision that thermal warm-up models must meet. Before treating the friction model, it explains the development of the heat input model in detail. In addition, it presents the test program needed to establish and validate the simulation model with the required measurement accuracy. (orig.)

  9. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and Refrigeration system performance models in these simulations tools model equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-state and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented

  10. Modeling soft factors in computer-based wargames

    Science.gov (United States)

    Alexander, Steven M.; Ross, David O.; Vinarskai, Jonathan S.; Farr, Steven D.

    2002-07-01

    Computer-based wargames have seen much improvement in recent years due to rapid increases in computing power. Because these games have been developed for the entertainment industry, most of these advances have centered on the graphics, sound, and user interfaces integrated into these wargames with less attention paid to the game's fidelity. However, for a wargame to be useful to the military, it must closely approximate as many of the elements of war as possible. Among the elements that are typically not modeled or are poorly modeled in nearly all military computer-based wargames are systematic effects, command and control, intelligence, morale, training, and other human and political factors. These aspects of war, with the possible exception of systematic effects, are individually modeled quite well in many board-based commercial wargames. The work described in this paper focuses on incorporating these elements from the board-based games into a computer-based wargame. This paper will also address the modeling and simulation of the systemic paralysis of an adversary that is implied by the concept of Effects Based Operations (EBO). Combining the fidelity of current commercial board wargames with the speed, ease of use, and advanced visualization of the computer can significantly improve the effectiveness of military decision making and education. Once in place, the process of converting board wargames concepts to computer wargames will allow the infusion of soft factors into military training and planning.

  11. A unified degree day model describes survivorship of Copitarsia corruda Pogue & Simmons (Lepidoptera: Noctuidae) at different constant temperatures

    Science.gov (United States)

    N.N. Gómez; R.C. Venette; J.R. Gould; D.F. Winograd

    2009-01-01

    Predictions of survivorship are critical to quantify the probability of establishment by an alien invasive species, but survival curves rarely distinguish between the effects of temperature on development versus senescence. We report chronological and physiological age-based survival curves for a potentially invasive noctuid, recently described as Copitarsia...

  12. ANS main control complex three-dimensional computer model development

    International Nuclear Information System (INIS)

    Cleaves, J.E.; Fletcher, W.M.

    1993-01-01

    A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use

  13. Radiation enhanced conduction in insulators: computer modelling

    International Nuclear Information System (INIS)

    Fisher, A.J.

    1986-10-01

    The report describes the implementation of the Klaffky-Rose-Goland-Dienes model [Phys. Rev. B 21, 3610 (1980)] of radiation-enhanced conduction and the codes used. The approach is demonstrated on the alumina data of Pells, Buckley, Hill and Murphy [AERE R.11715, 1985]. (author)

  14. Development of a mathematical model describing hydrolysis and co-fermentation of C6 and C5 sugars

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gernaey, Krist; Meyer, Anne S.

    2010-01-01

    The paper addresses simultaneous saccharification and co-fermentation (SSCF) of C6 and C5 sugars. Model construction has been carried out by combining existing mathematical models for enzymatic hydrolysis on the one hand and co-fermentation on the other hand. An inhibition of ethanol on cellulose conversion was introduced in order to increase...
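
    The structure of such a kinetic model can be conveyed with a generic Monod-type sketch of dual-sugar uptake with product (ethanol) inhibition. This is a toy stand-in for the much more detailed published model; all parameter values are invented:

    ```python
    import numpy as np

    # Illustrative SSCF-style kinetics: glucose (C6) and xylose (C5) uptake
    # with ethanol inhibition, integrated with forward Euler.
    mu_max_g, mu_max_x = 0.4, 0.2   # 1/h, max specific uptake rates (hypothetical)
    Ks_g, Ks_x = 0.5, 1.0           # g/L, half-saturation constants
    Ki_e = 50.0                     # g/L, ethanol inhibition constant
    Y = 0.45                        # g ethanol per g sugar

    G, Xy, E = 50.0, 25.0, 0.0      # g/L initial glucose, xylose, ethanol
    dt, X = 0.1, 1.0                # h, biomass held constant for brevity
    for _ in range(int(48 / dt)):
        inhib = 1.0 / (1.0 + E / Ki_e)          # ethanol inhibition factor
        rg = mu_max_g * G / (Ks_g + G) * X * inhib
        rx = mu_max_x * Xy / (Ks_x + Xy) * X * inhib
        G, Xy = max(G - rg * dt, 0.0), max(Xy - rx * dt, 0.0)
        E += Y * (rg + rx) * dt
    print(f"after 48 h: glucose={G:.1f}, xylose={Xy:.1f}, ethanol={E:.1f} g/L")
    ```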

  15. The effect of Hydrobia ulvae and microphytobenthos on cohesive sediment dynamics on an intertidal mudflat described by means of numerical modelling

    DEFF Research Database (Denmark)

    Lumborg, Ulrik; Andersen, Thorbjørn Joest; Pejrup, Morten

    2006-01-01

    been used as input to the 2D hydrodynamic numerical model MIKE 21 MT. The model was used to investigate the effect that differences in the benthic communities may have on the net deposition. The model included computation of hydrodynamics, wave fields and cohesive sediment dynamics. Based...

  16. Analytical model to describe the thermal behavior of a heat discharge system in roofs

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Gomez, V.H.; Contreras-Espinosa, J.J.; Gonzalez-Ortiz, G.; Morillon-Galvez, D.; Fernandez-Zayas, J.L. [Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico)]. E-mail: vichugo@servidor.unam.mx; jjuancon2000@yahoo.com.mx; gilberto_gonzalez25@hotmail.com; damg@pumas.iingen.unam.mx; JFernandezZ@iingen.unam.mx

    2012-01-15

    The present study proposes an analytical model that describes the thermal behavior of a heat discharge system in a roof when the surfaces that constitute it are not translucent. The model derives from a thermal balance carried out on a heat discharge system in roofs. To validate it, an experimental prototype that simulates the thermal behavior of a heat discharge system in a wall and roof was used, and the measured results were compared to those obtained with the proposed analytical model. The thermal behavior of the analytical model was found to be similar to that of the experimental prototype, with only a negligible difference between their respective outputs (the temperature difference may be caused by the convective heat transfer coefficient, for which no studies defining its behavior accurately have been found). Therefore, the proposed analytical model can be employed to simulate the thermal behavior of a heat discharge system in roofs when its surfaces are opaque.

  17. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. These proceedings contain refereed papers contributed by the participants of AMMCS-2013 after the conference. The volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  18. The Atkinson-Shiffrin model is ill-defined and does not correctly describe the Murdock free recall data

    OpenAIRE

    Tarnow, Dr. Eugen

    2009-01-01

    The Atkinson-Shiffrin (1968) model, the de facto standard model of short-term memory cited thousands of times, fits the characteristically bowed free recall curves from Murdock (1962) well. However, it is long overdue to note that it is not a theoretically convincing explanation and that it does not fit all of the experimental relationships in the Murdock data. To obtain a qualitatively correct fit of the bowing, I show that four model concepts have to work together. “Long term memory” is ...

  19. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present.
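
    One common way to induce dependence between model inputs, in the spirit of this record though not necessarily the author's own rank-correlation machinery, is a Gaussian copula: sample correlated normals, map to uniforms, then transform to arbitrary marginals. A minimal sketch with invented marginals and correlation:

    ```python
    import numpy as np
    from scipy import stats

    # Gaussian-copula sampling of two dependent inputs (illustrative only).
    rng = np.random.default_rng(1)
    rho = 0.7
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
    u = stats.norm.cdf(z)                                 # correlated uniforms
    flow = stats.lognorm(s=0.5, scale=3.0).ppf(u[:, 0])   # input 1: lognormal
    porosity = stats.beta(2, 5).ppf(u[:, 1])              # input 2: beta
    print(stats.spearmanr(flow, porosity)[0])             # rank correlation ~0.68
    ```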

  20. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  1. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
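
    Cell Collective models are logical (Boolean) network models. A minimal sketch of the kind of model students might "build and break", using an invented three-node regulatory circuit with synchronous updates, is:

    ```python
    # Toy Boolean network (illustrative; the regulatory logic is invented).
    rules = {
        "signal":   lambda s: s["signal"],                      # constant input
        "receptor": lambda s: s["signal"],                      # activated by signal
        "gene":     lambda s: s["receptor"] and not s["gene"],  # negative feedback
    }

    state = {"signal": True, "receptor": False, "gene": False}
    for step in range(6):
        # synchronous update: every node is evaluated on the previous state
        state = {node: rule(state) for node, rule in rules.items()}
        print(step, state)
    ```

    Running it shows the gene node oscillating under the negative feedback, the sort of dynamic behavior students can probe by editing rules.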

  2. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  3. Study of market model describing the contrary behaviors of informed and uninformed agents: Being minority and being majority

    Science.gov (United States)

    Zhang, Yu-Xia; Liao, Hao; Medo, Matus; Shang, Ming-Sheng; Yeung, Chi Ho

    2016-05-01

    In this paper we analyze the contrary behaviors of informed and uninformed investors, and then construct a competition model with two groups of agents: those who intend to stay in the minority and those who intend to stay in the majority. We find two kinds of competition, inter-group and intra-group. The model exhibits a periodic fluctuation feature. The average distribution of strategies shows a prominent central peak, which is relevant to the peaked, fat-tailed character of the price-change distribution in stock markets. Furthermore, in the modified model a tolerance-time parameter makes the agents diversified. Finally, we compare the strategy distribution with the price-change distribution in a real stock market, and we conclude that contrary behavior rules and the tolerance-time parameter are indeed valid in the description of the market model.
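
    The two-population setup can be sketched as a toy simulation in which unsatisfied agents switch sides each round. The population sizes and switching rule below are invented for illustration, but they do reproduce the kind of periodic fluctuation the abstract mentions:

    ```python
    import random

    # Toy minority/majority game: "minority" agents are satisfied on the
    # less-crowded side, "majority" agents on the more-crowded side;
    # unsatisfied agents switch sides (all settings are hypothetical).
    random.seed(0)
    N_min, N_maj = 100, 100
    side = [random.choice((0, 1)) for _ in range(N_min + N_maj)]

    for t in range(10):
        count1 = sum(side)
        minority_side = 0 if count1 > (len(side) - count1) else 1
        for i in range(len(side)):
            wants_minority = i < N_min
            on_minority = side[i] == minority_side
            if wants_minority != on_minority:   # unsatisfied -> switch
                side[i] = 1 - side[i]
        print(t, "agents on side 1:", sum(side))
    ```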

  4. Computer models for fading channels with applications to digital transmission

    Science.gov (United States)

    Loo, Chun; Secord, Norman

    1991-11-01

    The authors describe computer models for Rayleigh, Rician, log-normal, and land-mobile-satellite fading channels. All computer models for the fading channels are based on the manipulation of a white Gaussian random process. This process is approximated by a sum of sinusoids with random phase angles. These models compare very well with analytical models in terms of the probability distribution of envelope and phase of the fading signal. For the land-mobile-satellite fading channel, results for level crossing rate and average fade duration are given. These results show that the computer models can provide a good coarse estimate of the time statistics of the faded signal. Also, for the land-mobile-satellite fading channel, the results show that a 3-pole Butterworth shaping filter should be used with the model. An example of the application of the land-mobile-satellite fading-channel model to predict the performance of a differential phase-shift keying signal is described.
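
    The sum-of-sinusoids construction described here is straightforward to sketch. The variant below sums complex exponentials with random arrival angles and phases, so the envelope is approximately Rayleigh; the Doppler frequency, number of sinusoids, and sampling rate are illustrative, not the paper's values:

    ```python
    import numpy as np

    # Sum-of-sinusoids (Jakes-style) Rayleigh fading sketch.
    rng = np.random.default_rng(0)
    N, fd, fs, T = 32, 100.0, 10_000.0, 0.1   # sinusoids, Doppler Hz, sample Hz, s
    t = np.arange(0, T, 1 / fs)
    theta = rng.uniform(0, 2 * np.pi, N)       # random arrival angles
    phi = rng.uniform(0, 2 * np.pi, N)         # random phases
    g = np.exp(1j * (2 * np.pi * fd * np.cos(theta)[:, None] * t + phi[:, None]))
    fading = g.sum(axis=0) / np.sqrt(N)        # ~complex Gaussian => Rayleigh envelope
    print("mean envelope power:", np.mean(np.abs(fading) ** 2))  # ~1.0
    ```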

  5. Computer Modelling of Photochemical Smog Formation

    Science.gov (United States)

    Huebert, Barry J.

    1974-01-01

    Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)

  6. A Computational Model of Fraction Arithmetic

    Science.gov (United States)

    Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…

  7. Model Checking - Automated Verification of Computational Systems

    Indian Academy of Sciences (India)

    Mukund, Madhavan. General Article, Resonance – Journal of Science Education, Volume 14, Issue 7, July 2009, pp. 667-681.

  8. Computer Modeling of Platinum Reforming Reactors | Momoh ...

    African Journals Online (AJOL)

    Rather than using a purely theoretical approach, this paper considers a computer model as a means of assessing the reformate composition for three-stage fixed-bed reactors in a platforming unit. This is done by identifying the many possible hydrocarbon transformation reactions that are peculiar to the process unit, identifying the ...

  9. Particle modeling of plasmas computational plasma physics

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1991-01-01

    Recently, through the development of supercomputers, a powerful new method for exploring plasmas has emerged: computer modeling of plasmas. Such modeling can duplicate many of the complex processes that go on in a plasma and allow scientists to understand what the important processes are. It helps scientists gain an intuition about this complex state of matter. It allows scientists and engineers to explore new ideas on how to use plasma before building costly experiments; it allows them to determine if they are on the right track. It can duplicate the operation of devices and thus reduce the need to build complex and expensive devices for research and development. This is an exciting new endeavor that is in its infancy, but which can play an important role in the scientific and technological competitiveness of the US. There is a wide range of plasma models in use: particle models, fluid models, and hybrid particle-fluid models. These come in many forms, such as explicit models, implicit models, reduced-dimensional models, electrostatic models, magnetostatic models, electromagnetic models, and an almost endless variety of others. Here the author discusses only particle models and gives a few examples of their use; these are taken from work done by the Plasma Modeling Group at UCLA, with which he is most familiar. However, this gives only a small view of the wide range of work being done around the US, or for that matter around the world.
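
    The flavor of an electrostatic particle model can be conveyed by a minimal 1-D particle-in-cell sketch in normalized units (uniform neutralizing ion background, FFT Poisson solve, leapfrog push); all parameters are illustrative and the scheme is deliberately bare-bones:

    ```python
    import numpy as np

    # Minimal 1-D electrostatic PIC sketch (normalized units, plasma freq = 1).
    rng = np.random.default_rng(0)
    L, Ng, Np, dt, steps = 2 * np.pi, 64, 10_000, 0.1, 100
    dx = L / Ng
    x = rng.uniform(0, L, Np)
    v = 0.1 * np.sin(x)              # small velocity perturbation
    q = -L / Np                      # electron charge so mean density = -1

    k = 2 * np.pi * np.fft.fftfreq(Ng, d=dx)
    k[0] = 1.0                       # placeholder; mean field is zeroed below

    for _ in range(steps):
        # charge deposit (nearest grid point) + neutralizing ion background
        idx = (x / dx).astype(int) % Ng
        rho = np.bincount(idx, minlength=Ng) * q / dx + 1.0
        # Poisson/Gauss solve in Fourier space: ik * E_k = rho_k
        E_k = np.fft.fft(rho) / (1j * k)
        E_k[0] = 0.0
        E = np.real(np.fft.ifft(E_k))
        # leapfrog push (electron charge/mass = -1 in these units)
        v += -E[idx] * dt
        x = (x + v * dt) % L
    print("field energy:", 0.5 * np.sum(E**2) * dx)
    ```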

  10. The inverse Gamma process: A family of continuous stochastic models for describing state-dependent deterioration phenomena

    International Nuclear Information System (INIS)

    Guida, M.; Pulcini, G.

    2013-01-01

    This paper proposes the family of non-stationary inverse Gamma processes for modeling state-dependent deterioration processes with nonlinear trend. The proposed family of processes, which is based on the assumption that the “inverse” time process is Gamma, is mathematically more tractable than previously proposed state-dependent processes, because, unlike the previous models, the inverse Gamma process is a time-continuous and state-continuous model and does not require discretization of time and state. The conditional distribution of the deterioration growth over a generic time interval, the conditional distribution of the residual life, and the residual reliability of the unit, given the current state, are provided. Point and interval estimation of the parameters that index the proposed process, as well as of several quantities of interest, is also discussed. Finally, the proposed model is applied to the wear process of the liners of some Diesel engines, which was previously analyzed and shown to be a purely state-dependent process. The comparison of the inferential results obtained under the competing models shows the ability of the inverse Gamma process to adequately model the observed state-dependent wear process.
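
    Assuming the construction named in the abstract, namely that the time T(w) needed to reach deterioration level w has independent Gamma-distributed increments, a deterioration-versus-time path can be simulated by sampling T(w) on a level grid and inverting it. The shape and scale values below are invented, not the paper's:

    ```python
    import numpy as np

    # Hedged sketch of the "inverse" construction: Gamma time-increments per
    # deterioration step, then inversion to get deterioration vs. time.
    rng = np.random.default_rng(0)
    dw, levels = 0.01, 500
    w = np.arange(1, levels + 1) * dw
    dT = rng.gamma(shape=2.0 * dw, scale=5.0, size=levels)  # hypothetical params
    T = np.cumsum(dT)                  # T(w): nondecreasing time vs. level

    def deterioration_at(t):
        """W(t) = largest level reached by time t (inversion of T(w))."""
        return w[np.searchsorted(T, t, side="right") - 1] if t >= T[0] else 0.0

    for t in (1.0, 5.0, 20.0):
        print(t, deterioration_at(t))
    ```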

  11. A model for describing the eutrophication in a heavily regulated coastal lagoon. Application to the Albufera of Valencia (Spain).

    Science.gov (United States)

    del Barrio Fernández, Pilar; Gómez, Andrés García; Alba, Javier García; Díaz, César Álvarez; Revilla Cortezón, José Antonio

    2012-12-15

    A simplified two-dimensional eutrophication model was developed to simulate temporal and spatial variations of chlorophyll-a in heavily regulated coastal lagoons. This model considers the hydrodynamics of the whole study area, the regulated connection of the lagoon with the sea, the variability of the input and output nutrient loads, the flux from the sediments to the water column, the phytoplankton growth and mortality kinetics, and the zooplankton grazing. The model was calibrated and validated by applying it to the Albufera of Valencia, a hypertrophic system whose connection to the sea is strongly regulated by a system of sluice-gates. The calibration and validation results presented a significant agreement between the model and the data obtained in several surveys. The accuracy was evaluated using a quantitative analysis, in which the average uncertainty of the model prediction was less than 6%. The results confirmed an expected phytoplankton bloom in April and October, with mean maximum values around 250 μg l⁻¹ of chlorophyll-a. A mass balance revealed that the eutrophication process is magnified by the input loads of nutrients, mainly from the sediments, as well as by the limited connection of the lagoon with the sea. This study has shown that the developed model is an efficient tool to manage the eutrophication problem in heavily regulated coastal lagoons.

  12. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID: 27046845

  13. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science, outstanding results are yielded by advanced simulation methods, based on state-of-the-art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extraction from big data, better understanding and prediction of social behaviour, and modelling of health and environmental changes.

  14. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ ...

  15. A Physically Based Analytical Model to Describe Effective Excess Charge for Streaming Potential Generation in Water Saturated Porous Media

    Science.gov (United States)

    Guarracino, L.; Jougnot, D.

    2018-01-01

    Among the different contributions generating self-potential, the streaming potential is of particular interest in hydrogeology for its sensitivity to water flow. Estimating water flux in porous media using streaming potential data relies on our capacity to understand, model, and upscale the electrokinetic coupling at the mineral-solution interface. Different approaches have been proposed to predict streaming potential generation in porous media. One of these approaches is flux averaging, which is based on determining the excess charge that is effectively dragged through the medium by the water flow. In this study, we develop a physically based analytical model to predict the effective excess charge in saturated porous media using a flux-averaging approach in a bundle of capillary tubes with a fractal pore-size distribution. The proposed model allows the determination of the effective excess charge as a function of pore water ionic concentration and hydrogeological parameters like porosity, permeability, and tortuosity. The new model has been successfully tested against different sets of experimental data from the literature. One of the main findings of this study is a mechanistic explanation of the empirical dependence between the effective excess charge and the permeability that has been found by several researchers. The proposed model also highlights the link to other lithological properties, and it is able to reproduce the evolution of the effective excess charge with electrolyte concentration.

  16. Amicus Plato, sed magis amica veritas: plots must obey the laws they refer to and models shall describe biophysical reality!

    Science.gov (United States)

    Katkov, Igor I

    2011-06-01

    In the companion paper, we discussed in detail proper linearization, calculation of the inactive osmotic volume, and analysis of the results on Boyle-van't Hoff plots. In this Letter, we briefly address some common errors and misconceptions in osmotic modeling and propose some approaches, namely: (1) the inapplicability of the Kedem-Katchalsky formalism to the cryobiophysical reality, (2) the calculation of the membrane hydraulic conductivity L(p) in the presence of permeable solutes, (3) proper linearization of the Arrhenius plots for the solute membrane permeability, (4) the erroneous use of the term "toxicity" for cryoprotective agents, and (5) the advantages of the relativistic permeability (RP) approach developed by us vs. the traditional ("classic") 2-parameter model.

  17. On one peculiarity of the model describing the interaction of the electron beam with the semiconductor surface

    Science.gov (United States)

    Stepovich, M. A.; Amrastanov, A. N.; Seregina, E. V.; Filippov, M. N.

    2018-01-01

    The problem of heat distribution in semiconductor materials irradiated with sharply focused electron beams, in the absence of heat exchange between the target and the external medium, is considered by mathematical modeling methods. For a quantitative description of the energy losses of probe electrons, a model based on separate descriptions of the contributions of electrons absorbed in the target and of backscattered electrons is used; it is applicable to a wide class of solids and a range of primary electron energies. Using the features of this approach, the nonmonotonic dependence of the maximum heating temperature in the target on the energy of the primary electrons is explained. Some modeling results are illustrated for semiconductor materials used in electronic engineering.

  18. A physics-based crystallographic modeling framework for describing the thermal creep behavior of Fe-Cr alloys

    Energy Technology Data Exchange (ETDEWEB)

    Wen, Wei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Capolungo, Laurent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patra, Anirban [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-02

    This Report addresses Milestone M2MS-16LA0501032 of the NEAMS Program (“Develop hardening model for FeCrAl cladding”), with a deadline of 09/30/2016. Here we report a constitutive law for thermal creep of FeCrAl. This Report adds to and complements the one for Milestone M3MS-16LA0501034 (“Interface hardening models with MOOSE-BISON”), where we presented a hardening law for irradiated FeCrAl. The last component of our polycrystal-based constitutive behavior, namely an irradiation creep model for FeCrAl, will be developed as part of the FY17 Milestones, and the three regimes will be coupled and interfaced with MOOSE-BISON.

  19. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies

  20. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
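
    GRESS generated derivatives by instrumenting FORTRAN at compile time; the underlying idea, computing derivatives alongside values rather than by finite differences, can be illustrated with forward-mode automatic differentiation via dual numbers. The model function below is an invented stand-in for a code's input-output relation:

    ```python
    # Forward-mode automatic differentiation with dual numbers (sketch).
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def model(x):                  # toy response: y = 3x^2 + 2x + 1
        return 3 * x * x + 2 * x + 1

    x = Dual(2.0, 1.0)             # seed derivative dx/dx = 1
    y = model(x)
    print(y.val, y.der)            # 17.0 and dy/dx = 6x + 2 = 14.0
    ```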

  1. Evaluation of a cross contamination model describing transfer of Salmonella spp. and Listeria monocytogenes during grinding of pork and beef

    DEFF Research Database (Denmark)

    Møller, Cleide Oliveira de Almeida; Sant'Ana, A.S.; Hansen, Solvej Katrine Holm

    2016-01-01

    A cross contamination model was challenged and evaluated applying a new approach. QMRA and Total Transfer Potential (TTP) were included. Transfer estimates were not applicable for unlike processing. The risk of disease may be reduced when using a stainless steel grinder. Well-sharpened knife, and...

  2. A Strategy for Describing the Biosphere at Candidate Sites for Repositories of Nuclear Waste: Linking Ecosystem and Landscape Modeling

    International Nuclear Information System (INIS)

    Lindborg, Tobias; Loefgren, Anders; Soederbaeck, Bjoern; Kautsky, Ulrik; Lindborg, Regina; Bradshaw, Clare

    2006-01-01

    To provide information necessary for a license application for a deep repository for spent nuclear fuel, the Swedish Nuclear Fuel and Waste Management Co. has started site investigations at two sites in Sweden. In this paper, we present a strategy to integrate site-specific ecosystem data into spatially explicit models needed for safety assessment studies and the environmental impact assessment. The site-specific description of ecosystems is developed by building discipline-specific models from primary data and by identifying interactions and stocks and flows of matter among functional units at the sites. The conceptual model is a helpful initial tool for defining properties needed to quantify system processes, which may reveal new interfaces between disciplines, providing a variety of new opportunities to enhance the understanding of the linkages between ecosystem characteristics and the functional properties of landscapes. This type of integrated ecosystem-landscape characterization model has an important role in forming the implementation of a safety assessment for a deep repository

  3. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  4. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    ... in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network, etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory... The attacker remains somehow undefined and still under extensive investigation. This Thesis explores the nature of the ubiquitous attacker with a focus on how she interacts with the physical world, and it defines a model that captures the abilities of the attacker. Furthermore, a quantitative implementation...

  5. Application of the NDHA model to describe N2O dynamics in activated sludge mixed culture biomass

    DEFF Research Database (Denmark)

    Domingo-Felez, Carlos; Smets, Barth F.

    A pseudo-mechanistic model describing three biological nitric oxide (NO) and nitrous oxide (N2O) production pathways was calibrated for an activated sludge mixed culture biomass treating municipal wastewater with laboratory-scale experiments. The model (NDHA) comprehensively describes N2O production...

  6. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. Increasing computer power, and with it improved resolution, will lead us to revise our approximations: the hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh is smaller than a few kilometers, and other models will have to be found. The expertise in numerical analysis gained at the Centre of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale

  7. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    The medical domain has become one of the most important areas of research owing to the wealth of medical information about the symptoms of diseases and the need to distinguish between them to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts settle treatment decisions. This paper introduces four hybrid Rough–Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithm and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different knowledge discovery techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose is to make enhancements in the frame of KDD processes for supervised learning using the Granular Computing methodology.

  8. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (2010-07-01 edition), Part 194, General Requirements, § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... obtain stable solutions; (iv) computer models accurately implement the numerical models; i.e., computer ...

  9. Performance of flash profile and napping with and without training for describing small sensory differences in a model wine

    DEFF Research Database (Denmark)

    Liu, Jing; Grønbeck, Marlene Schou; Di Monaco, Rosella

    2016-01-01

    In this study, different variations of two rapid sensory methods, one based on holistic assessment (Napping) and one based on attribute evaluation (Flash Profile), were tested for the evaluation of the flavour in wine. Model wines were developed with control over the sensory differences in terms of sensory... to arrange samples on the sheet) or the product (familiarisation with the sensory properties of the wines) improved the outcome compared to the classical Napping protocol. The classical Flash Profile protocol, and its modified version including a Napping with subsequent attribute generation as the word generation step and limiting the number of attributes for ranking, gave a similar sample space. The Napping method could best highlight qualitative sample differences, whereas Flash Profile provided a more precise product mapping of quantitative differences between model wines.

  10. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    OpenAIRE

    Degeling, Koen; IJzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by ...

  11. A Simple Model to Describe the Relationship among Rainfall, Groundwater and Land Subsidence under a Heterogeneous Aquifer

    Science.gov (United States)

    Zheng, Y. Y.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.

    2017-12-01

    Land subsidence is a very serious problem in the Zhuoshui River alluvial fan, Taiwan. The main cause of land subsidence is compression of the soil, and the compression measured over the wide area is very extensive (Maryam et al., 2013; Linlin et al., 2014). Chen et al. [2010] studied the linear relationship between groundwater level and subsurface altitude variations from Global Positioning System (GPS) stations in the Zhuoshui River alluvial fan, but the subsurface altitude data came from only two GPS stations, whose distribution is sparse and limited, not enough to express the altitude variations of the Zhuoshui River alluvial fan. Hung et al. [2011] used Interferometric Synthetic Aperture Radar (InSAR) to measure the surface subsidence in the Zhuoshui River alluvial fan, but did not compare it with groundwater levels. This study examines the correlation between rainfall events and groundwater level, and between groundwater level and subsurface altitude, both correlations being affected by heterogeneous soil. From these relationships, a numerical model is built to simulate the land subsidence variations and estimate the coefficient of aquifer soil compressibility. Finally, the model can estimate the long-term land subsidence. Keywords: Land Subsidence, InSAR, Groundwater Level, Numerical Model, Correlation Analyses

  12. Chemical reaction networks as a model to describe UVC- and radiolytically-induced reactions of simple compounds.

    Science.gov (United States)

    Dondi, Daniele; Merli, Daniele; Albini, Angelo; Zeffiro, Alberto; Serpone, Nick

    2012-05-01

    When a chemical system is subjected to high-energy sources (UV, ionizing radiation, plasma sparks, etc.), as is expected to be the case in prebiotic chemistry studies, a plethora of reactive intermediates can form. If oxygen is present in excess, carbon dioxide and water are the major products. More interesting is the case of reducing conditions, where synthetic pathways are also possible. This article examines the theoretical modeling of such systems with randomly generated chemical networks. Four types of randomly generated chemical networks were considered, originating from a combination of two connection topologies (viz., Poisson and scale-free) with reversible and irreversible chemical reactions. The results were analyzed taking into account the number of the most abundant products required for reaching 50% of the total number of moles of compounds at equilibrium, as this may be related to an actual problem of complex mixture analysis. The model accounts for multi-component reaction systems with no a priori knowledge of the reacting species and intermediates involved, provided the system components are sufficiently interconnected. The approach taken is relevant to an earlier study on reactions that may have occurred in prebiotic systems where only a few compounds were detected. A validation of the model was attained on the basis of results of UVC and radiolytic reactions of prebiotic mixtures of low-molecular-weight compounds likely present on the primeval Earth.
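
    The two connection topologies named above can be sketched with standard random-graph generators: Erdős–Rényi graphs have Poisson degree distributions, Barabási–Albert graphs are scale-free. The sizes below, and the chemistry layered on top of the graphs in the paper, are not taken from the article:

    ```python
    import networkx as nx

    # Compare the two "species network" topologies (illustrative sizes).
    n, seed = 200, 1
    poisson_net = nx.gnp_random_graph(n, p=4 / n, seed=seed)       # Poisson degrees
    scalefree_net = nx.barabasi_albert_graph(n, m=2, seed=seed)    # scale-free degrees

    for name, g in (("Poisson", poisson_net), ("scale-free", scalefree_net)):
        degs = [d for _, d in g.degree()]
        print(name, "max degree:", max(degs), "mean:", sum(degs) / n)
    ```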

  13. Capitalizing on Citizen Science Data for Validating Models and Generating Hypotheses Describing Meteorological Drivers of Mosquito-Borne Disease Risk

    Science.gov (United States)

    Boger, R. A.; Low, R.; Paull, S.; Anyamba, A.; Soebiyanto, R. P.

    2017-12-01

    Temperature and precipitation are important drivers of mosquito population dynamics, and a growing set of models have been proposed to characterize these relationships. Validation of these models, and development of broader theories across mosquito species and regions could nonetheless be improved by comparing observations from a global dataset of mosquito larvae with satellite-based measurements of meteorological variables. Citizen science data can be particularly useful for two such aspects of research into the meteorological drivers of mosquito populations: i) Broad-scale validation of mosquito distribution models and ii) Generation of quantitative hypotheses regarding changes to mosquito abundance and phenology across scales. The recently released GLOBE Observer Mosquito Habitat Mapper (GO-MHM) app engages citizen scientists in identifying vector taxa, mapping breeding sites and decommissioning non-natural habitats, and provides a potentially useful new tool for validating mosquito ubiquity projections based on the analysis of remotely sensed environmental data. Our early work with GO-MHM data focuses on two objectives: validating citizen science reports of Aedes aegypti distribution through comparison with accepted scientific data sources, and exploring the relationship between extreme temperature and precipitation events and subsequent observations of mosquito larvae. Ultimately the goal is to develop testable hypotheses regarding the shape and character of this relationship between mosquito species and regions.

  14. Category-theoretic models of algebraic computer systems

    Science.gov (United States)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  15. How Mathematics Describes Life

    Science.gov (United States)

    Teklu, Abraham

    2017-01-01

    The circle of life is something we have all heard of, but we don't usually try to calculate it. For some time we have been analyzing a predator-prey model to better understand how mathematics can describe life, in particular the interaction between two different species. The model we are analyzing is called the Holling-Tanner model, and it cannot be solved analytically. The Holling-Tanner model is very common in population dynamics because it is a simple descriptor of how predators and prey interact. It is a system of two differential equations, not specific to any particular pair of species, so it can describe predator-prey systems ranging from lions and zebras to white blood cells and infections. One thing all these systems have in common is critical points: values of both populations that keep both populations constant, at which the differential equations are equal to zero. For this model there are two critical points, a predator-free critical point and a coexistence critical point. Most of our analysis concerns the coexistence critical point, because the predator-free critical point is always unstable and frankly less interesting. We considered two regimes for the differential equations, large B and small B, where A, B, and C are parameters controlling the system: B measures how responsive the predators are to changes in the population, A represents predation of the prey, and C represents the satiation point of the prey population. For the large-B case we were able to approximate the system of differential equations by a single scalar equation. For the small-B case we were able to predict the limit cycle, a process in which the predator and prey populations grow and shrink periodically. This model has a limit cycle in the regime of small B, which we solved for

  16. Incorporating NDVI in a gravity model setting to describe spatio-temporal patterns of Lyme borreliosis incidence

    Science.gov (United States)

    Barrios, J. M.; Verstraeten, W. W.; Farifteh, J.; Maes, P.; Aerts, J. M.; Coppin, P.

    2012-04-01

    Lyme borreliosis (LB) is the most common tick-borne disease in Europe, and incidence growth has been reported in several European countries during the last decade. LB is caused by the bacterium Borrelia burgdorferi, and the main vector of this pathogen in Europe is the tick Ixodes ricinus. LB incidence and spatial spread are greatly dependent on environmental conditions impacting habitat, demography and trophic interactions of ticks and the wide range of organisms ticks parasitize. The landscape configuration is also a major determinant of tick habitat conditions and, very importantly, of the fashion and intensity of human interaction with vegetated areas, i.e. human exposure to the pathogen. Hence, spatial notions such as distance and adjacency between urban and vegetated environments are related to human exposure to tick bites and, thus, to risk. This work tested the adequacy of a gravity model setting to model the observed spatio-temporal pattern of LB as a function of the location and size of urban and vegetated areas and the seasonal and annual change in vegetation dynamics as expressed by MODIS NDVI. Opting for this approach implies an analogy with Newton's law of universal gravitation, in which the attraction forces between two bodies are directly proportional to the bodies' masses and inversely proportional to distance. Similar implementations have proven useful in fields like trade modeling, health care service planning, and disease mapping, among others. In our implementation, the size of human settlements and vegetated systems and the distance separating these landscape elements are considered the 'bodies', and the 'attraction' between them is an indicator of exposure to the pathogen. A novel element of this implementation is the incorporation of NDVI to account for the seasonal and annual variation in risk. The importance of incorporating this indicator of vegetation activity resides in the fact that alterations of the LB incidence pattern observed in the last decade have been ascribed
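
    A gravity-model exposure index of the kind described, with NDVI as a seasonal weight, can be sketched as follows; the functional form and all numbers are invented for illustration, not taken from the study:

    ```python
    # Toy gravity-model exposure index: "attraction" between a settlement and
    # a vegetated patch ~ product of sizes / squared distance, weighted by NDVI.
    def exposure(pop, patch_area, distance_km, ndvi):
        return pop * patch_area * ndvi / distance_km ** 2

    settlement_pop = 20_000
    patches = [  # (area km^2, distance km, seasonal NDVI) - all hypothetical
        (4.0, 2.0, 0.75),
        (12.0, 8.0, 0.60),
    ]
    total = sum(exposure(settlement_pop, a, d, v) for a, d, v in patches)
    print(f"relative exposure index: {total:.0f}")
    ```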

  17. Computational Aerodynamic Modeling of Small Quadcopter Vehicles

    Science.gov (United States)

    Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.

    2017-01-01

    High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.

  18. Computer modelling for better diagnosis and therapy of patients by cardiac resynchronisation therapy

    NARCIS (Netherlands)

    Pluijmert, Marieke; Lumens, Joost; Potse, Mark; Delhaas, Tammo; Auricchio, Angelo; Prinzen, Frits W

    2015-01-01

    Mathematical or computer models have become increasingly popular in biomedical science. Although they are a simplification of reality, computer models are able to link a multitude of processes to each other. In the fields of cardiac physiology and cardiology, models can be used to describe the

  19. Advances in engineering turbulence modeling. [computational fluid dynamics

    Science.gov (United States)

    Shih, T.-H.

    1992-01-01

    Some new developments in two-equation models and second-order closure models are presented. In this paper, modified two-equation models are proposed to remove shortcomings such as computing flows over complex geometries and the ad hoc treatment near separation and reattachment points. Calculations using various two-equation models are compared with direct numerical solutions of channel flows and flat-plate boundary layers. The development of second-order closure models is also discussed, with emphasis on the modeling of pressure-related correlation terms and dissipation rates in the second-moment equations. All existing models predict the normal stresses near the wall poorly and fail to predict the three-dimensional effect of mean flow on the turbulence. The newly developed second-order near-wall turbulence model described in this paper is capable of capturing the near-wall behavior of turbulence as well as the effect of three-dimensional mean flow on the turbulence.

  20. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online.
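
    The plain "plug-in" Monte Carlo GOF test whose bias this article corrects can be sketched for a trivial 1-D case; the nested correction itself is omitted, and the statistic, data, and settings are purely illustrative:

    ```python
    import numpy as np

    # Plug-in Monte Carlo GOF test sketch: is the point pattern homogeneous
    # Poisson on [0, 1]? Statistic: variance of sorted inter-event gaps.
    rng = np.random.default_rng(0)

    def statistic(points):
        return np.var(np.diff(np.sort(points)))

    data = rng.uniform(0, 1, 60)        # stand-in for an observed point pattern
    lam_hat = len(data)                 # plug-in intensity estimate
    obs = statistic(data)

    sims = []
    for _ in range(999):
        n = rng.poisson(lam_hat)        # simulate from the fitted model
        sims.append(statistic(rng.uniform(0, 1, n)))
    p = (1 + sum(s >= obs for s in sims)) / (1 + len(sims))
    print("Monte Carlo p-value:", p)
    ```

    As the article argues, because lam_hat is estimated from the same data being tested, this simple version is systematically miscalibrated; the nested simulation layer is what restores the nominal level.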

  1. A simple model describing the nonlinear dynamics of the dusk/dawn asymmetry in the high-latitude thermospheric flow

    Science.gov (United States)

    Gundlach, J. P.; Larsen, M. F.; Mikkelsen, I. S.

    1988-01-01

    A simple nonlinear, axisymmetric, shallow-water numerical model has been used to study the asymmetry in the neutral flow between the dusk and dawn sides of the auroral oval. The results indicate that the Coriolis force and the curvature terms are nearly in balance on the evening side and require only a small pressure gradient to effect adjustment. The result is smaller neutral velocities near dawn and larger velocities near dusk than would be the case for a linearized treatment. A consequence is that more gravity wave energy is produced on the morning side than on the evening side.

  2. A multi-physics modelling framework to describe the behaviour of nano-scale multilayer systems undergoing irradiation damage

    International Nuclear Information System (INIS)

    Villani, Aurelien

    2015-01-01

    Radiation damage is known to lead to material failure and thus is of critical importance to lifetime and safety within nuclear reactors. While the mechanical behaviour of materials under irradiation has been the subject of numerous studies, current predictive capabilities for such phenomena appear limited. The clustering of point defects such as vacancies and self-interstitial atoms gives rise to creep, void swelling and material embrittlement. Nano-scale metallic multilayer systems have been shown to have the ability to evacuate such point defects, hence delaying the occurrence of critical damage; in addition, they exhibit outstanding mechanical properties. The objective of this work is to develop a thermodynamically consistent continuum framework at the meso and nano scales which accounts for the major physical processes encountered in such metallic multilayer systems and is able to predict their microstructural evolution and behavior under irradiation. Three main physical phenomena are addressed in the present work: stress-diffusion coupling and diffusion-induced creep, void nucleation and growth in multilayer systems under irradiation, and the interaction of dislocations with the multilayer interfaces. In this framework, the microstructure is explicitly modeled in order to account accurately for its effects on the system behavior. The diffusion creep strain rate is related to the gradient of the vacancy flux. A Cahn-Hilliard approach is used to model void nucleation and growth, and the diffusion equations for vacancies and self-interstitial atoms are complemented to take into account the production of point defects due to irradiation cascades, the mutual recombination of defects and their evacuation through grain boundaries. In metallic multilayers, an interface-affected zone is defined, with an additional slip plane to model the shearable character of the interface, and in which dislocation cores are able to spread. The model is then implemented numerically

  3. Computational compliance criteria in water hammer modelling

    Directory of Open Access Journals (Sweden)

    Urbanowicz Kamil

    2017-01-01

    Full Text Available Among many numerical methods (finite: difference, element, volume, etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to own-written and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.

  4. Computational compliance criteria in water hammer modelling

    Science.gov (United States)

    Urbanowicz, Kamil

    2017-10-01

    Among many numerical methods (finite: difference, element, volume etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is most appreciated. With its help, it is possible to examine the effect of numerical discretisation carried over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to own-written and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
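
    A minimal sketch of the method of characteristics on a rectangular grid with the recommended CFL number of one, for a single frictionless reservoir-pipe-valve system, is given below. The geometry and flow values are illustrative assumptions, not taken from the study.

```python
import numpy as np

# Minimal method-of-characteristics (MOC) sketch for frictionless water hammer
# in a single pipe on a rectangular grid with Courant number CFL = a*dt/dx = 1.
# Reservoir upstream, instantaneously closed valve downstream.
a, L, N = 1200.0, 600.0, 10      # wave speed (m/s), pipe length (m), reaches
g, H0, V0 = 9.81, 50.0, 1.0      # gravity, reservoir head (m), initial velocity
dx = L / N
dt = dx / a                      # CFL = 1 exactly
B = a / g                        # characteristic impedance in head form

H = np.full(N + 1, H0)
V = np.full(N + 1, V0)
Hmax = H[-1]

for _ in range(400):             # march along the C+ and C- characteristics
    Hn, Vn = H.copy(), V.copy()
    for i in range(1, N):
        Cp = H[i - 1] + B * V[i - 1]          # C+ from the upstream node
        Cm = H[i + 1] - B * V[i + 1]          # C- from the downstream node
        Hn[i] = 0.5 * (Cp + Cm)
        Vn[i] = (Cp - Cm) / (2 * B)
    Hn[0] = H0                                 # reservoir: fixed head
    Vn[0] = V[1] + (H0 - H[1]) / B             # C- characteristic at inlet
    Vn[N] = 0.0                                # valve closed instantaneously
    Hn[N] = H[N - 1] + B * V[N - 1]            # C+ characteristic at valve
    H, V = Hn, Vn
    Hmax = max(Hmax, H[N])

print(f"Joukowsky surge H0 + a*V0/g = {H0 + a * V0 / g:.1f} m, "
      f"computed max {Hmax:.1f} m")
```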

  5. Modeling electric fields in two dimensions using computer aided design

    International Nuclear Information System (INIS)

    Gilmore, D.W.; Giovanetti, D.

    1992-01-01

    The authors describe a method for analyzing static electric fields in two dimensions using AutoCAD. The algorithm is coded in LISP and is modeled after Coulomb's law. The software platform allows for facile graphical manipulations of field renderings and supports a wide range of hardcopy-output and data-storage formats. More generally, this application demonstrates the ability of computer aided design (CAD) to analyze data that are solutions of known mathematical functions.
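
    The field-superposition step such a routine presumably performs can be sketched in a few lines (Python here rather than LISP; charge values and positions are arbitrary examples):

```python
import numpy as np

# Superposition of 2-D electric fields from point charges via Coulomb's law.
K = 8.99e9  # Coulomb constant (N*m^2/C^2)

def e_field(charges, x, y):
    """charges: list of (q, x0, y0); returns (Ex, Ey) at point (x, y)."""
    Ex = Ey = 0.0
    for q, x0, y0 in charges:
        dx, dy = x - x0, y - y0
        r2 = dx * dx + dy * dy
        r = np.sqrt(r2)
        Ex += K * q * dx / (r2 * r)   # E = K*q/r^2 along the unit vector
        Ey += K * q * dy / (r2 * r)
    return Ex, Ey

dipole = [(+1e-9, -0.05, 0.0), (-1e-9, +0.05, 0.0)]
print(e_field(dipole, 0.0, 0.1))
```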

  6. A new multiscale model to describe a modified Hall-Petch relation at different scales for nano and micro materials

    Science.gov (United States)

    Fadhil, Sadeem Abbas; Alrawi, Aoday Hashim; Azeez, Jazeel H.; Hassan, Mohsen A.

    2018-04-01

    In the present work, a multiscale model is presented and used to modify the Hall-Petch relation for different scales from nano to micro. The modified Hall-Petch relation is derived from a multiscale equation that determines the cohesive energy between the atoms and their neighboring grains. This brings with it a new term that was originally ignored even in the atomistic models. The new term makes it easy to combine all other effects to derive one modified equation for the Hall-Petch relation that works for all scales together, without the need to divide the scales into two regimes, each with a different equation, as is usually done in other works. Consequently, applying the new relation does not require prior knowledge of the grain size distribution. This makes the new derived relation more consistent and easier to apply across all scales. The new relation is used to fit data for copper and nickel, and it applies well over the whole range of grain sizes from nano to micro scales.
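
    For reference, the classical (unmodified) Hall-Petch relation σ_y = σ₀ + k·d^(-1/2) can be fitted by linear least squares as sketched below. The data points are invented for illustration and are not the copper and nickel data used in the paper:

```python
import numpy as np

# Fitting the classical Hall-Petch relation sigma_y = sigma0 + k * d**-0.5
# by linear least squares; the data below are made up for illustration.
d = np.array([20e-9, 100e-9, 1e-6, 10e-6, 50e-6])   # grain size (m)
sigma = np.array([900., 500., 250., 120., 90.])     # yield stress (MPa)

A = np.vstack([np.ones_like(d), d ** -0.5]).T       # design matrix [1, d^-1/2]
(sigma0, k), *_ = np.linalg.lstsq(A, sigma, rcond=None)
print(f"sigma0 = {sigma0:.1f} MPa, k = {k:.3e} MPa*m^0.5")
```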

  7. Improved rigorous upper bounds for transport due to passive advection described by simple models of bounded systems

    International Nuclear Information System (INIS)

    Kim, Chang-Bae; Krommes, J.A.

    1988-08-01

    The work of Krommes and Smith on rigorous upper bounds for the turbulent transport of a passively advected scalar [Ann. Phys. 177:246 (1987)] is extended in two directions: (1) For their "reference model," improved upper bounds are obtained by utilizing more sophisticated two-time constraints which include the effects of cross-correlations up to fourth order. Numerical solutions of the model stochastic differential equation are also obtained; they show that the new bounds compare quite favorably with the exact results, even at large Reynolds and Kubo numbers. (2) The theory is extended to take account of a finite spatial autocorrelation length L_c. As a reasonably generic example, the problem of particle transport due to statistically specified stochastic magnetic fields in a collisionless turbulent plasma is revisited. A bound is obtained which reduces for small L_c to the quasilinear limit and for large L_c to the strong turbulence limit, and which provides a reasonable and rigorous interpolation for intermediate values of L_c. 18 refs., 6 figs

  8. The Landau-Lifshitz equation describes the Ising spin correlation function in the free-fermion model

    CERN Document Server

    Rutkevich, S B

    1998-01-01

    We consider time and space dependence of the Ising spin correlation function in a continuous one-dimensional free-fermion model. By the Ising spin we imply the 'sign' variable, which takes alternating ±1 values in adjacent domains bounded by domain walls (fermionic world paths). The two-point correlation function is expressed in terms of the solution of the Cauchy problem for a nonlinear partial differential equation, which is proved to be equivalent to the exactly solvable Landau-Lifshitz equation. A new zero-curvature representation for this equation is presented. In turn, the initial condition for the Cauchy problem is given by the solution of a nonlinear ordinary differential equation, which has also been derived. In the Ising limit the above-mentioned partial and ordinary differential equations reduce to the sine-Gordon and Painleve III equations, respectively. (author)

  9. Model calculations of doubly closed shell nuclei in the integral-differential equation approach describing the two body correlations

    International Nuclear Information System (INIS)

    Brizzi, R.; Fabre de la Ripelle, M.; Lassaut, M.

    1999-01-01

    The binding energies and root mean square radii obtained from the Integro-Differential Equation Approach (IDEA) and from the Weight Function Approximation (WFA) of the IDEA for an even number of bosons and for ¹²C, ¹⁶O and ⁴⁰Ca are compared to those recently obtained by the Variational Monte Carlo, Fermi Hypernetted Chain and Coupled Cluster expansion methods with model potentials. The IDEA provides numbers very similar to those obtained by other methods although it takes only two-body correlations into account. The analytical expression of the wave function for the WFA is given for bosons in the ground state when the interaction pair is outside the potential range. Due to its simple structure, the equations of the IDEA can easily be extended to realistic interactions for nuclei, as has already been done for the tri-nucleon system and ⁴He. (authors)

  10. Learned helplessness or expectancy-value? A psychological model for describing the experiences of different categories of unemployed people.

    Science.gov (United States)

    García Rodríguez, Y

    1997-06-01

    Various studies have explored the relationships between unemployment and expectation of success, commitment to work, motivation, causal attributions, self-esteem and depression. A model is proposed that assumes the relationships between these variables are moderated by (a) whether or not the unemployed individual is seeking a first job and (b) age. It is proposed that for the unemployed who are seeking their first job (seekers) the relationships among these variables will be consistent with expectancy-value theory, but for those who have had a previous job (losers), the relationships will be more consistent with learned helplessness theory. It is further assumed that within this latter group the young losers will experience "universal helplessness" whereas the adult losers will experience "personal helplessness".

  11. Dual-energy X-ray analysis using synchrotron computed tomography at 35 and 60 keV for the estimation of photon interaction coefficients describing attenuation and energy absorption.

    Science.gov (United States)

    Midgley, Stewart; Schleich, Nanette

    2015-05-01

    A novel method for dual-energy X-ray analysis (DEXA) is tested using measurements of the X-ray linear attenuation coefficient μ. The key is a mathematical model that describes elemental cross sections using a polynomial in atomic number. The model is combined with the mixture rule to describe μ for materials, using the same polynomial coefficients. Materials are characterized by their electron density N_e and statistical moments R_k describing their distribution of elements, analogous to the concept of effective atomic number. In an experiment with materials of known density and composition, measurements of μ are written as a system of linear simultaneous equations, which is solved for the polynomial coefficients. DEXA itself involves computed tomography (CT) scans at two energies to provide a system of non-linear simultaneous equations that are solved for N_e and the fourth statistical moment R_4. Results are presented for phantoms containing dilute salt solutions and for a biological specimen. The experiment identifies 1% systematic errors in the CT measurements, arising from third-harmonic radiation, and 20-30% noise, which is reduced to 3-5% by pre-processing with the median filter and careful choice of reconstruction parameters. DEXA accuracy is quantified for the phantom as the mean absolute differences for N_e and R_4: 0.8% and 1.0% for soft tissue and 1.2% and 0.8% for bone-like samples, respectively. The DEXA results for the biological specimen are combined with model coefficients obtained from the tabulations to predict μ and the mass energy absorption coefficient at energies of 10 keV to 20 MeV.
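
    The calibration step described above, in which measurements of μ for materials of known density and composition yield the polynomial coefficients, amounts to solving a linear system. A toy version (with invented compositions, μ values, and choice of polynomial terms) might look like this:

```python
import numpy as np

# Sketch of the calibration step: at one beam energy, model the elemental
# cross section as a polynomial in atomic number Z and solve a linear system
# for the coefficients from measured mu of known materials. Compositions,
# mu values, and the chosen powers of Z are invented for illustration.

# Each material: relative electron density Ne and element fractions {Z: weight}
materials = [
    {"Ne": 1.00, "frac": {1: 0.111, 8: 0.889}},          # water-like
    {"Ne": 1.05, "frac": {1: 0.10, 8: 0.75, 17: 0.15}},  # salt solution (assumed)
    {"Ne": 1.60, "frac": {8: 0.41, 15: 0.19, 20: 0.40}}, # bone-like (assumed)
]
mu_measured = np.array([0.330, 0.385, 0.720])            # cm^-1, invented

powers = [0, 2, 3]                                       # polynomial terms in Z
rows = []
for m in materials:
    # Mixture rule: material moments are fraction-weighted sums over elements
    rows.append([m["Ne"] * sum(w * Z ** p for Z, w in m["frac"].items())
                 for p in powers])
coeffs, *_ = np.linalg.lstsq(np.array(rows), mu_measured, rcond=None)
print(dict(zip(powers, coeffs)))
```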

  12. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  13. Computer model for harmonic ultrasound imaging.

    Science.gov (United States)

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  14. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    Full Text Available A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm, and model of the software running on computer hardware included in the Grid network, which will allow a cloud computing environment to be implemented using Grid technologies.

  15. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models in order to achieve accuracy at high current densities. (orig.) 5 refs.

  16. Computational fluid dynamics modelling in cardiovascular medicine.

    Science.gov (United States)

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, and is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. Published by the BMJ Publishing Group Limited.

  17. Aeroelastic modelling without the need for excessive computing power

    Energy Technology Data Exchange (ETDEWEB)

    Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Technology, Dept. of Electronic and Electrical Engineering, Loughborough (United Kingdom)

    1996-09-01

    The aeroelastic model presented here was developed specifically to represent a wind turbine manufactured by Northern Power Systems which features a passive pitch control mechanism. It was considered that this particular turbine, which also has low-solidity flexible blades and is free-yawing, would provide a stringent test of modelling approaches. It was believed that blade element aerodynamic modelling would not be adequate to properly describe the combination of yawed flow, dynamic inflow and unsteady aerodynamics; consequently a wake modelling approach was adopted. In order to keep computation time limited, a highly simplified, semi-free wake approach (developed in previous work) was used. A similarly simple structural model was adopted, with up to only six degrees of freedom in total. In order to take account of blade (flapwise) flexibility, a simple finite element sub-model is used. Good quality data from the turbine have recently been collected and it is hoped to undertake model validation in the near future. (au)

  18. Computer Models in Biomechanics From Nano to Macro

    CERN Document Server

    Kuhl, Ellen

    2013-01-01

    This book contains a collection of papers that were presented at the IUTAM Symposium on “Computer Models in Biomechanics: From Nano to Macro” held at Stanford University, California, USA, from August 29 to September 2, 2011. It contains state-of-the-art papers on: - Protein and Cell Mechanics: coarse-grained model for unfolded proteins, collagen-proteoglycan structural interactions in the cornea, simulations of cell behavior on substrates - Muscle Mechanics: modeling approaches for Ca2+–regulated smooth muscle contraction, smooth muscle modeling using continuum thermodynamical frameworks, cross-bridge model describing the mechanoenergetics of actomyosin interaction, multiscale skeletal muscle modeling - Cardiovascular Mechanics: multiscale modeling of arterial adaptations by incorporating molecular mechanisms, cardiovascular tissue damage, dissection properties of aortic aneurysms, intracranial aneurysms, electromechanics of the heart, hemodynamic alterations associated with arterial remodeling followin...

  19. Development of a model to describe organic films on aerosol particles and cloud droplets. Final report; Entwicklung eines Modells zur Beschreibung organischer Filme auf Aerosolteilchen und Wolkentropfen. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Forkel, R. (ed.); Seidl, W.

    2000-12-01

    Organic substances with polar groups are enriched on water surfaces and can form monomolecular surface films which can reduce the surface tension. A new model to describe surface films is presented, which describes in detail the film-forming properties of fatty acids with up to 22 carbon atoms. The model is applied to measured concentrations of fatty acids (from the literature) in rain water and on aerosol particles and cloud droplets. An investigation of the sources of fatty acids has shown that abrasion of the wax layer on leaves and needles is the main source of surface film material in the western USA. Anthropogenic sources in urban areas are meat preparation and cigarette smoke. The agreement between model results and measurements when the model was applied to rain water confirms the original assumption that fatty acids are a main component of surface films in rain water. For humid aerosol particles, the application of the model to measured concentrations of fatty acids showed only strongly diluted films. Concentrated films, with the surface tension reduced by 20 to 30%, were found only for remote forest areas in the western USA. On cloud droplets the surface film is even more dilute than on aerosol particles. In all investigated cases the film was too dilute to have an effect on the activation process of cloud droplets. (orig.)

  20. Phenomenological optical potentials and optical model computer codes

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    An introduction to the Optical Model is presented. Starting with the purpose and nature of the physical problems to be analyzed, a general formulation and the various phenomenological methods of solution are discussed. This includes the calculation of observables based on assumed potentials, both local and non-local, and their forms, e.g. Woods-Saxon, folded model, etc. Also discussed are the various calculational methods and model codes employed to describe nuclear reactions in the spherical and deformed regions (e.g. coupled-channel analysis). An examination of the numerical solutions and minimization techniques associated with the various codes is briefly touched upon. Several computer programs are described for carrying out the calculations. The preparation of input (formats and options), determination of model parameters, and analysis of output are described. The class is given a series of problems to carry out using the available computer. Interpretation and evaluation of the sample problems include the effect of varying parameters and comparison of calculations with the experimental data. Also included is an intercomparison of the results from the various model codes, along with their advantages and limitations. (author)
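
    As a concrete example of one of the potential forms mentioned above, the Woods-Saxon well V(r) = -V₀ / (1 + exp((r - R)/a)) with R = r₀·A^(1/3) is easily tabulated; the parameter values below are typical textbook numbers, not tied to any particular analysis:

```python
import numpy as np

# Woods-Saxon potential: V(r) = -V0 / (1 + exp((r - R)/a)), R = r0 * A**(1/3).
def woods_saxon(r, A, V0=50.0, r0=1.25, a=0.65):
    R = r0 * A ** (1.0 / 3.0)                   # nuclear radius (fm)
    return -V0 / (1.0 + np.exp((r - R) / a))    # well depth in MeV

r = np.linspace(0.0, 12.0, 7)                    # radii in fm
print(np.round(woods_saxon(r, A=40), 2))         # e.g. a 40Ca target
```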

  1. Chaotic-Dynamical Conceptual Model to Describe Fluid Flow and Contaminant Transport in a Fractured Vadose Zone

    International Nuclear Information System (INIS)

    Faybishenko, Boris; Doughty, Christine; Geller, Jil T.

    1999-01-01

    DOE faces the remediation of numerous contaminated sites, such as those at Hanford, INEEL, LLNL, and LBNL, where organic and/or radioactive wastes were intentionally or accidentally released to the vadose zone from surface spills, underground tanks, cribs, shallow ponds, and deep wells. Migration of these contaminants through the vadose zone has led to the contamination of (or threatens to contaminate) underlying groundwater. A key issue in choosing a corrective action plan to clean up contaminated sites is the determination of the location, total mass, mobility and travel time to receptors for contaminants moving in the vadose zone. These problems are difficult to solve in a technically defensible and accurate manner because contaminants travel downward intermittently, through narrow pathways, driven by variations in environmental conditions. These preferential flow pathways can be difficult to find and predict. The primary objective of this project is to determine if and when dynamical chaos theory can be used to investigate infiltration of fluid and contaminant transport in heterogeneous soils and fractured rocks. The objective of this project is being achieved through the following activities: development of multiscale conceptual models and mathematical and numerical algorithms for flow and transport, which incorporate both (a) the spatial variability of heterogeneous porous and fractured media and (b) the temporal dynamics of flow and transport; development of appropriate experimental field and laboratory techniques needed to detect diagnostic parameters for chaotic behavior of flow; evaluation of chaotic behavior of flow in laboratory and field experiments using methods from non-linear dynamics; and evaluation of the impact these dynamics may have on contaminant transport through heterogeneous fractured rocks and soils and on remediation efforts. This approach is based on the consideration of multiscale spatial heterogeneity and flow phenomena that are affected by

  2. A plant wide aqueous phase chemistry model describing pH variations and ion speciation/pairing in wastewater treatment process models

    DEFF Research Database (Denmark)

    Flores-Alsina, X.; Mbamba, C. Kazadi; Solon, K.

    ... require a major, but unavoidable, additional degree of complexity when representing cationic/anionic behaviour in Activated Sludge (AS)/Anaerobic Digestion (AD) systems (Ikumi et al., 2014). In this paper, a plant-wide aqueous phase chemistry module describing pH variations plus ion speciation/pairing ... of Ordinary Differential Equations (ODEs) in order to reduce the overall stiffness of the system, thereby enhancing simulation speed. Additionally, a multi-dimensional version of the Newton-Raphson algorithm is applied to handle the existing multiple algebraic inter-dependencies (Solon et al., 2015) ... cationic/anionic loads. In this way, the general applicability/flexibility of the proposed approach is demonstrated by implementing the aqueous phase chemistry module in some of the most frequently used WWTP process simulation models. Finally, it is shown how traditional wastewater modelling studies can ...

  3. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what cloud computing consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and also a SWOT analysis of the cloud. To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  4. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; this compares with a factor of 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.

  5. ADGEN: ADjoint GENerator for computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; this compares with a factor of 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs
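
    The economics that both records describe, one adjoint (reverse) pass yielding derivatives with respect to all parameters versus one perturbed model run per parameter, can be illustrated with a toy differentiable model (this stand-in is not PRESTO-II or ADGEN):

```python
import numpy as np

# Toy illustration of the adjoint method's payoff: one reverse pass yields
# dy/dp for ALL n parameters, whereas perturbation needs n+1 model runs.
rng = np.random.default_rng(0)
n = 3000
A = rng.standard_normal((50, n)) / np.sqrt(n)
w = rng.standard_normal(50)

def model(p):
    return w @ np.tanh(A @ p)       # scalar response of interest

p0 = rng.standard_normal(n)

# Adjoint: propagate the output sensitivity backwards through the chain
x1 = A @ p0
grad_adjoint = A.T @ ((1.0 - np.tanh(x1) ** 2) * w)

# Spot-check a few components against forward parameter perturbations
eps = 1e-6
for i in [0, 1234, 2999]:
    dp = np.zeros(n)
    dp[i] = eps
    fd = (model(p0 + dp) - model(p0)) / eps
    print(i, fd, grad_adjoint[i])
```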

  6. Computational Design Modelling: Proceedings of the Design Modelling Symposium

    CERN Document Server

    Kilian, Axel; Palz, Norbert; Scheurer, Fabian

    2012-01-01

    This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts examine the computational processes within this field in order to develop a broader and less exotic building practice that bears more subtle but powerful traces of the complex tool set and approaches we have developed and studied over recent years. The outcome is new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by both responsibility towards processes and the consequences they initiate.

  7. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    Science.gov (United States)

    Journal Article Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predicted for the rabbit.

  8. Computationally efficient statistical differential equation modeling using homogenization

    Science.gov (United States)

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
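
    A minimal sketch of the idea, a fine-grid "ecological diffusion" PDE together with its harmonic-mean homogenized coefficient for the coarse-grid change of support, is given below; the grid, time step, and motility field are illustrative assumptions:

```python
import numpy as np

# Sketch of "ecological diffusion" u_t = (mu(x) * u)_xx on a fine grid, plus
# the homogenized surrogate coefficient (harmonic mean of mu), which is the
# kind of computationally cheaper change of support described above.
nx, dx, dt, nt = 200, 1.0, 0.2, 500
x = np.arange(nx) * dx
mu = np.where((x // 20) % 2 == 0, 0.5, 2.0)   # patchy habitat -> varying motility

u = np.exp(-0.5 * ((x - 100.0) / 5.0) ** 2)   # initial population bump
for _ in range(nt):                            # explicit finite differences,
    v = mu * u                                 # periodic boundaries via roll
    u = u + dt * (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

mu_hom = 1.0 / np.mean(1.0 / mu)               # harmonic-mean homogenization
print(f"homogenized diffusion coefficient: {mu_hom:.3f}")
print(f"total population (conserved): {u.sum() * dx:.3f}")
```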

  9. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-11-01

    Full Text Available With high speed and accuracy, parallel manipulators have wide application in industry, but many difficulties still exist in the actual control process because of time-varying parameters and coupling. Unfortunately, present-day commercial controllers cannot provide satisfying performance because of their single-axis linear control. Therefore, aimed at a novel 2-DOF (Degree of Freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced according to equivalent torques between the mechanical structure and the PM (Permanent Magnet) servomotor. Second, the computed torque controller is described in detail for the above proposed model. At last, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify the favourable tracking ability and robustness.

  10. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-03-01

    Full Text Available With high speed and accuracy, parallel manipulators have wide application in industry, but many difficulties still exist in the actual control process because of time-varying parameters and coupling. Unfortunately, present-day commercial controllers cannot provide satisfying performance because of their single-axis linear control. Therefore, aimed at a novel 2-DOF (Degree of Freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced according to equivalent torques between the mechanical structure and the PM (Permanent Magnet) servomotor. Second, the computed torque controller is described in detail for the above proposed model. At last, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify the favourable tracking ability and robustness.
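
    The computed torque law itself, τ = M(q)(q̈_d + K_d·ė + K_p·e) + G(q), can be demonstrated on a 1-DOF link (the papers treat a 2-DOF parallel manipulator with motor-mechanism coupling; this sketch covers only the control law, with assumed parameters):

```python
import numpy as np

# Minimal computed-torque control sketch for a 1-DOF link with gravity.
m, l, g = 2.0, 0.5, 9.81      # link mass (kg), length (m), gravity
Kp, Kd = 100.0, 20.0          # PD gains on the tracking error
dt, T = 1e-3, 3.0             # integration step and horizon (s)

def M(q):  return m * l * l                 # inertia (point mass on a link)
def G(q):  return m * g * l * np.cos(q)     # gravity torque

q, qd = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    q_des = 0.5 * np.sin(t)                 # desired trajectory
    qd_des, qdd_des = 0.5 * np.cos(t), -0.5 * np.sin(t)
    e, ed = q_des - q, qd_des - qd
    tau = M(q) * (qdd_des + Kd * ed + Kp * e) + G(q)  # computed torque law
    qdd = (tau - G(q)) / M(q)               # plant dynamics (matched model)
    qd += qdd * dt
    q += qd * dt

print(f"tracking error at t={T}s: {0.5 * np.sin(T) - q:.2e} rad")
```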

  11. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    Full Text Available The development of selective agonists of the δ-opioid receptor, as well as models of the interaction of ligands with this receptor, is the subject of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, within recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations received for the tested models of DOR were found between the efficacy (e_rel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER, good values from PROCHECK (92.6% in most favored regions) and MolProbity (99.5% in favored regions). Its scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with e_rel of a series of enkephalin analogues calculated from in vitro experiments. This investigation thus allows us to suggest a reliable model of DOR. The newly generated model of the DOR receptor could be used further for in silico experiments and will make possible faster and more correct design of selective and effective ligands for the δ-opioid receptor.

  12. Computational Models for Calcium-Mediated Astrocyte Functions

    Directory of Open Access Journals (Sweden)

    Tiina Manninen

    2018-04-01

    Full Text Available The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online, which makes it difficult to reproduce the simulation results and further develop the models.

  13. Computational Models for Calcium-Mediated Astrocyte Functions.

    Science.gov (United States)

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online, which makes it difficult to reproduce the simulation results and further develop the models.

  14. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  15. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  16. Electricity load modelling using computational intelligence

    NARCIS (Netherlands)

    Ter Borg, R.W.

    2005-01-01

    As a consequence of the liberalisation of the electricity markets in Europe, market players have to continuously adapt their future supply to match their customers' demands. This poses the challenge of obtaining a predictive model that accurately describes electricity loads, the challenge addressed in this thesis.

  17. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] ... discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.

  18. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  19. Computer modeling for optimal placement of gloveboxes

    International Nuclear Information System (INIS)

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units
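
    The first-stage evolutionary layout heuristic that both records describe can be caricatured as a permutation search minimizing flow-weighted travel distance between gloveboxes. The sketch below uses invented flows and coordinates and a plain mutation-selection loop; it is a toy, not the Los Alamos formulation:

```python
import numpy as np

# Toy evolutionary heuristic for glovebox placement: assign n gloveboxes to
# n fixed locations to minimize sum over pairs of (flow * distance).
rng = np.random.default_rng(0)
n = 8
flow = rng.integers(0, 10, (n, n))
flow = (flow + flow.T) * (1 - np.eye(n, dtype=int))  # symmetric, zero diagonal
xy = rng.uniform(0, 20, (n, 2))                      # candidate locations (m)
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)

def cost(perm):
    # perm[k] is the location index assigned to glovebox k
    d = dist[np.ix_(perm, perm)]
    return (flow * d).sum() / 2

pop = [rng.permutation(n) for _ in range(40)]
for gen in range(200):
    pop.sort(key=cost)
    pop = pop[:20]                                   # truncation selection
    children = []
    for p in pop:
        c = p.copy()
        i, j = rng.choice(n, 2, replace=False)       # swap mutation
        c[i], c[j] = c[j], c[i]
        children.append(c)
    pop += children

best = min(pop, key=cost)
print("best layout:", best, "cost:", round(float(cost(best)), 1))
```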

  20. A Framework for Understanding Physics Students' Computational Modeling Practices

    Science.gov (United States)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by

  1. Computer models in the design of FXR

    International Nuclear Information System (INIS)

    Vogtlin, G.; Kuenning, R.

    1980-01-01

    Lawrence Livermore National Laboratory is developing a 15 to 20 MeV electron accelerator with a beam current goal of 4 kA. This accelerator will be used for flash radiography and has a requirement of high reliability. Components being developed include spark gaps, Marx generators, water Blumleins and oil insulation systems. A SCEPTRE model was developed that takes into consideration the non-linearity of the ferrite and the time dependency of the emission from a field emitter cathode. This model was used to predict an optimum charge time to obtain maximum magnetic flux change from the ferrite. This model and its application will be discussed. JASON was used extensively to determine optimum locations and shapes of supports and insulators. It was also used to determine stress within bubbles adjacent to walls in oil. Computer results will be shown and bubble breakdown will be related to bubble size

  2. Computational fluid dynamic modelling of cavitation

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids into the analysis. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  3. A computer model for hydride blister growth in zirconium alloys

    International Nuclear Information System (INIS)

    White, A.J.; Sawatzky, A.; Woo, C.H.

    1985-06-01

    The failure of a Zircaloy-2 pressure tube in the Pickering unit 2 reactor started at a series of zirconium hydride blisters on the outside of the pressure tube. These blisters resulted from the thermal diffusion of hydrogen to the cooler regions of the pressure tube. In this report the physics of thermal diffusion of hydrogen in zirconium is reviewed and a computer model for blister growth in two-dimensional Cartesian geometry is described. The model is used to show that the blister-growth rate in a two-phase zirconium/zirconium-hydride region depends neither on the initial hydrogen concentration nor on the hydrogen pick-up rate, and that for a fixed far-field temperature there is an optimum pressure-tube/calandria-tube contact temperature for growing blisters. The model described here can also be used to study large-scale effects, such as hydrogen-depletion zones around hydride blisters
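
    The driving mechanism, thermal (Soret) diffusion of hydrogen toward the cold spot, can be sketched in one dimension with the flux J = -D(∂c/∂x + c·Q*/(R·T²)·∂T/∂x). Terminal solid solubility and hydride precipitation, which the blister model requires, are omitted here; all values are assumed:

```python
import numpy as np

# 1-D sketch of hydrogen thermo-diffusion toward a cold spot (Soret effect).
nx, dx, dt, nt = 100, 1e-4, 0.5, 20000            # grid (m), time step (s)
D, Qstar, Rgas = 1e-11, 25_000.0, 8.314           # m^2/s, J/mol, J/(mol K)

x = np.arange(nx) * dx
T = 580.0 - 60.0 * np.exp(-((x - x[-1]) / 2e-3) ** 2)  # cold spot at far end
c = np.full(nx, 60.0)                             # H concentration (wppm)

dTdx = np.gradient(T, dx)
for _ in range(nt):
    dcdx = np.gradient(c, dx)
    # Flux: ordinary (Fick) term plus thermal-gradient (Soret) drift term
    J = -D * (dcdx + c * Qstar / (Rgas * T ** 2) * dTdx)
    c -= dt * np.gradient(J, dx)                  # dc/dt = -dJ/dx

print(f"c at cold spot: {c[-1]:.1f} wppm vs far field {c[0]:.1f} wppm")
```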

  4. Computer-controlled mechanical lung model for application in pulmonary function studies

    NARCIS (Netherlands)

    A.F.M. Verbraak (Anton); J.E.W. Beneken; J.M. Bogaard (Jan); A. Versprille (Adrian)

    1995-01-01

    A computer-controlled mechanical lung model has been developed for testing lung function equipment, validation of computer programs and simulation of impaired pulmonary mechanics. The construction, function and some applications are described. The physical model is constructed from two

  5. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time significant advancements have taken place in the requirements for safety assessment as well as in computer hardware. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/ whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the…

  6. Computational model of a whole tree combustor

    Energy Technology Data Exchange (ETDEWEB)

    Bryden, K.M.; Ragland, K.W. [Univ. of Wisconsin, Madison, WI (United States)]

    1993-12-31

    A preliminary computational model has been developed for the whole tree combustor and compared to test results. In the simulation model presented, hardwood logs, 15 cm in diameter, are burned in a 4 m deep fuel bed. Solid and gas temperature, solid and gas velocity, CO, CO₂, H₂O, HC and O₂ profiles are calculated. This deep, fixed bed combustor obtains high energy release rates per unit area due to the high inlet air velocity and extended reaction zone. The lowest portion of the overall bed is an oxidizing region and the remainder of the bed acts as a gasification and drying region. The overfire air region completes the combustion. Approximately 40% of the energy is released in the lower oxidizing region. The wood consumption rate obtained from the computational model is 4,110 kg/m²·hr, which matches well the consumption rate of 3,770 kg/m²·hr observed during the peak test period of the Aurora, MN test. The predicted heat release rate is 16 MW/m² (5.0×10⁶ Btu/hr·ft²).

  7. Incorporation of FcRn-mediated disposition model to describe the population pharmacokinetics of therapeutic monoclonal IgG antibody in clinical patients.

    Science.gov (United States)

    Ng, Chee M

    2016-03-01

    The two-compartment linear model used to describe the population pharmacokinetics (PK) of many therapeutic monoclonal antibodies (TMAbs) offers little biological insight into antibody disposition in humans. The purpose of this study was to develop a semi-mechanistic FcRn-mediated IgG disposition model to describe the population PK of TMAbs in clinical patients. A standard two-compartment linear PK model from a previously published population PK model of pertuzumab was used to simulate intensive PK data of 100 subjects for model development. Two different semi-mechanistic FcRn-mediated IgG disposition models were developed, and First Order Conditional Estimation (FOCE) with the interaction method in NONMEM was used to obtain the final model estimates. The performance of these models was then compared with that of the two-compartment linear PK model used to simulate the data. A semi-mechanistic FcRn-mediated IgG disposition model consisting of a peripheral tissue compartment and FcRn-containing endosomes in the central compartment best describes the simulated pertuzumab population PK data. The developed semi-mechanistic population PK model has the same number of parameters and produces very similar concentration-time profiles, but provides additional biological insight into FcRn-mediated IgG disposition in human subjects compared with the standard two-compartment linear PK model. This first reported semi-mechanistic model may serve as an important framework for developing future population PK models of TMAbs in clinical patients. Copyright © 2015 John Wiley & Sons, Ltd.
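
    As a point of reference for the record above, the conventional two-compartment linear PK model used here to simulate the training data can be written down in a few lines. This is a generic sketch with invented parameter values, not Ng's NONMEM model or the pertuzumab estimates.

    ```python
    # Two-compartment linear PK model (IV bolus): a generic sketch, not the
    # paper's model. CL = clearance, Q = inter-compartmental clearance,
    # V1/V2 = central/peripheral volumes; all values are placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    CL, Q, V1, V2 = 0.2, 0.5, 3.0, 2.5      # L/day, L/day, L, L

    def two_cmt(t, A):
        A1, A2 = A                          # drug amounts (mg) in each compartment
        dA1 = -(CL / V1) * A1 - (Q / V1) * A1 + (Q / V2) * A2
        dA2 = (Q / V1) * A1 - (Q / V2) * A2
        return [dA1, dA2]

    dose = 420.0                            # mg, hypothetical IV bolus
    sol = solve_ivp(two_cmt, (0.0, 84.0), [dose, 0.0], dense_output=True)
    t = np.linspace(0.0, 84.0, 200)
    conc = sol.sol(t)[0] / V1               # central concentration, mg/L
    print(conc[:5])
    ```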

  8. Protein adsorption on nanoparticles: model development using computer simulation

    International Nuclear Information System (INIS)

    Shao, Qing; Hall, Carol K

    2016-01-01

    The adsorption of proteins on nanoparticles results in the formation of the protein corona, the composition of which determines how nanoparticles influence their biological surroundings. We seek to better understand corona formation by developing models that describe protein adsorption on nanoparticles using computer simulation results as data. Using a coarse-grained protein model, discontinuous molecular dynamics simulations are conducted to investigate the adsorption of two small proteins (Trp-cage and WW domain) on a model nanoparticle of diameter 10.0 nm at protein concentrations ranging from 0.5 to 5 mM. The resulting adsorption isotherms are well described by the Langmuir, Freundlich, Temkin and Kiselev models, but not by the Elovich, Fowler–Guggenheim and Hill–de Boer models. We also try to develop a generalized model that can describe protein adsorption equilibrium on nanoparticles of different diameters in terms of dimensionless size parameters. The simulation results for three proteins (Trp-cage, WW domain, and GB3) on four nanoparticles (diameter = 5.0, 10.0, 15.0, and 20.0 nm) illustrate both the promise and the challenge associated with developing generalized models of protein adsorption on nanoparticles.
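
    The isotherm comparison in this record reduces, for each candidate model, to a small nonlinear fit. A hedged sketch for the Langmuir case, with made-up data points standing in for the simulation output:

    ```python
    # Fitting a Langmuir isotherm q = q_max*K*c/(1 + K*c); the data points are
    # invented for illustration, not taken from the paper's simulations.
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c, q_max, K):
        return q_max * K * c / (1.0 + K * c)

    c = np.array([0.5, 1.0, 2.0, 3.0, 5.0])       # bulk concentration, mM
    q = np.array([8.1, 13.1, 18.8, 21.5, 24.3])   # adsorbed amount (synthetic)

    (q_max, K), cov = curve_fit(langmuir, c, q, p0=(30.0, 0.5))
    print(f"q_max = {q_max:.1f}, K = {K:.2f} 1/mM")
    ```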

  9. Dynamical Models for Computer Viruses Propagation

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Nowadays, digital computer systems and networks are the main engineering tools, being used in planning, design, operation, and control of all sizes of building, transportation, machinery, business, and life-maintaining devices. Consequently, computer viruses became one of the most important sources of uncertainty, contributing to decreased reliability of vital activities. A lot of antivirus programs have been developed, but they are limited to detecting and removing infections, based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, a trial on modeling computer virus propagation dynamics relates it to other notable events occurring in the network, permitting preventive policies to be established for network management. Data from three different viruses were collected on the Internet and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network.
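
    The autoregressive identification mentioned at the end of the abstract amounts to a least-squares fit of lagged values followed by iterated prediction. A sketch on synthetic data (the paper itself fits measurements collected from the Internet):

    ```python
    # AR(p) identification by least squares and multi-step forecasting; the
    # logistic series below is a synthetic stand-in for collected virus counts.
    import numpy as np

    def fit_ar(x, p):
        """Fit x[t] ~ a1*x[t-1] + ... + ap*x[t-p] by least squares."""
        X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
        a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        return a

    def forecast(x, a, steps):
        h = list(x)
        for _ in range(steps):
            h.append(sum(a[k] * h[-1 - k] for k in range(len(a))))
        return np.array(h[len(x):])

    t = np.arange(60)
    x = 1000.0 / (1.0 + np.exp(-(t - 30) / 5.0))   # infected hosts over time
    print(forecast(x, fit_ar(x, p=3), steps=5))    # short-term propagation forecast
    ```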

  10. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.

  11. Comparison of different two-pathway models for describing the combined effect of DO and nitrite on the nitrous oxide production by ammonia-oxidizing bacteria.

    Science.gov (United States)

    Lang, Longqi; Pocquet, Mathieu; Ni, Bing-Jie; Yuan, Zhiguo; Spérandio, Mathieu

    2017-02-01

    The aim of this work is to compare the capability of two recently proposed two-pathway models for predicting nitrous oxide (N₂O) production by ammonia-oxidizing bacteria (AOB) over varying ranges of dissolved oxygen (DO) and nitrite. The first model includes the electron carriers whereas the second model is based on direct coupling of electron donors and acceptors. Simulations are confronted with extensive sets of experiments (43 batches) from different studies with three different microbial systems. Despite their different mathematical structures, both models could describe, equally well, the combined effect of DO and nitrite on N₂O production rate and emission factor. The model-predicted contributions of the nitrifier denitrification pathway and the hydroxylamine pathway also matched well with the available isotopic measurements. Based on sensitivity analysis, calibration procedures are described and discussed to facilitate the future use of these models.
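
    The structure being compared can be caricatured as the sum of two pathway rates with Monod-type dependencies: a nitrifier-denitrification term promoted by nitrite and inhibited by DO, and a hydroxylamine (NH₂OH) term that follows the oxygen-dependent ammonia oxidation rate. The sketch below shows that shape only; it is not either paper's rate law, and all constants are invented.

    ```python
    # Toy two-pathway N2O production rate as a function of DO and nitrite;
    # parameter values are placeholders, not calibrated constants.
    def n2o_rate(DO, NO2, r_aob=1.0, K_O2=0.5, K_NO2=0.3, K_I_O2=0.8, eps=0.02):
        ammonia_ox = r_aob * DO / (K_O2 + DO)                  # AOB activity
        nd = ammonia_ox * (NO2 / (K_NO2 + NO2)) * (K_I_O2 / (K_I_O2 + DO))
        nh2oh = eps * ammonia_ox                               # NH2OH pathway
        return nd + nh2oh                                      # nd peaks at low DO

    for DO in (0.2, 0.5, 1.5, 3.0):                            # mg O2 / L
        print(DO, [round(n2o_rate(DO, no2), 3) for no2 in (0.5, 2.0, 10.0)])
    ```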

  12. Getting computer models to communicate; Faire communiquer les modeles numeriques

    Energy Technology Data Exchange (ETDEWEB)

    Caremoli, Ch. [Electricite de France (EDF), 75 - Paris (France). Dept. Mecanique et Modeles Numeriques; Erhard, P. [Electricite de France (EDF), 75 - Paris (France). Dept. Physique des Reacteurs

    1999-07-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)

  13. Computational Models for Analysis of Illicit Activities

    DEFF Research Database (Denmark)

    Nizamani, Sarwat

    …been explored in this thesis by considering them as epidemic-like processes. A mathematical model has been developed based on differential equations, which studies the dynamics of the issues from the very beginning until the issues cease. This study extends classical models of the spread of epidemics… to describe the phenomenon of contagious public outrage, which eventually leads to the spread of violence following a disclosure of some unpopular political decisions and/or activity. The results shed a new light on terror activity and provide some hint on how to curb the spreading of violence within…

  14. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the addition of new computers to the network and the removal of old computers from it are considered, and the computers on the network are equipped with antivirus software. A computer virus model is established. Through analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
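
    The kind of analysis the abstract describes can be reproduced on a toy version of such a model. The sketch below is a generic SIS-type virus model with recruitment and removal of computers and cure by antivirus software; it is not the paper's exact system, and the rates are invented.

    ```python
    # Toy computer-virus model: S' = b - beta*S*I - mu*S + gamma*I,
    # I' = beta*S*I - (mu + gamma)*I. b = new computers joining, mu = removal,
    # beta = infection rate, gamma = cure rate. All values are placeholders.
    from scipy.integrate import solve_ivp

    b, mu, beta, gamma = 5.0, 0.01, 0.0005, 0.1

    def virus(t, y):
        S, I = y
        return [b - beta * S * I - mu * S + gamma * I,
                beta * S * I - (mu + gamma) * I]

    S0 = b / mu                            # disease-free equilibrium
    R0 = beta * S0 / (mu + gamma)          # basic reproduction number
    print(f"R0 = {R0:.2f}")                # endemic equilibrium exists iff R0 > 1

    sol = solve_ivp(virus, (0.0, 400.0), [S0, 1.0], max_step=1.0)
    print("infected at t=400:", round(sol.y[1, -1], 1))
    ```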

  15. Modeling Reality: How Computers Mirror Life

    International Nuclear Information System (INIS)

    Inoue, J-I

    2005-01-01

    Modeling Reality: How Computers Mirror Life covers a wide range of modern subjects in complex systems, suitable not only for undergraduate students who want to learn about modelling 'reality' by using computer simulations, but also for researchers who want to learn something about subjects outside of their majors and need a simple guide. Readers are not required to have specialized training before they start the book. Each chapter is organized so as to train the reader to grasp the essential idea of simulating phenomena and to guide him/her towards more advanced areas. The topics presented in this textbook fall into two categories. The first is at graduate level, namely probability, statistics, information theory, graph theory, and the Turing machine, which are standard topics in the courses of information science and information engineering departments. The second addresses more advanced topics, namely cellular automata, deterministic chaos, fractals, game theory, neural networks, and genetic algorithms. Several topics included here (neural networks, game theory, information processing, etc.) are now among the main subjects of statistical mechanics, and many papers related to these interdisciplinary fields are published in Journal of Physics A: Mathematical and General, so readers of this journal will be familiar with the subject areas of this book. However, each area is restricted to an elementary level and if readers wish to know more about the topics they are interested in, they will need more advanced books. For example, on neural networks, the text deals with the back-propagation algorithm for perceptron learning. Nowadays, however, this is a rather old topic, so the reader might well choose, for example, Introduction to the Theory of Neural Computation by J Hertz et al (Perseus Books, 1991) or Statistical Physics of Spin Glasses and Information Processing by H Nishimori (Oxford University Press, 2001) for further reading. Nevertheless, this book is worthwhile…

  16. A COMPUTATIONAL MODEL OF MOTOR NEURON DEGENERATION

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.

    2014-01-01

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365

  17. A computational model of motor neuron degeneration.

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L F

    2014-08-20

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Computational models of intergroup competition and warfare.

    Energy Technology Data Exchange (ETDEWEB)

    Letendre, Kenneth (University of New Mexico); Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease is an important predictor of the frequency of civil conflict, and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  19. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first…
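
    The encoding can be pictured with a small data-structure sketch. The names and details below are invented for illustration; they are not the method's actual representation.

    ```python
    # Hypothetical encoding of construction elements with typed connection
    # elements, plus a proximity-and-type connectivity check.
    from dataclasses import dataclass

    @dataclass
    class ConnectionElement:
        position: tuple      # (x, y, z) in model coordinates
        ctype: str           # predetermined connection type, e.g. "stud"/"tube"

    @dataclass
    class ConstructionElement:
        name: str
        connections: list    # ConnectionElement instances

    COMPATIBLE = {("stud", "tube")}          # allowed connection-type pairs

    def can_connect(a, b, tol=0.1):
        close = all(abs(p - q) <= tol for p, q in zip(a.position, b.position))
        pair = (a.ctype, b.ctype)
        return close and (pair in COMPATIBLE or pair[::-1] in COMPATIBLE)

    brick = ConstructionElement("brick_2x2", [ConnectionElement((0, 0, 1), "stud")])
    plate = ConstructionElement("plate_2x2", [ConnectionElement((0, 0, 1), "tube")])
    print(can_connect(brick.connections[0], plate.connections[0]))   # True
    ```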

  20. Direct modeling for computational fluid dynamics

    Science.gov (United States)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  1. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  2. Computer Modeling of Radiation Portal Monitors for Homeland Security Applications

    International Nuclear Information System (INIS)

    Pagh, Richard T.; Kouzes, Richard T.; McConn, Ronald J.; Robinson, Sean M.; Schweppe, John E.; Siciliano, Edward R.

    2005-01-01

    Radiation Portal Monitors (RPMs) are currently being used at our nation's borders to detect potential nuclear threats. At the Pacific Northwest National Laboratory (PNNL), realistic computer models of RPMs are being developed to simulate the screening of vehicles and cargo. Detailed models of the detection equipment, vehicles, cargo containers, cargos, and radioactive sources are being used to determine the optimal configuration of detectors. These models can also be used to support work to optimize alarming algorithms so that they maximize sensitivity for items of interest while minimizing nuisance alarms triggered by legitimate radioactive material in the commerce stream. Proposed next-generation equipment is also being modeled to quantify performance and capability improvements to detect potential nuclear threats. A discussion of the methodology used to perform computer modeling for RPMs will be provided. In addition, the efforts to validate models used to perform these scenario analyses will be described. Finally, areas where improved modeling capability is needed will be discussed as a guide to future development efforts.

  3. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War. This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer-based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The…

  4. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  5. Computer models of vocal tract evolution: an overview and critique

    NARCIS (Netherlands)

    de Boer, B.; Fitch, W. T.

    2010-01-01

    Human speech has been investigated with computer models since the invention of digital computers, and models of the evolution of speech first appeared in the late 1960s and early 1970s. Speech science and computer models have a long shared history because speech is a physical signal and can be

  6. Development and Application of a Category System to Describe Pre-Service Science Teachers' Activities in the Process of Scientific Modelling

    Science.gov (United States)

    Krell, Moritz; Walzer, Christine; Hergert, Susann; Krüger, Dirk

    2017-09-01

    As part of their professional competencies, science teachers need an elaborate meta-modelling knowledge as well as modelling skills in order to guide and monitor modelling practices of their students. However, qualitative studies about (pre-service) science teachers' modelling practices are rare. This study provides a category system which is suitable to analyse and to describe pre-service science teachers' modelling activities and to infer modelling strategies. The category system was developed based on theoretical considerations and was inductively refined within the methodological frame of qualitative content analysis. For the inductive refinement, modelling practices of pre-service teachers (n = 4) have been video-taped and analysed. In this study, one case was selected to demonstrate the application of the category system to infer modelling strategies. The contribution of this study for science education research and science teacher education is discussed.

  7. Sensitivity analysis of a radionuclide transfer model describing contaminated vegetation (weeds) in Fukushima Prefecture, using the Morris method and Sobol' indices

    Energy Technology Data Exchange (ETDEWEB)

    Nicoulaud-Gouin, V.; Metivier, J.M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France)]; Garcia-Sanchez, L. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/L2BT (France)]

    2014-07-01

    The increasing spatial and temporal complexity of models demands methods capable of ranking the influence of their large numbers of parameters. This question specifically arises in assessment studies on the consequences of the Fukushima accident. Sensitivity analysis aims at measuring the influence of input variability on the output response. Generally, two main approaches are distinguished (Saltelli, 2001; Iooss, 2011): screening approaches, less expensive in computation time and allowing non-influential parameters to be identified; and measures of importance, introducing finer quantitative indices. In the latter category, there are regression-based methods, assuming a linear or monotonic response (Pearson coefficient, Spearman coefficient), and variance-based methods, making no assumptions on the model but requiring an increasingly prohibitive number of evaluations as the number of parameters increases. These approaches are available in various statistical programs (notably R) but are still poorly integrated in modelling platforms for radioecological risk assessment. This work aimed at illustrating the benefits of sensitivity analysis in the course of radioecological risk assessments. This study used two complementary state-of-the-art global sensitivity analysis methods: the screening method of Morris (Morris, 1991; Campolongo et al., 2007) based on limited model evaluations with a one-at-a-time (OAT) design; and the variance-based Sobol' sensitivity analysis (Saltelli, 2002) based on a large number of model evaluations in the parameter space with quasi-random sampling (Owen, 2003). Sensitivity analyses were applied to a dynamic Soil-Plant Deposition Model (Gonze et al., submitted to this conference) predicting foliar concentration in weeds after atmospheric radionuclide fallout. The Soil-Plant Deposition Model considers two foliage pools and a root pool, and describes foliar biomass growth with a Verhulst model. The developed semi-analytic formulation of foliar concentration…
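
    The screening half of the study's toolbox is simple enough to sketch directly. Below is a simplified one-at-a-time elementary-effects (Morris-style) screening in plain NumPy, applied to a throwaway test function rather than the Soil-Plant Deposition Model; mu* ranks input influence, sigma flags nonlinearity or interactions.

    ```python
    # Simplified Morris-style elementary effects on [0,1]^k (random OAT steps,
    # not the full trajectory design). The model f is a placeholder.
    import numpy as np

    def f(x):                                     # strong x0, weak x2
        return x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2]

    def morris(f, k, r=50, delta=0.25, seed=0):
        rng = np.random.default_rng(seed)
        ee = np.empty((r, k))
        for i in range(r):
            x = rng.uniform(0.0, 1.0 - delta, size=k)
            y0 = f(x)
            for j in range(k):
                xp = x.copy()
                xp[j] += delta
                ee[i, j] = (f(xp) - y0) / delta   # elementary effect of input j
        return np.abs(ee).mean(axis=0), ee.std(axis=0)

    mu_star, sigma = morris(f, k=3)
    print("mu*  :", np.round(mu_star, 3))
    print("sigma:", np.round(sigma, 3))
    ```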

  8. A model framework to describe growth-linked biodegradation of trace-level pesticides in the presence of coincidental carbon substrates and microbes

    DEFF Research Database (Denmark)

    Liu, Li; Helbling, Damian E.; Kohler, Hans-Peter E.

    2014-01-01

    …challenges remain in developing engineered remediation strategies for pesticide-contaminated environments because the fundamental processes that regulate growth-linked biodegradation of pesticides in natural environments remain poorly understood. In this research, we developed a model framework to describe… The processes described were: the growth-linked biodegradation of micropollutants at environmentally relevant concentrations; the effect of coincidental assimilable organic carbon substrates; and the effect of coincidental microbes that compete for assimilable organic carbon substrates. We used Monod kinetic models… to describe substrate utilization and microbial growth rates for specific pesticide and degrader pairs. We then extended the model to include terms for utilization of assimilable organic carbon substrates by the specific degrader and coincidental microbes, and for growth on assimilable organic carbon substrates…
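
    The Monod core named in the abstract, before the extensions for coincidental substrates and competing microbes, is the pair of coupled equations below. The sketch uses invented constants for a hypothetical pesticide/degrader pair.

    ```python
    # Growth-linked Monod biodegradation: dS/dt = -mu*X/Y, dX/dt = mu*X with
    # mu = mu_max*S/(Ks+S). Parameter values are illustrative placeholders.
    from scipy.integrate import solve_ivp

    mu_max, Ks, Y = 0.2, 0.05, 0.4    # 1/h, mg/L, mg biomass per mg substrate

    def monod(t, y):
        S, X = y                      # pesticide and specific degrader, mg/L
        mu = mu_max * S / (Ks + S)
        return [-mu * X / Y, mu * X]

    sol = solve_ivp(monod, (0.0, 72.0), [0.001, 0.01], max_step=0.5)
    print("residual pesticide, mg/L:", sol.y[0, -1])
    ```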

  9. Computer Models for IRIS Control System Transient Analysis

    International Nuclear Information System (INIS)

    Gary D Storrick; Bojan Petrovic; Luca Oriani

    2007-01-01

    This report presents results of the Westinghouse work performed under Task 3 of this Financial Assistance Award and it satisfies a Level 2 Milestone for the project. Task 3 of the collaborative effort between ORNL, Brazil and Westinghouse for the International Nuclear Energy Research Initiative entitled 'Development of Advanced Instrumentation and Control for an Integrated Primary System Reactor' focuses on developing computer models for transient analysis. This report summarizes the work performed under Task 3 on developing control system models. The present state of the IRIS plant design--such as the lack of a detailed secondary system or I and C system designs--makes finalizing models impossible at this time. However, this did not prevent making considerable progress. Westinghouse has several working models in use to further the IRIS design. We expect to continue modifying the models to incorporate the latest design information until the final IRIS unit becomes operational. Section 1.2 outlines the scope of this report. Section 2 describes the approaches we are using for non-safety transient models. It describes the need for non-safety transient analysis and the model characteristics needed to support those analyses. Section 3 presents the RELAP5 model. This is the highest-fidelity model used for benchmark evaluations. However, it is prohibitively slow for routine evaluations and additional lower-fidelity models have been developed. Section 4 discusses the current Matlab/Simulink model. This is a low-fidelity, high-speed model used to quickly evaluate and compare competing control and protection concepts. Section 5 describes the Modelica models developed by POLIMI and Westinghouse. The object-oriented Modelica language provides convenient mechanisms for developing models at several levels of detail. We have used this to develop a high-fidelity model for detailed analyses and a faster-running simplified model to help speed the I and C development process. Section

  10. LHCb: The Evolution of the LHCb Grid Computing Model

    CERN Multimedia

    Arrabito, L; Bouvet, D; Cattaneo, M; Charpentier, P; Clarke, P; Closier, J; Franchini, P; Graciani, R; Lanciotti, E; Mendez, V; Perazzini, S; Nandkumar, R; Remenska, D; Roiser, S; Romanovskiy, V; Santinelli, R; Stagni, F; Tsaregorodtsev, A; Ubeda Garcia, M; Vedaee, A; Zhelezov, A

    2012-01-01

    The increase in luminosity of the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. Taking advantage of these changed conditions, LHCb ran with a higher pileup as well as a much larger charm physics programme, introducing a bigger event size and longer processing times. These changes led to shortages in the offline distributed data processing resources: an increased need of CPU capacity by a factor of 2 for reconstruction, 70% higher storage needs at T1 sites, and subsequently problems with data throughput for file access from the storage elements. To accommodate these changes the online running conditions and the Computing Model for offline data processing had to be adapted accordingly. This paper describes the changes implemented for the offline data processing on the Grid, relaxing the MONARC model in a first step and going beyond it subsequently. It further describes other operational issues discovered and solved during 2011, and presents the…

  11. Computational modeling of ultra-short-pulse ablation of enamel

    Energy Technology Data Exchange (ETDEWEB)

    London, R.A.; Bailey, D.S.; Young, D.A. [and others]

    1996-02-29

    A computational model for the ablation of tooth enamel by ultra-short laser pulses is presented. The role of simulations using this model in designing and understanding laser drilling systems is discussed. Pulses of duration 300 fs and intensity greater than 10¹² W/cm² are considered. Laser absorption proceeds via a multi-photon initiated plasma mechanism. The hydrodynamic response is calculated with a finite difference method, using an equation of state constructed from thermodynamic functions including electronic, ion motion, and chemical binding terms. Results for the ablation efficiency are presented. An analytic model describing the ablation threshold and ablation depth is presented. Thermal coupling to the remaining tissue and long-time thermal conduction are calculated. Simulation results are compared to experimental measurements of the ablation efficiency. Desired improvements in the model are presented.
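
    Analytic threshold-and-depth models of this kind are commonly written as a blow-off law in which the depth removed per pulse grows logarithmically with fluence above threshold. The sketch below shows that generic law with placeholder values, not the enamel parameters from the paper.

    ```python
    # Blow-off ablation law: d = (1/alpha) * ln(F/F_th) for F > F_th, else 0.
    # alpha and F_th below are placeholders, not the paper's enamel values.
    import numpy as np

    alpha = 8e4        # 1/cm, effective absorption coefficient
    F_th = 0.5         # J/cm^2, ablation threshold fluence

    def depth_per_pulse(F):
        F = np.asarray(F, dtype=float)
        return np.where(F > F_th, np.log(np.maximum(F / F_th, 1.0)) / alpha, 0.0)

    for F in (0.3, 1.0, 2.0, 5.0):                        # J/cm^2
        print(F, f"{float(depth_per_pulse(F)) * 1e4:.3f} um/pulse")
    ```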

  12. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses the inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution of the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and the effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in…
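
    The numerical core named in the report, the Landau-Lifshitz-Gilbert equation, can be sketched as an explicit update of unit magnetization vectors on a grid. The effective field below keeps only a crude exchange term and an applied field (no anisotropy or magnetostatics), and all constants are placeholders.

    ```python
    # One explicit LLG step: dm/dt = -gamma/(1+a^2) * (m x H + a * m x (m x H)).
    # Parameters and the bare-bones effective field are illustrative only.
    import numpy as np

    gamma, a, A_ex, dt = 2.21e5, 0.05, 1e-3, 1e-13

    def llg_step(m, H_app):
        # crude exchange field: discrete Laplacian of m over the grid
        H_ex = A_ex * (np.roll(m, 1, 0) + np.roll(m, -1, 0)
                       + np.roll(m, 1, 1) + np.roll(m, -1, 1) - 4.0 * m)
        H = H_ex + H_app
        mxH = np.cross(m, H)
        dm = -gamma / (1.0 + a**2) * (mxH + a * np.cross(m, mxH))
        m = m + dt * dm
        return m / np.linalg.norm(m, axis=-1, keepdims=True)   # keep |m| = 1

    rng = np.random.default_rng(1)
    m = rng.normal(size=(32, 32, 3))
    m /= np.linalg.norm(m, axis=-1, keepdims=True)
    for _ in range(100):
        m = llg_step(m, H_app=np.array([0.0, 0.0, 1e4]))       # field along z
    ```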

  13. Parallel Computing for Terrestrial Ecosystem Carbon Modeling

    International Nuclear Information System (INIS)

    Wang, Dali; Post, Wilfred M.; Ricciuto, Daniel M.; Berry, Michael

    2011-01-01

    Terrestrial ecosystems are a primary component of research on global environmental change. Observational and modeling research on terrestrial ecosystems at the global scale, however, has lagged behind its counterparts for oceanic and atmospheric systems, largely because of the unique challenges associated with the tremendous diversity and complexity of terrestrial ecosystems. There are 8 major types of terrestrial ecosystem: tropical rain forest, savannas, deserts, temperate grassland, deciduous forest, coniferous forest, tundra, and chaparral. The carbon cycle is an important mechanism in the coupling of terrestrial ecosystems with climate through biological fluxes of CO₂. The influence of terrestrial ecosystems on atmospheric CO₂ can be modeled via several means at different timescales. Important processes include plant dynamics, change in land use, as well as ecosystem biogeography. Over the past several decades, many terrestrial ecosystem carbon models (TECMs; see the 'Model developments' section) have been developed to understand the interactions between terrestrial carbon storage and CO₂ concentration in the atmosphere, as well as the consequences of these interactions. Early TECMs generally adopted simple box-flow exchange models, in which photosynthetic CO₂ uptake and respiratory CO₂ release are simulated in an empirical manner with a small number of vegetation and soil carbon pools. Demands on the kinds and amount of information required from global TECMs have grown. Recently, along with the rapid development of parallel computing, spatially explicit TECMs with detailed process-based representations of carbon dynamics have become attractive, because those models can readily incorporate a variety of additional ecosystem processes (such as dispersal, establishment, growth, mortality etc.) and environmental factors (such as landscape position, pest populations, disturbances, resource manipulations, etc.), and provide information to frame policy options for climate change…

  14. A functional language for describing reversible logic

    DEFF Research Database (Denmark)

    Thomsen, Michael Kirkedal

    2012-01-01

    Reversible logic is a computational model where all gates are logically reversible and combined in circuits such that no values are lost or duplicated. This paper presents a novel functional language that is designed to describe only reversible logic circuits. The language includes high… Reversibility of descriptions is guaranteed with a type system based on linear types. The language is applied to three examples of reversible computations (ALU, linear cosine transformation, and binary adder). The paper also outlines a design flow that ensures garbage-free translation to reversible logic circuits. The flow relies on a reversible combinator language as an intermediate language…
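
    The defining property, that no values are lost or duplicated, is easy to demonstrate outside any particular language. The sketch below (not the paper's notation) implements two classic reversible gates as bijections on bit triples and checks that running a circuit backwards recovers every input.

    ```python
    # Toffoli and Fredkin gates as reversible maps on bit triples; both are
    # self-inverse, so the mirror-image circuit undoes the original.
    def toffoli(a, b, c):
        return a, b, c ^ (a & b)                 # controlled-controlled-NOT

    def fredkin(a, b, c):
        return (a, c, b) if a else (a, b, c)     # controlled swap

    def circuit(bits):
        return fredkin(*toffoli(*bits))

    def inverse(bits):                           # apply the gates in reverse order
        return toffoli(*fredkin(*bits))

    inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    assert all(inverse(circuit(x)) == x for x in inputs)
    print("reversible on all", len(inputs), "inputs")
    ```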

  15. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human-system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of the plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and, finally, in errors of commission in nuclear power plants. Quantitative or prescriptive models that predict an operator's situation assessment in a given situation, i.e. the results of situation assessment, provide many benefits such as HSI design solutions, human performance data, and human reliability estimates. Unfortunately, only a few computational situation assessment models for NPP operators have been proposed, and those insufficiently embed human cognitive characteristics. Thus we proposed a new computational situation assessment model of nuclear power plant operators. The proposed model, incorporating significant cognitive factors, uses a Bayesian belief network (BBN) as its model architecture. It is believed that communication between nuclear power plant operators affects their situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As a result, the proposed model succeeded in representing the operators' behavior; this paper shows the details.
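
    The role communication plays in such a model can be illustrated with a two-state belief update, the elementary operation inside any BBN. The structure and probabilities below are invented for illustration only.

    ```python
    # Hypothetical example: an operator's belief over plant states is updated
    # first by an own indicator, then by a colleague's communicated report.
    def normalize(d):
        s = sum(d.values())
        return {k: v / s for k, v in d.items()}

    prior = {"normal": 0.9, "LOCA": 0.1}
    p_indicator = {"normal": 0.2, "LOCA": 0.8}   # P(low-pressure alarm | state)
    p_comm = {"normal": 0.3, "LOCA": 0.7}        # P(colleague reports leak | state)

    belief = normalize({s: prior[s] * p_indicator[s] for s in prior})
    print("after own indicator:", belief)
    belief = normalize({s: belief[s] * p_comm[s] for s in belief})
    print("after communication:", belief)        # communication sharpens the assessment
    ```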

  16. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming students' understanding of modeling methodology in computer science lessons. The necessity of studying computer modeling is that the current trends of strengthening the general-education and worldview functions of computer science define the necessity of additional research of the…

  17. Towards GLUE 2: evolution of the computing element information model

    International Nuclear Information System (INIS)

    Andreozzi, S; Burke, S; Field, L; Konya, B

    2008-01-01

    A key advantage of Grid systems is the ability to share heterogeneous resources and services between traditional administrative and organizational domains. This ability enables virtual pools of resources to be created and assigned to groups of users. Resource awareness, the capability of users or user agents to have knowledge about the existence and state of resources, is required in order to utilize a resource. This awareness requires a description of the services and resources, typically defined via a community-agreed information model. One of the most popular information models, used by a number of Grid infrastructures, is the GLUE Schema, which provides a common language for describing Grid resources. Other approaches exist; however, they follow different modeling strategies. The presence of different flavors of information models for Grid resources is a barrier to inter-Grid interoperability. In order to solve this problem, the GLUE Working Group was started in the context of the Open Grid Forum. The purpose of the group is to oversee a major redesign of the GLUE Schema which should consider the successful modeling choices and flaws that have emerged from practical experience, as well as modeling choices from other initiatives. In this paper, we present the status of the new model for describing computing resources as the first output from the working group, with the aim of dissemination and of soliciting feedback from the community.

  18. Model to Implement Virtual Computing Labs via Cloud Computing Services

    OpenAIRE

    Washington Luna Encalada; José Luis Castillo Sequera

    2017-01-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the...

  19. Computer modelling of statistical properties of SASE FEL radiation

    International Nuclear Information System (INIS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1997-01-01

    The paper describes an approach to computer modelling of the statistical properties of the radiation from a self-amplified spontaneous emission free electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, the distribution of the fluctuations of the instantaneous radiation power, the distribution of the energy in the electron bunch, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility under construction at DESY.

  20. Computational numerical modelling of plasma focus

    International Nuclear Information System (INIS)

    Brollo, Fabricio

    2005-01-01

    Several models for calculating the dynamics of the plasma focus have been developed. All of them start from the same physical principle: the current sheet runs down the anode length, ionizing and collecting the gas that it finds in its way. This is known as the snow-plow model. For the pinch compression, an MHD model is proposed: the plasma is treated as a fluid, specifically as a highly ionized gas. However, there are not many models that, taking into account thermal equilibrium inside the plasma, make approximate calculations of the maximum temperatures reached in the pinch. Moreover, there are no models which use those temperatures to estimate the thermonuclear neutron yield for deuterium or deuterium-tritium gas fillings. In the PLADEMA network (Dense Magnetized Plasmas) a code was developed with the objective of describing the plasma focus dynamics at a conceptual-engineering stage. The code calculates the principal variables (currents, time to focus, etc.) and estimates the neutron yield in deuterium-filled plasma focus devices. The code's experimental validation, in its axial and radial stages, was very successful. However, it was accepted that the compression stage should be reformulated, to find a solution for a large variation of a parameter related to the velocity profiles of the particles trapped inside the pinch. The objectives of this work can be stated as follows: check the hypotheses of the compression model and develop a new one; implement the new model in the code; and compare results against experimental data from plasma focus devices from all around the world.
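
    The snow-plow picture reduces, in the axial run-down phase, to a one-dimensional momentum balance: the sheet sweeps up mass at the rate ρAv while being pushed by the magnetic force of the discharge current. The sketch below is a generic illustration with invented device numbers, not the PLADEMA code.

    ```python
    # Axial snow-plow phase: d(mv)/dt = F with m = rho*A*z and
    # F = (mu0*I^2/(4*pi))*ln(rc/ra); all device parameters are placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    mu0 = 4e-7 * np.pi
    rho = 4e-4                      # fill-gas density, kg/m^3 (a few mbar D2)
    ra, rc = 0.01, 0.03             # anode and cathode radii, m
    A = np.pi * (rc**2 - ra**2)     # annular channel area
    I = 300e3                       # discharge current, A (held constant here)

    F = (mu0 * I**2 / (4.0 * np.pi)) * np.log(rc / ra)   # axial J x B force

    def snowplow(t, y):
        z, v = y
        m = rho * A * max(z, 1e-6)                # mass swept up so far
        return [v, (F - rho * A * v**2) / m]      # momentum fed to new gas

    sol = solve_ivp(snowplow, (0.0, 3e-6), [1e-3, 0.0], max_step=5e-9)
    print(f"sheet position at 3 us: {sol.y[0, -1] * 100:.1f} cm")
    ```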