WorldWideScience

Sample records for plausible computational model

  1. Bisimulation for Single-Agent Plausibility Models

    DEFF Research Database (Denmark)

    Andersen, Mikkel Birkegaard; Bolander, Thomas; van Ditmarsch, H.;

    2013-01-01

    Epistemic plausibility models are Kripke models agents use to reason about the knowledge and beliefs of themselves and each other. Restricting ourselves to the single-agent case, we determine when such models are indistinguishable in the logical language containing conditional belief, i.e., we define a proper notion of bisimulation, and prove that bisimulation corresponds to logical equivalence on image-finite models. We relate our results to other epistemic notions, such as safe belief and degrees of belief. Our results imply that there are only finitely many non-bisimilar single-agent epistemic plausibility models on a finite set of propositions. This gives decidability for single-agent epistemic plausibility planning.
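
    A minimal sketch of the computational content of this result: on a finite model, a bisimulation quotient can be computed by partition refinement. The encoding below (states, valuations, and a plausibility preorder given as pairs) is an illustrative simplification; the paper's bisimulation for conditional belief is finer-grained than plain relational bisimulation.

    ```python
    def bisimulation_quotient(states, val, leq):
        """Coarsest partition refinement on a finite single-agent model.

        states: iterable of hashable state ids
        val:    dict state -> frozenset of true atoms
        leq:    set of (s, t) pairs, the plausibility preorder
        """
        # Initial partition: group states by propositional valuation.
        blocks = {}
        for s in states:
            blocks.setdefault(val[s], set()).add(s)
        partition = list(blocks.values())
        changed = True
        while changed:
            changed = False
            index = {s: i for i, b in enumerate(partition) for s in b}

            def sig(s):
                # Which blocks a state reaches via the plausibility order.
                return frozenset(index[t] for (u, t) in leq if u == s)

            refined = []
            for block in partition:
                groups = {}
                for s in block:
                    groups.setdefault(sig(s), set()).add(s)
                refined.extend(groups.values())
            if len(refined) != len(partition):
                partition, changed = refined, True
        return partition

    # Two worlds with identical valuations and matching order behaviour collapse:
    states = [0, 1, 2]
    val = {0: frozenset({"p"}), 1: frozenset({"p"}), 2: frozenset()}
    leq = {(0, 0), (1, 1), (2, 2), (0, 2), (1, 2)}
    print(bisimulation_quotient(states, val, leq))   # [{0, 1}, {2}]
    ```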

  2. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model

    Science.gov (United States)

    Aberg, Kristoffer C.; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improves memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward delivery and reward anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of…
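
    As an illustration of the kind of trial-by-trial analysis described here, the sketch below pairs a standard Rescorla-Wagner learner with a logistic encoding rule, so that both the prediction error (reward delivery) and the expected value (reward anticipation) modulate encoding probability. The weights and the link function are illustrative assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    def simulate_memory_modulation(rewards, alpha=0.1, w_pe=0.5, w_ev=0.3,
                                   base=-0.5):
        """Per-trial encoding probability driven by RW value learning.

        rewards: sequence of obtained rewards (e.g. 0/1 per trial).
        """
        V, p_encode = 0.0, []
        for r in rewards:
            pe = r - V                                  # reward prediction error
            drive = base + w_pe * abs(pe) + w_ev * V    # delivery + anticipation
            p_encode.append(1.0 / (1.0 + np.exp(-drive)))
            V += alpha * pe                             # update expectation
        return np.array(p_encode)

    print(simulate_memory_modulation([1, 1, 0, 1, 0, 0, 1]).round(2))
    ```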

  3. Analytic Models of Plausible Gravitational Lens Potentials

    Energy Technology Data Exchange (ETDEWEB)

    Baltz, Edward A.; Marshall, Phil; Oguri, Masamune

    2007-05-04

    Gravitational lenses on galaxy scales are plausibly modeled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sersic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasizing that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sersic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects of smoothly truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from those of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modeled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite-mass halo profiles when doing cosmological ray-tracing simulations, and on the need for readily calculable higher-order derivatives of the lens potential when studying catastrophes in strong lenses.
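
    The truncated profile described here is compact enough to state directly. A sketch with parameter names assumed for illustration (the paper's full hybrid model adds a Sersic baryonic component and works with the lens potential itself):

    ```python
    import numpy as np

    def rho_truncated_nfw(r, rho0, r_s, r_t, n=1):
        """Smoothly truncated NFW density: NFW times (r_t^2/(r^2+r_t^2))^n.

        n=1 gives rho ~ r^-5 and n=2 gives rho ~ r^-7 beyond the tidal
        radius r_t, so the total mass converges while the falloff stays
        smooth (per the abstract).
        """
        x = r / r_s
        nfw = rho0 / (x * (1.0 + x) ** 2)
        return nfw * (r_t**2 / (r**2 + r_t**2)) ** n

    r = np.logspace(-2, 3, 6)           # radii in units of choice
    print(rho_truncated_nfw(r, rho0=1.0, r_s=10.0, r_t=100.0, n=2))
    ```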

  4. Some Remarks on the Model Theory of Epistemic Plausibility Models

    CERN Document Server

    Demey, Lorenz

    2010-01-01

    Classical logics of knowledge and belief are usually interpreted on Kripke models, for which a mathematically well-developed model theory is available. However, such models are inadequate to capture dynamic phenomena. Therefore, epistemic plausibility models have been introduced. Because these are much richer structures than Kripke models, they do not straightforwardly inherit the model-theoretical results of modal logic. Therefore, while epistemic plausibility structures are well-suited for modeling purposes, an extensive investigation of their model theory has been lacking so far. The aim of the present paper is to fill exactly this gap, by initiating a systematic exploration of the model theory of epistemic plausibility models. As in 'ordinary' modal logic, the focus will be on the notion of bisimulation. We define various notions of bisimulation (parametrized by a language L) and show that L-bisimilarity implies L-equivalence. We prove a Hennessy-Milner type result, and also two undefinability results. …

  5. Plausible cloth animation using dynamic bending model

    Institute of Scientific and Technical Information of China (English)

    Chuan Zhou; Xiaogang Jin; Charlie C.L. Wang; Jieqing Feng

    2008-01-01

    Simulating the mechanical behavior of cloth is a challenging and important problem in computer animation. The bending models in most existing cloth simulation approaches assume that the cloth deviates only slightly from a plate shape. Being based on thin-plate theory, these models cannot handle the case where the current shape of a cloth under large deformation is no longer an approximation of its undeformed shape, which leads to unrealistic static bending. This paper introduces a dynamic bending model appropriate for describing large out-of-plane deformations such as cloth buckling and bending, and develops a compact implementation of the new model on spring-mass systems. Experimental results show that wrinkles and folds generated with this technique appear and vanish more naturally than in other approaches.

  6. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation

    National Research Council Canada - National Science Library

    Phillips, Lawrence; Pearl, Lisa

    2015-01-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility...

  7. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence from Word Segmentation

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-01-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…

  8. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition.

  9. Don't Plan for the Unexpected: Planning Based on Plausibility Models

    DEFF Research Database (Denmark)

    Andersen, Mikkel Birkegaard; Bolander, Thomas; Jensen, Martin Holm

    2015-01-01

    We present a framework for automated planning based on plausibility models, as well as algorithms for computing plans in this framework. Our plausibility models include postconditions, as ontic effects are essential for most planning purposes. The framework presented extends a previously developed framework based on dynamic epistemic logic (DEL), without plausibilities/beliefs. In the pure epistemic framework, one can distinguish between strong and weak epistemic plans for achieving some, possibly epistemic, goal. By taking all possible outcomes of actions into account, a strong plan guarantees that the agent achieves this goal. Conversely, a weak plan promises only the possibility of leading to the goal. In real-life planning scenarios where the planning agent is faced with a high degree of uncertainty and an almost endless number of possible exogenous events, strong epistemic planning…
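
    To make the strong/weak distinction concrete, here is a toy breadth-first planner over nondeterministic actions. It is not the paper's DEL-based algorithm: states are opaque hashables, each action maps a state to a set of possible outcomes, and the strong case is deliberately simplified (no conditional branching).

    ```python
    from collections import deque

    def find_plan(start, goal_test, actions, strong=True, max_depth=20):
        """Return a list of action names, or None.

        strong=True: only follow actions whose every outcome coincides, so
        the goal is guaranteed (a real strong planner branches instead).
        strong=False: a single outcome path to the goal suffices (weak plan).
        """
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, plan = frontier.popleft()
            if goal_test(state):
                return plan
            if len(plan) >= max_depth:
                continue
            for name, step in actions.items():
                outcomes = set(step(state))
                if not outcomes:
                    continue
                follow = outcomes if not strong else (
                    outcomes if len(outcomes) == 1 else set())
                for nxt in follow:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, plan + [name]))
        return None

    # A flaky action yields a weak plan where no (simplified) strong one exists.
    acts = {"flip": lambda s: {"goal", "start"}, "walk": lambda s: {"mid"}}
    print(find_plan("start", lambda s: s == "goal", acts, strong=False))  # ['flip']
    print(find_plan("start", lambda s: s == "goal", acts, strong=True))   # None
    ```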

  10. A biologically plausible embodied model of action discovery

    Directory of Open Access Journals (Sweden)

    Rufino Bolado-Gomez

    2013-03-01

    During development, animals can spontaneously discover action-outcome pairings, enabling subsequent achievement of their goals. We present a biologically plausible embodied model addressing key aspects of this process. The biomimetic model core comprises the basal ganglia and its loops through cortex and thalamus. We incorporate reinforcement learning with phasic dopamine supplying a sensory prediction error, signalling 'surprising' outcomes. Phasic dopamine is used in a corticostriatal learning rule which is consistent with recent data. We also hypothesised that objects associated with surprising outcomes acquire 'novelty salience' contingent on the predictability of the outcome. To test this idea we used a simple model of prediction governing the dynamics of novelty salience and phasic dopamine. The task of the virtual robotic agent mimicked an in vivo counterpart (Gancarz et al., 2011) and involved interaction with a target object which caused a light flash, or a control object which did not. Learning took place according to two schedules. In one, the phasic outcome was delivered after interaction with the target in an unpredictable way which emulated the in vivo protocol. Without novelty salience, the model was unable to account for the experimental data. In the other schedule, the phasic outcome was reliably delivered and the agent showed a rapid increase in the number of interactions with the target, which then decreased over subsequent sessions. We argue this is precisely the kind of change in behaviour required to repeatedly present representations of context, action and outcome to neural networks responsible for learning action-outcome contingency. The model also showed corticostriatal plasticity consistent with learning a new action in basal ganglia. We conclude that action learning is underpinned by a complex interplay of plasticity and stimulus salience, and that our model contains many of the elements for biological action discovery to take place.
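
    The novelty-salience dynamics described above fit in a few lines; the names and constants below are illustrative assumptions, not the published model:

    ```python
    def action_discovery_step(P, salience, outcome_occurred, eta=0.2):
        """One interaction with an object.

        P: learned prediction that interacting yields the phasic outcome.
        Phasic dopamine scales with the (positive) sensory prediction error,
        and the object's novelty salience tracks the error magnitude, so the
        bias toward the object fades once its outcome becomes predictable.
        """
        pe = (1.0 if outcome_occurred else 0.0) - P   # sensory prediction error
        P += eta * pe                                  # outcome predictor
        dopamine = max(pe, 0.0)                        # phasic burst on surprise
        salience += eta * (abs(pe) - salience)         # novelty salience decays
        return P, salience, dopamine

    # Reliable delivery: salience (and interaction bias) rises, then falls.
    P, sal = 0.0, 0.0
    for trial in range(12):
        P, sal, da = action_discovery_step(P, sal, outcome_occurred=True)
        print(trial, round(P, 2), round(sal, 2), round(da, 2))
    ```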

  11. Design of a neurally plausible model of fear learning

    Directory of Open Access Journals (Sweden)

    Franklin B. Krasne

    2011-07-01

    A neurally oriented conceptual and computational model of fear conditioning ("Fraidy Rat", or FRAT) has been constructed that accounts for many aspects of delay and context conditioning. Conditioning and extinction are the result of neuromodulation-controlled LTP at synapses of thalamic, cortical, and hippocampal afferents on principal cells and inhibitory interneurons of the lateral and basal amygdala. The phenomena accounted for by the model (and simulated by the computational version) include conditioning, secondary reinforcement, blocking, the immediate shock deficit, extinction, renewal, and a range of empirically valid effects of pre- and post-training ablation or inactivation of hippocampus or amygdala nuclei.

  12. Understanding Karma Police: The Perceived Plausibility of Noun Compounds as Predicted by Distributional Models of Semantic Representation

    Science.gov (United States)

    Günther, Fritz; Marelli, Marco

    2016-01-01

    Noun compounds, consisting of two nouns (the head and the modifier) that are combined into a single concept, differ in terms of their plausibility: school bus is a more plausible compound than saddle olive. The present study investigates which factors influence the plausibility of attested and novel noun compounds. Distributional Semantic Models (DSMs) are used to obtain formal (vector) representations of word meanings, and compositional methods in DSMs are employed to obtain such representations for noun compounds. From these representations, different plausibility measures are computed. Three of those measures contribute to predicting the plausibility of noun compounds: the relatedness between the meaning of the head noun and the compound (Head Proximity), the relatedness between the meaning of the modifier noun and the compound (Modifier Proximity), and the similarity between the head noun and the modifier noun (Constituent Similarity). We find non-linear interactions between Head Proximity and Modifier Proximity, as well as between Modifier Proximity and Constituent Similarity. Furthermore, Constituent Similarity interacts non-linearly with the familiarity with the compound. These results suggest that a compound is perceived as more plausible if it can be categorized as an instance of the category denoted by the head noun, if the contribution of the modifier to the compound meaning is clear but not redundant, and if the constituents are sufficiently similar in cases where this contribution is not clear. Furthermore, compounds are perceived to be more plausible if they are more familiar, but mostly for cases where the relation between the constituents is less clear. PMID:27732599
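
    The three predictive measures are direct cosine computations once vectors are available. A sketch in which the compound vector is just the sum of the constituents; that additive step is a simplifying assumption standing in for the trained compositional DSM used in the study:

    ```python
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def compound_measures(head_vec, mod_vec, compound_vec):
        return {
            "head_proximity": cosine(head_vec, compound_vec),
            "modifier_proximity": cosine(mod_vec, compound_vec),
            "constituent_similarity": cosine(head_vec, mod_vec),
        }

    # Stand-in random vectors; real use would load trained DSM embeddings.
    rng = np.random.default_rng(0)
    head, mod = rng.normal(size=300), rng.normal(size=300)
    print(compound_measures(head, mod, head + mod))
    ```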

  13. A biologically plausible model of human shape symmetry perception.

    Science.gov (United States)

    Poirier, Frédéric J A M; Wilson, Hugh R

    2010-01-19

    Symmetry is usually computationally expensive to encode reliably, and yet it is relatively effortless to perceive. Here, we extend F. J. A. M. Poirier and H. R. Wilson's (2006) model for shape perception to account for H. R. Wilson and F. Wilkinson's (2002) data on shape symmetry. Because the model already accounts for shape perception, only minimal neural circuitry is required to enable it to encode shape symmetry as well. The model is composed of three main parts: (1) recovery of object position using large-scale non-Fourier V4-like concentric units that respond at the center of concentric contour segments across orientations, (2) around that recovered object center, curvature mechanisms combine multiplicatively the responses of oriented filters to encode object-centric local shape information, with a preference for convexities, and (3) object-centric symmetry mechanisms. Model and human performances are comparable for symmetry perception of shapes. Moreover, with some improvement of edge recovery, the model can encode symmetry axes in natural images such as faces.

  14. Physically plausible prescription of land surface model soil moisture

    Science.gov (United States)

    Hauser, Mathias; Orth, René; Thiery, Wim; Seneviratne, Sonia

    2016-04-01

    Land surface hydrology is an important control of surface weather and climate, especially under extreme dry or wet conditions, where it can amplify heat waves or floods, respectively. Prescribing soil moisture in land surface models is a valuable technique for investigating this link between hydrology and climate. It has been used, for example, to assess the influence of soil moisture on temperature variability, means, and extremes (Seneviratne et al. 2006, 2013; Lorenz et al. 2015). However, perturbing the soil moisture content artificially can lead to a violation of the energy and water balances. Here we present a new method for prescribing soil moisture which ensures water and energy balance closure by using only water from runoff and a reservoir term. If water is available, the method prevents soil moisture from decreasing below climatological values. Results from simulations with the Community Land Model (CLM) indicate that our new method avoids soil moisture deficits in many regions of the world. We show the influence of the irrigation-supported soil moisture content on mean and extreme temperatures and contrast our findings with those of earlier studies. Additionally, we assess how long into the 21st century the new method can maintain present-day climatological soil moisture levels for different regions. Lorenz, R., Argüeso, D., Donat, M.G., Pitman, A.J., van den Hurk, B., Berg, A., Lawrence, D.M., Chéruy, F., Ducharne, A., Hagemann, S. and Meier, A., 2015. Influence of land-atmosphere feedbacks on temperature and precipitation extremes in the GLACE-CMIP5 ensemble. Journal of Geophysical Research: Atmospheres. Seneviratne, S.I., Lüthi, D., Litschi, M. and Schär, C., 2006. Land-atmosphere coupling and climate change in Europe. Nature, 443(7108), pp.205-209. Seneviratne, S.I., Wilhelm, M., Stanelle, T., van den Hurk, B., Hagemann, S., Berg, A., Cheruy, F., Higgins, M.E., Meier, A., Brovkin, V. and Claussen, M., 2013. Impact of soil moisture …
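
    A minimal sketch of the prescription rule as described, raising soil moisture toward its climatological target using only runoff and a reservoir term so the water budget stays closed; variable names and units are assumptions:

    ```python
    def prescribe_soil_moisture(sm, sm_clim, runoff, reservoir):
        """One time step, all quantities in kg m^-2 of water.

        Soil moisture is only topped up (never forced below climatology),
        and every kilogram added is removed from runoff first, then from
        the reservoir, so the grid-cell water balance closes by design.
        """
        deficit = max(sm_clim - sm, 0.0)
        from_runoff = min(deficit, runoff)
        from_reservoir = min(deficit - from_runoff, reservoir)
        sm += from_runoff + from_reservoir
        return sm, runoff - from_runoff, reservoir - from_reservoir

    print(prescribe_soil_moisture(sm=18.0, sm_clim=25.0, runoff=4.0,
                                  reservoir=10.0))   # (25.0, 0.0, 7.0)
    ```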

  15. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation

    KAUST Repository

    Abdellah, Marwan

    2017-02-15

    Background: We present a visualization pipeline capable of accurate rendering of highly scattering fluorescent neocortical neuronal models. The pipeline is mainly developed to serve the computational neurobiology community. It allows scientists to visualize the results of their virtual experiments that are performed in computer simulations, or in silico. The impact of the presented pipeline opens novel avenues for assisting neuroscientists to build biologically accurate models of the brain. These models result from computer simulations of physical experiments that use fluorescence imaging to understand the structural and functional aspects of the brain. Due to the limited capabilities of current visualization workflows to handle fluorescent volumetric datasets, we propose a physically-based optical model that can accurately simulate light interaction with fluorescent-tagged scattering media based on the basic principles of geometric optics and Monte Carlo path tracing. We also develop an automated and efficient framework for generating dense fluorescent tissue blocks from a neocortical column model that is composed of approximately 31000 neurons. Results: Our pipeline is used to visualize a virtual fluorescent tissue block of 50 μm³ that is reconstructed from the somatosensory cortex of juvenile rat. The fluorescence optical model is qualitatively analyzed and validated against experimental emission spectra of different fluorescent dyes from the Alexa Fluor family. Conclusion: We discussed a scientific visualization pipeline for creating images of synthetic neocortical neuronal models that are tagged virtually with fluorescent labels on a physically-plausible basis. The pipeline is applied to analyze and validate simulation data generated from neuroscientific in silico experiments.

  16. A simple biophysically plausible model for long time constants in single neurons.

    Science.gov (United States)

    Tiganj, Zoran; Hasselmo, Michael E; Howard, Marc W

    2015-01-01

    Recent work in computational neuroscience and cognitive psychology suggests that a set of cells that decay exponentially could be used to support memory for the time at which events took place. Analytically and through simulations on a biophysical model of an individual neuron, we demonstrate that exponentially decaying firing with a range of time constants up to minutes could be implemented using a simple combination of well-known neural mechanisms. In particular, we consider firing supported by a calcium-controlled cation current. When the amount of calcium leaving the cell during an interspike interval is larger than the calcium influx during a spike, the overall decay in calcium concentration can be exponential, resulting in exponential decay of the firing rate. The time constant of the decay can be several orders of magnitude larger than the time constant of calcium clearance, and it could be controlled externally in a variety of biologically plausible ways. The ability to flexibly and rapidly control time constants could enable working memory of temporal history to be generalized to other variables in computing spatial and ordinal representations.
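
    The proposed mechanism fits in a short simulation. Parameter values below are assumptions chosen so calcium clearance only slightly outweighs per-spike influx; the firing rate then decays with an effective time constant tau_ca / (1 - gain * influx * tau_ca) = 10 s, fifty times the clearance time constant:

    ```python
    import numpy as np

    def simulate_decaying_rate(ca0=1.0, influx=0.098, tau_ca=0.2, gain=50.0,
                               t_end=30.0, dt=1e-3):
        """Firing driven by a calcium-activated cation current (rate = gain*Ca);
        each spike adds `influx` to Ca, which clears with time constant tau_ca.
        Spike timing is approximated as regular at the instantaneous rate."""
        ca, t, next_spike = ca0, 0.0, 0.0
        times, rates = [], []
        while t < t_end:
            rate = gain * ca
            if t >= next_spike and rate > 1e-9:
                ca += influx                    # per-spike calcium entry
                next_spike = t + 1.0 / rate     # crude regular-firing assumption
            ca -= (ca / tau_ca) * dt            # calcium clearance
            t += dt
            times.append(t); rates.append(rate)
        return np.array(times), np.array(rates)

    t, r = simulate_decaying_rate()
    print(r[0], r[-1])   # ~50 Hz decaying toward ~2-3 Hz over 30 s
    ```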

  17. Plausibility of stromal initiation of epithelial cancers without a mutation in the epithelium: a computer simulation of morphostats

    Directory of Open Access Journals (Sweden)

    Antonio Cappuccio

    2009-03-01

    Background: There is experimental evidence from animal models favoring the notion that the disruption of interactions between stroma and epithelium plays an important role in the initiation of carcinogenesis. These disrupted interactions are hypothesized to be mediated by molecules, termed morphostats, which diffuse through the tissue to determine cell phenotype and maintain tissue architecture. Methods: We developed a computer simulation based on simple properties of cell renewal and morphostats. Results: Under the computer simulation, the disruption of the morphostat gradient in the stroma generated epithelial precursors of cancer without any mutation in the epithelium. Conclusion: The model is consistent with the possibility that the accumulation of genetic and epigenetic changes found in tumors could arise after the formation of a founder population of aberrant cells, defined as cells that are created by low or insufficient morphostat levels and that no longer respond to morphostat concentrations. Because the model is biologically plausible, we hope that these results will stimulate further experiments.
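
    A toy one-dimensional version of the simulation: stromal cells secrete a morphostat that diffuses and decays, and epithelial cells falling below a threshold switch to an aberrant phenotype with no epithelial mutation. All parameters are illustrative:

    ```python
    import numpy as np

    def morphostat_step(m, source, D=0.2, decay=0.05):
        """Discrete diffusion + first-order decay + secretion at source cells."""
        lap = np.roll(m, 1) + np.roll(m, -1) - 2 * m
        return m + D * lap - decay * m + source

    n, steps, threshold = 100, 4000, 0.5
    source = np.zeros(n); source[:10] = 0.1      # stromal secretion zone
    m = np.zeros(n)
    for _ in range(steps):
        m = morphostat_step(m, source)
    print((m < threshold).sum(), "cells below threshold with intact stroma")
    source[:] = 0.0                               # stromal disruption
    for _ in range(steps):
        m = morphostat_step(m, source)
    print((m < threshold).sum(), "cells below threshold after disruption")
    ```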

  1. A biologically plausible model of time-scale invariant interval timing.

    Science.gov (United States)

    Almeida, Rita; Ledberg, Anders

    2010-02-01

    The temporal durations between events often exert a strong influence over behavior. The details of this influence have been extensively characterized in behavioral experiments in different animal species. A remarkable feature of the data collected in these experiments is that they are often time-scale invariant. This means that response measurements obtained under intervals of different durations coincide when plotted as functions of relative time. Here we describe a biologically plausible model of an interval timing device and show that it is consistent with time-scale invariant behavior over a substantial range of interval durations. The model consists of a set of bistable units that switch from one state to the other at random times. We first use an abstract formulation of the model to derive exact expressions for some key quantities and to demonstrate time-scale invariance for any range of interval durations. We then show how the model could be implemented in the nervous system through a generic and biologically plausible mechanism. In particular, we show that any system that can display noise-driven transitions from one stable state to another can be used to implement the timing device. Our work demonstrates that a biologically plausible model can qualitatively account for a large body of data and thus provides a link between the biology and behavior of interval timing.
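
    The abstract formulation invites a very small simulation. Below, each bistable unit flips once at an exponentially distributed random time; scaling the switching rate with the target interval makes the read-out a function of relative time t/T only, which is the time-scale invariance at issue. The specific read-out is an illustrative choice:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fraction_switched(t, rate, n_units=2000):
        """Fraction of bistable units that have flipped by time t."""
        switch_times = rng.exponential(1.0 / rate, size=n_units)
        return float(np.mean(switch_times <= t))

    # Nearly identical curves in relative time for very different durations:
    for T in (2.0, 10.0, 30.0):
        readout = [fraction_switched(f * T, rate=1.0 / T)
                   for f in (0.25, 0.5, 1.0)]
        print(T, [round(x, 2) for x in readout])
    ```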

  2. A biological plausible Generalized Leaky Integrate-and-Fire neuron model.

    Science.gov (United States)

    Wang, Zhenzhong; Guo, Lilin; Adjouadi, Malek

    2014-01-01

    This study introduces a new Generalized Leaky Integrate-and-Fire (GLIF) neuron model. Unlike Normal Leaky Integrate-and-Fire (NLIF) models, the leaking resistor in the GLIF model equation is assumed to be variable, and a bias-current term is added to the model equation to improve accuracy. By adjusting the parameters defined for the leaking resistor and bias current, a GLIF model can be accurately matched to any Hodgkin-Huxley (HH) model and reproduce plausible biological neuron behaviors.
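
    A sketch of a single integration step under the stated generalization, with the leak conductance and the bias current supplied as functions rather than constants; the default forms and all constants are placeholders, not fitted values:

    ```python
    def glif_step(v, t, i_inj, dt=1e-4, c_m=1.0, v_rest=-65.0, v_thresh=-50.0,
                  v_reset=-70.0, g_leak=lambda v, t: 0.1,
                  i_bias=lambda v, t: 0.0):
        """One Euler step; returns (new_v, spiked).

        Making g_leak and i_bias functions of (v, t) is what distinguishes
        this GLIF from the normal LIF and is the handle used to match a
        target Hodgkin-Huxley model.
        """
        dv = (-g_leak(v, t) * (v - v_rest) + i_bias(v, t) + i_inj) / c_m
        v = v + dt * dv
        if v >= v_thresh:
            return v_reset, True
        return v, False
    ```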

  3. Identifying plausible genetic models based on association and linkage results: application to type 2 diabetes.

    Science.gov (United States)

    Guan, Weihua; Boehnke, Michael; Pluzhnikov, Anna; Cox, Nancy J; Scott, Laura J

    2012-12-01

    When planning resequencing studies for complex diseases, previous association and linkage studies can constrain the range of plausible genetic models for a given locus. Here, we explore the combinations of causal risk allele frequency (RAFC) and genotype relative risk (GRRC) consistent with no or limited evidence for affected sibling pair (ASP) linkage and strong evidence for case-control association. We find that significant evidence for case-control association combined with no or moderate evidence for ASP linkage can define a lower bound for the plausible RAFC. Using data from large type 2 diabetes (T2D) linkage and genome-wide association study meta-analyses, we find that under reasonable model assumptions, 23 of 36 autosomal T2D risk loci are unlikely to be due to causal variants with combined RAFC < 0.005, and four of the 23 are unlikely to be due to causal variants with combined RAFC < 0.05.
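
    The consistency check can be reproduced with Risch's single-locus variance-components formulas; the multiplicative risk model and the baseline penetrance below are assumptions:

    ```python
    def sibling_relative_risk(p, grr, f0=0.01):
        """Expected lambda_s for one biallelic locus.

        p: risk allele frequency; grr: per-allele (multiplicative) genotype
        relative risk; f0: assumed baseline penetrance.
        """
        q = 1.0 - p
        f1, f2 = f0 * grr, f0 * grr**2                # multiplicative model
        K = q*q*f0 + 2*p*q*f1 + p*p*f2                # population risk
        Va = 2*p*q * (p*(f2 - f1) + q*(f1 - f0))**2   # additive variance
        Vd = (p*q)**2 * (f2 - 2*f1 + f0)**2           # dominance variance
        return 1.0 + (Va/2 + Vd/4) / K**2

    # A common, modest-effect variant barely inflates lambda_s, which is why
    # strong association plus weak linkage argues against very rare alleles.
    print(sibling_relative_risk(p=0.2, grr=1.2))      # ~1.006
    ```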

  4. A Biomass-based Model to Estimate the Plausibility of Exoplanet Biosignature Gases

    CERN Document Server

    Seager, S; Hu, R

    2013-01-01

    Biosignature gas detection is one of the ultimate future goals for exoplanet atmosphere studies. We have created a framework for linking biosignature gas detectability to biomass estimates, including atmospheric photochemistry and biological thermodynamics. The new framework is intended to liberate predictive atmosphere models from requiring fixed, Earth-like biosignature gas source fluxes. New biosignature gases can be considered with a check that the biomass estimate is physically plausible. We have validated the models on terrestrial production of NO, H2S, CH4, CH3Cl, and DMS. We have applied the models to propose NH3 as a biosignature gas on a "cold Haber World," a planet with an N2-H2 atmosphere, and to demonstrate why gases such as CH3Cl would require too large a biomass to be a plausible biosignature gas on planets with Earth-like or early-Earth-like atmospheres orbiting a Sun-like star. To construct the biomass models, we developed a functional classification of biosignature gases, and found that gases (such…

  5. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    Directory of Open Access Journals (Sweden)

    Ronald van den Berg

    2010-01-01

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.

  6. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    Science.gov (United States)

    van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W

    2010-01-22

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
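
    A miniature version of the population-coding account, to show where "compulsory averaging" comes from: peripheral pooling sums tuned responses over target and flankers, and decoding the pooled activity returns an orientation between them. Tuning form and parameters are illustrative:

    ```python
    import numpy as np

    def pooled_population_response(orientations, kappa=8.0, n_units=64):
        """Sum of von Mises (180-deg periodic) tuned responses over stimuli."""
        pref = np.linspace(0.0, 180.0, n_units, endpoint=False)
        resp = np.zeros(n_units)
        for theta in orientations:
            d = np.deg2rad(2.0 * (pref - theta))
            resp += np.exp(kappa * (np.cos(d) - 1.0))
        return pref, resp

    pref, resp = pooled_population_response([80, 100, 100])  # target + flankers
    z = np.sum(resp * np.exp(1j * np.deg2rad(2.0 * pref)))   # population vector
    decoded = (np.rad2deg(np.angle(z)) / 2.0) % 180.0
    print(round(decoded, 1))   # ~93: pulled from 80 toward the flankers
    ```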

  7. Computable models

    CERN Document Server

    Turner, Raymond

    2009-01-01

    Computational models can be found everywhere in present-day science and engineering. Raymond Turner provides a logical framework and foundation for the specification and design of specification languages, and uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications, from programming language semantics and specification languages, through knowledge representation languages, to formalisms for natural language semantics. They are al…

  8. Charting plausible futures for diabetes prevalence in the United States: a role for system dynamics simulation modeling.

    Science.gov (United States)

    Milstein, Bobby; Jones, Andrew; Homer, Jack B; Murphy, Dara; Essien, Joyce; Seville, Don

    2007-07-01

    Healthy People 2010 (HP 2010) objectives call for a 38% reduction in the prevalence of diagnosed diabetes mellitus, type 1 and type 2, by the year 2010. The process for setting this objective, however, did not focus on the achievability or the compatibility of this objective with other national public health objectives. We used a dynamic simulation model to explore plausible trajectories for diabetes prevalence in the wake of rising levels of obesity in the U.S. population. The model helps to interpret historic trends in diabetes prevalence in the United States and to anticipate plausible future trends through 2010. We conducted simulation experiments using a computer model of diabetes population dynamics to 1) track the rates at which people develop diabetes, are diagnosed with the disease, and die, and 2) assess the effects of various preventive-care interventions. System dynamics modeling methodology based on data from multiple sources guided the analyses. With the number of new cases of diabetes being much greater than the number of deaths among those with the disease, the prevalence of diagnosed diabetes in the United States is likely to continue to increase. Even a 29% reduction in the number of new cases (the HP 2010 objective) would only slow the growth, not reverse it. Increased diabetes detection rates or decreased mortality rates--also HP 2010 objectives--would further increase diagnosed prevalence. The HP 2010 objective for reducing diabetes prevalence is unattainable given the historical processes that are affecting incidence, diagnosis, and mortality, and even a zero-growth future is unlikely. System dynamics modeling shows why interventions to protect against chronic diseases have only gradual effects on their diagnosed prevalence.
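
    The core stock-and-flow argument is easy to reproduce: diagnosed prevalence keeps rising while inflow (new diagnosed cases) exceeds outflow (deaths among the diagnosed), so even the HP 2010 incidence cut only slows the growth. All numbers below are illustrative, not the calibrated model:

    ```python
    def project_prevalence(years=20, pop=300e6, prev0=0.055,
                           new_cases=1.4e6, death_rate=0.028, inc_cut=0.0):
        """Euler-step a single stock of diagnosed cases."""
        stock = prev0 * pop
        for _ in range(years):
            stock += new_cases * (1.0 - inc_cut) - death_rate * stock
        return stock / pop

    print(round(project_prevalence(), 4))              # baseline: still growing
    print(round(project_prevalence(inc_cut=0.29), 4))  # with the cut: slower growth
    ```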

  9. Decision-theoretic saliency: computational principles, biological plausibility, and implications for neurophysiology and psychophysics.

    Science.gov (United States)

    Gao, Dashan; Vasconcelos, Nuno

    2009-01-01

    A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense and the optimal saliency detector derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
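
    A simplified stand-in for the discriminant computation (the paper derives a closed form tied to natural-image statistics; plain histograms and a KL divergence convey the idea):

    ```python
    import numpy as np

    def discriminant_saliency(center, surround, bins=32):
        """Saliency of a feature at a location: how well its responses in the
        center window discriminate against the surround, scored as KL(c||s)."""
        lo = min(center.min(), surround.min())
        hi = max(center.max(), surround.max())
        pc, _ = np.histogram(center, bins=bins, range=(lo, hi))
        ps, _ = np.histogram(surround, bins=bins, range=(lo, hi))
        pc = (pc + 1e-9) / (pc + 1e-9).sum()
        ps = (ps + 1e-9) / (ps + 1e-9).sum()
        return float(np.sum(pc * np.log(pc / ps)))

    rng = np.random.default_rng(0)
    pop_out = discriminant_saliency(rng.normal(3, 1, 500), rng.normal(0, 1, 5000))
    blend = discriminant_saliency(rng.normal(0, 1, 500), rng.normal(0, 1, 5000))
    print(pop_out > blend)   # True: distinctive centers are salient
    ```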

  10. Computational models of syntactic acquisition.

    Science.gov (United States)

    Yang, Charles

    2012-03-01

    The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154 For further resources related to this article, please visit the WIREs website.

  11. One-pot synthesis of tetrazole-1,2,5,6-tetrahydronicotinonitriles and cholinesterase inhibition: Probing the plausible reaction mechanism via computational studies.

    Science.gov (United States)

    Hameed, Abdul; Zehra, Syeda Tazeen; Abbas, Saba; Nisa, Riffat Un; Mahmood, Tariq; Ayub, Khurshid; Al-Rashida, Mariya; Bajorath, Jürgen; Khan, Khalid Mohammed; Iqbal, Jamshed

    2016-04-01

    In the present study, a one-pot synthesis of 1H-tetrazole-linked 1,2,5,6-tetrahydronicotinonitriles under solvent-free conditions has been carried out in the presence of tetra-n-butylammonium fluoride trihydrate (TBAF) acting as both catalyst and solvent. Computational studies have been conducted to elaborate two plausible mechanistic pathways for this one-pot reaction. Moreover, the synthesized compounds were screened for inhibition of the cholinesterases (acetylcholinesterase and butyrylcholinesterase), which are considered major culprits in Alzheimer's disease (AD), to find lead compounds for further research in AD therapy.

  12. Looking for plausibility

    CERN Document Server

    Abdullah, Wan Ahmad Tajuddin Wan

    2010-01-01

    In the interpretation of experimental data, one is actually looking for plausible explanations. We look for a measure of plausibility with which we can compare different possible explanations, and which can be combined when there are different sets of data. This is contrasted with the conventional measure of probability as well as with the proposed measure of possibility. We define what characteristics this measure of plausibility should have. In arriving at this measure, we explore the relation of plausibility to abductive reasoning and to Bayesian probabilities. We also compare it with the Dempster-Shafer theory of evidence, which has its own definition of plausibility. Abduction can be associated with biconditionality in inference rules, and this provides a platform to relate to the Collins-Michalski theory of plausibility. Finally, using a formalism for wiring logic onto Hopfield neural networks, we ask whether this is relevant to obtaining this measure.

  13. In Silico Structure Prediction of Human Fatty Acid Synthase-Dehydratase: A Plausible Model for Understanding Active Site Interactions.

    Science.gov (United States)

    John, Arun; Umashankar, Vetrivel; Samdani, A; Sangeetha, Manoharan; Krishnakumar, Subramanian; Deepa, Perinkulam Ravi

    2016-01-01

    Fatty acid synthase (FASN, UniProt ID: P49327) is a multienzyme dimer complex that plays a critical role in lipogenesis. Consequently, this lipogenic enzyme has gained tremendous biomedical importance. The role of FASN and its inhibition is being extensively researched in several clinical conditions, such as cancers, obesity, and diabetes. X-ray crystallographic structures of some of its domains, such as β-ketoacyl synthase, acetyl transacylase, malonyl transacylase, enoyl reductase, β-ketoacyl reductase, and thioesterase (TE), have already been reported. Here, we have attempted an in silico elucidation of the uncrystallized dehydratase (DH) catalytic domain of human FASN. This theoretical model for the DH domain was predicted using comparative modeling methods. Different stand-alone tools and servers were used to validate and check the reliability of the predicted models, which suggested it to be a highly plausible model. The stereochemical analysis showed 92.0% of residues in the favorable region of the Ramachandran plot. The initial physiological substrate β-hydroxybutyryl group was docked into the active site of the DH domain using Glide. The molecular dynamics simulations carried out for 20 ns in apo and holo states indicated the stability and accuracy of the predicted structure in solvated condition. The predicted model provided useful biochemical insights into the substrate-active site binding mechanisms. This model was then used for identifying potential FASN inhibitors using high-throughput virtual screening of the National Cancer Institute database of chemical ligands. The inhibitory efficacy of the top hit ligands was validated by performing molecular dynamics simulation for 20 ns, wherein the ligand NSC71039 exhibited good enzyme inhibition characteristics and dose-dependent anticancer cytotoxicity in retinoblastoma cancer cells in vitro.

  14. Testing the physiological plausibility of conflicting psychological models of response inhibition: A forward inference fMRI study.

    Science.gov (United States)

    Criaud, Marion; Longcamp, Marieke; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Sescousse, Guillaume; Strafella, Antonio P; Ballanger, Bénédicte; Boulinguez, Philippe

    2017-08-30

    The neural mechanisms underlying response inhibition and related disorders are unclear and controversial for several reasons. First, it is a major challenge to assess the psychological bases of behaviour, and ultimately brain-behaviour relationships, of a function which is precisely intended to suppress overt measurable behaviours. Second, response inhibition is difficult to disentangle from other parallel processes involved in more general aspects of cognitive control. Consequently, different psychological and anatomo-functional models coexist, which often appear in conflict with each other even though they are not necessarily mutually exclusive. The standard model of response inhibition in go/no-go tasks assumes that inhibitory processes are reactively and selectively triggered by the stimulus that participants must refrain from reacting to. Recent alternative models suggest that action restraint could instead rely on reactive but non-selective mechanisms (all automatic responses are automatically inhibited in uncertain contexts) or on proactive and non-selective mechanisms (a gating function by which reaction to any stimulus is prevented in anticipation of stimulation when the situation is unpredictable). Here, we assessed the physiological plausibility of these different models by testing their respective predictions regarding event-related BOLD modulations (forward inference using fMRI). We set up a single fMRI design which allowed us to record simultaneously the different possible forms of inhibition while limiting confounds between response inhibition and parallel cognitive processes. We found BOLD dynamics consistent with non-selective models. These results provide new theoretical and methodological lines of inquiry for the study of basic functions involved in behavioural control and related disorders.

  15. Plausible combinations: An improved method to evaluate the covariate structure of Cormack-Jolly-Seber mark-recapture models

    Science.gov (United States)

    Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.

    2013-01-01

    Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
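
    The combinatorial pressure is easy to see in code: candidate models are pairs of a survival (phi) sub-model and a recapture (p) sub-model, each built from covariate subsets, so the set multiplies. The sketch below only enumerates the space a search strategy such as the authors' must traverse; CJS likelihood fitting and ranking are not shown:

    ```python
    from itertools import combinations

    def candidate_models(covariates, max_terms=2):
        """All (phi, p) sub-model pairs with up to max_terms covariates each."""
        subsets = [()]
        for k in range(1, max_terms + 1):
            subsets += list(combinations(covariates, k))
        return [(phi, p) for phi in subsets for p in subsets]

    models = candidate_models(["sex", "age", "ice", "year"])
    print(len(models))   # 11 x 11 = 121 models from only four covariates
    ```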

  16. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, and genetics and molecular biology.

  17. Emergent structured transition from variation to repetition in a biologically-plausible model of learning in basal ganglia.

    Directory of Open Access Journals (Sweden)

    Ashvin Shah

    2014-02-01

    Often, when animals encounter an unexpected sensory event, they transition from executing a variety of movements to repeating the movement(s) that may have caused the event. According to a recent theory of action discovery (Redgrave and Gurney, 2006), repetition allows the animal to represent those movements, and the outcome, as an action for later recruitment. The transition from variation to repetition often follows a non-random, structured pattern. While the structure of the pattern can be explained by sophisticated cognitive mechanisms, simpler mechanisms based on dopaminergic modulation of basal ganglia (BG) activity are thought to underlie action discovery (Redgrave and Gurney, 2006). In this paper we ask the question: can simple BG-mediated mechanisms account for a structured transition from variation to repetition, or are more sophisticated cognitive mechanisms always necessary? To address this question, we present a computational model of BG-mediated biasing of behaviour. In our model, unlike most other models of BG function, the BG biases behaviour through modulation of cortical response to excitation; many possible movements are represented by the cortical area; and excitation to the cortical area is topographically organized. We subject the model to simple reaching tasks, inspired by behavioural studies, in which a location to which to reach must be selected. Locations within a target area elicit a reinforcement signal. A structured transition from variation to repetition emerges from simple BG-mediated biasing of cortical response to excitation. We show how the structured pattern influences behaviour in simple and complicated tasks. We also present analyses that describe the structured transition from variation to repetition due to BG-mediated biasing and from the biasing that would be expected from a type of cognitive biasing, allowing us to compare behaviour resulting from these types of biasing and make connections with future behavioural…

  18. Modeling Trusted Computing

    Institute of Scientific and Technical Information of China (English)

    CHEN Shuyi; WEN Yingyou; ZHAO Hong

    2006-01-01

    In this paper, a formal approach based on predicate logic is proposed for representing and reasoning about trusted computing models. Predicates are defined to represent the characteristics of the objects and the relationships among these objects in a trusted system, according to trusted computing specifications. Inference rules for trust relations are also given. With the proposed semantics, some trusted computing models are formalized and verified, which shows that predicate calculus provides a general and effective method for modeling and reasoning about trusted computing systems.

  19. Is the Framework of Cohn's 'Tritope Model' for How T Cell Receptors Recognize Peptide/Self-MHC Complexes and Allo-MHC Plausible?

    Science.gov (United States)

    Bretscher, Peter A

    2016-05-01

    Cohn has developed the Tritope Model to describe how distinct domains of the T cell receptor (TcR) recognize peptide/self-MHC complexes and allo-MHC. Over the years he has employed this model as a framework for considering how the TcR might mediate various signals [1-5]. In a recent publication [5], Cohn employs the Tritope Model to propose a detailed mechanism for the T cell receptor's involvement in positive thymic selection. During a review of this proposal, I became uneasy about the plausibility of the underlying framework of the Tritope Model. I outline here the evolutionary considerations that make me question this framework. I also suggest that the proposed framework underlying the Tritope Model makes strong predictions whose validity can most probably be assessed by considering observations reported in the literature.

  20. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  1. Fourier power, subjective distance and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Directory of Open Access Journals (Sweden)

    Mark Daniel Lescroart

    2015-11-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1,386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.
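
    Per voxel, voxel-wise modeling reduces to regularized regression plus held-out prediction. A minimal numpy sketch (a fixed ridge penalty is assumed; in practice it is cross-validated):

    ```python
    import numpy as np

    def fit_and_score(X_train, Y_train, X_test, Y_test, alpha=10.0):
        """Ridge-regress stimulus features onto every voxel at once.

        X: (time points x features), e.g. Fourier power, subjective distance,
        or object-category features; Y: (time points x voxels) BOLD responses.
        Returns weights and per-voxel variance explained on withheld data.
        """
        n_feat = X_train.shape[1]
        W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                            X_train.T @ Y_train)
        resid = Y_test - X_test @ W
        r2 = 1.0 - resid.var(axis=0) / Y_test.var(axis=0)
        return W, r2
    ```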

  2. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3), organized by PSG College of Technology, Coimbatore, India during December 19-21, 2013. The material in the book includes theory and applications for the design, analysis, and modeling of computational intelligence and security. The book will be useful to students, researchers, professionals, and academicians in understanding current research trends and findings and the future scope of research in computational intelligence, cyber security, and computational models.

  3. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crash-safety research and design and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently crash simulations are mainly performed using models based on crash dummies. However, crash dummies differ …

  5. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo eLee

    2013-12-01

    Full Text Available We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
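
    As a rough illustration of the temporal-modeling step, the sketch below scores a coded cue sequence under two hand-set discrete hidden Markov models, one per trust level, and classifies by likelihood. The states, cue codes, and probability tables are invented; they are not the learned models from this work.

    ```python
    import numpy as np

    def forward_loglik(obs, pi, A, B):
        """Log-likelihood of a discrete observation sequence under an HMM,
        computed with the forward algorithm and per-step normalization."""
        alpha = pi * B[:, obs[0]]
        ll = np.log(alpha.sum()); alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            ll += np.log(alpha.sum()); alpha /= alpha.sum()
        return ll

    # Two illustrative 2-state HMMs over 3 coded cues (e.g. face touch,
    # leaning back, crossing arms); all parameters are made up.
    pi = np.array([0.6, 0.4])
    A_low  = np.array([[0.8, 0.2], [0.3, 0.7]])   # "low trust" transitions
    A_high = np.array([[0.5, 0.5], [0.5, 0.5]])   # "high trust" transitions
    B_low  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
    B_high = np.array([[0.3, 0.4, 0.3], [0.3, 0.3, 0.4]])

    seq = [0, 0, 2, 1, 2, 2]                      # one partner's cue sequence
    label = "low" if forward_loglik(seq, pi, A_low, B_low) > \
                     forward_loglik(seq, pi, A_high, B_high) else "high"
    print("predicted trust level:", label)
    ```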

  6. Computationally modeling interpersonal trust.

    Science.gov (United States)

    Lee, Jin Joo; Knox, W Bradley; Wormwood, Jolie B; Breazeal, Cynthia; Desteno, David

    2013-01-01

    We present a computational model capable of predicting-above human accuracy-the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.

  7. A model of cardiovascular disease giving a plausible mechanism for the effect of fractionated low-dose ionizing radiation exposure.

    Directory of Open Access Journals (Sweden)

    Mark P Little

    2009-10-01

    Full Text Available Atherosclerosis is the main cause of coronary heart disease and stroke, the two major causes of death in developed society. There is emerging evidence of excess risk of cardiovascular disease at low radiation doses in various occupationally exposed groups receiving small daily radiation doses. Assuming that they are causal, the mechanisms for the effects of chronic fractionated radiation exposure on cardiovascular disease are unclear. We outline a spatial reaction-diffusion model for atherosclerosis and perform stability analysis, based wherever possible on human data. We show that a predicted consequence of multiple small radiation doses is to cause the mean chemo-attractant (MCP-1) concentration to increase linearly with cumulative dose. The main driver for the increase in MCP-1 is monocyte death, and the consequent reduction in MCP-1 degradation. The radiation-induced risks predicted by the model are quantitatively consistent with those observed in a number of occupationally exposed groups. The changes in equilibrium MCP-1 concentrations with low-density lipoprotein cholesterol concentration are also consistent with experimental and epidemiologic data. The proposed mechanism would be experimentally testable. If true, it also has substantive implications for radiological protection, which at present does not take cardiovascular disease into account. The Japanese A-bomb survivor data imply that cardiovascular disease and cancer mortality contribute similarly to radiogenic risk. The major uncertainty in assessing the low-dose risk of cardiovascular disease is the shape of the dose-response relationship, which is unclear in the Japanese data. The analysis of the present paper suggests that linear extrapolation would be appropriate for this endpoint.
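
    A minimal numerical reading of the proposed mechanism, under toy parameters that are assumptions rather than the paper's fitted values: radiation-induced monocyte death shrinks the population that degrades MCP-1, so the steady state of a simple production-degradation balance rises approximately linearly with cumulative dose.

    ```python
    import numpy as np

    # Toy parameters (illustrative assumptions, not values from the paper).
    s, k, m0, a = 1.0, 0.5, 1.0, 0.02   # MCP-1 source, degradation rate per
                                        # monocyte, baseline monocytes,
                                        # fractional monocyte kill per unit dose

    doses = np.linspace(0, 5, 6)        # cumulative dose (arbitrary units)
    monocytes = m0 * (1 - a * doses)    # monocyte death shrinks the degraders
    eq_mcp1 = s / (k * monocytes)       # steady state of dC/dt = s - k*m*C

    for D, C in zip(doses, eq_mcp1):
        print(f"dose {D:.1f}: equilibrium MCP-1 {C:.4f}")
    # For a*D << 1, C ~ (s/(k*m0)) * (1 + a*D): roughly linear in dose.
    ```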

  8. Parameters, Predictions, and Evidence in Computational Modeling: A Statistical View Informed by ACT-R

    Science.gov (United States)

    Weaver, Rhiannon

    2008-01-01

    Model validation in computational cognitive psychology often relies on methods drawn from the testing of theories in experimental physics. However, applications of these methods to computational models in typical cognitive experiments can hide multiple, plausible sources of variation arising from human participants and from stochastic cognitive…

  9. Biologically Plausible, Human-scale Knowledge Representation

    Science.gov (United States)

    Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris

    2016-01-01

    Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), "mesh" binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990). Recent theoretical work has suggested that…

  10. From provocative narrative scenarios to quantitative biophysical model results: Simulating plausible futures to 2070 in an urbanizing agricultural watershed in Wisconsin, USA

    Science.gov (United States)

    Booth, E.; Chen, X.; Motew, M.; Qiu, J.; Zipper, S. C.; Carpenter, S. R.; Kucharik, C. J.; Steven, L. I.

    2015-12-01

    Scenario analysis is a powerful tool for envisioning future social-ecological change and its consequences for human well-being. Scenarios that integrate qualitative storylines and quantitative biophysical models can create a vivid picture of these potential futures, but the integration process is not straightforward. We present, using the Yahara Watershed in southern Wisconsin (USA) as a case study, a method for developing quantitative inputs (climate, land use/cover, and land management) to drive a biophysical modeling suite, based on four provocative and contrasting narrative scenarios that describe plausible futures of the watershed to 2070. The modeling suite consists of an agroecosystem model (AgroIBIS-VSF), a hydrologic routing model (THMB), and an empirical lake water quality model, and it estimates several biophysical indicators to evaluate the watershed system under each scenario. These indicators include water supply, lake flooding, agricultural production, and lake water quality. Climate (daily precipitation and air temperature) for each scenario was determined using statistics from 210 different downscaled future climate projections for two 20-year time periods (2046-2065 and 2081-2100) and modified using a stochastic weather generator to allow flexibility for matching specific climate events within the scenario narratives. Land use/cover for each scenario was determined first by quantifying changes in areal extent every decade for 15 categories at the watershed scale, to be consistent with the storyline events and theme. Next, these changes were spatially distributed using a rule-based framework based on land suitability metrics that determine transition probabilities, as sketched below. Finally, agricultural inputs, including manure and fertilizer application rates, were determined for each scenario based on the prevalence of livestock, water quality regulations, and technological innovations. Each scenario is compared using model inputs (maps and time-series of land use/cover and …
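
    The rule-based transition step can be caricatured as follows: storyline-driven decadal demand fixes how many cells convert, and suitability-derived probabilities pick which ones. Cell counts, categories, and suitability values below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells = 10

    land = np.array(["cropland"] * n_cells)   # current land cover per cell
    suitability = rng.random(n_cells)         # e.g. slope/soil-based urban suitability
    target_urban_gain = 3                     # decadal areal change from the storyline

    # Convert suitability to transition probabilities and draw the required
    # number of conversions, favoring the most suitable cells.
    p = suitability / suitability.sum()
    converted = rng.choice(n_cells, size=target_urban_gain, replace=False, p=p)
    land[converted] = "urban"
    print(land)
    ```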

  11. Understanding Student Computational Thinking with Computational Modeling

    CERN Document Server

    Aiken, John M; Douglas, Scott S; Burk, John B; Scanlon, Erin M; Thoms, Brian D; Schatz, Michael F

    2012-01-01

    Recently, the National Research Council's framework for next generation science standards highlighted "computational thinking" as one of its "fundamental practices". Students taking a physics course that employed the Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, written essay, and a series of think-aloud interviews, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than obs...
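
    The kind of model the students built (a ball stepped forward in time, with force changing the motion) fits in a few lines. This dependency-free Python sketch mirrors the VPython exercise under assumed initial conditions, with gravity only and no drag.

    ```python
    # Euler integration of a baseball in motion: the force determines the
    # *change* in velocity (Newton's second law), velocity changes position.
    dt, g = 0.01, 9.8
    x, y = 0.0, 1.0          # initial position (m)
    vx, vy = 30.0, 15.0      # initial velocity (m/s)

    while y > 0:             # step the motion model until the ball lands
        x += vx * dt
        y += vy * dt
        vy -= g * dt         # gravity changes the vertical velocity

    print(f"range = {x:.1f} m")
    ```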

  12. Computationally modeling interpersonal trust

    OpenAIRE

    Jin Joo eLee; Brad eKnox; Jolie eBaumann; Cynthia eBreazeal; David eDeSteno

    2013-01-01

    We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our pr...

  13. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non-data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, at national computer centres (Regional Centres), or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu…
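
    A quick unit check of the quoted analysis rates (plain arithmetic, assuming a 30-day month):

    ```python
    rate_mb_s = 4.2                           # group analysis output rate quoted above
    seconds_per_month = 30 * 24 * 3600
    tb_per_month = rate_mb_s * seconds_per_month / 1e6   # MB -> TB
    print(f"{tb_per_month:.1f} TB/month")     # ~10.9, matching the quoted ~11 TB
    ```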

  14. The Antares computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)

    2013-10-11

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  15. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  16. Plasticity modeling & computation

    CERN Document Server

    Borja, Ronaldo I

    2013-01-01

    There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.

  17. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.
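
    For the circuit-model ingredient, dual-rail encoding has a compact linear-optics picture: a single photon shared across two modes, with a 50:50 beamsplitter acting as a one-qubit gate on the single-photon subspace. The sketch below uses one common phase convention; conventions vary across implementations, so treat it as illustrative.

    ```python
    import numpy as np

    # Dual-rail encoding: one photon across two modes, |0>_L = |10>, |1>_L = |01>.
    # A 50:50 beamsplitter acts on this subspace as a 2x2 unitary (one convention):
    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)

    ket0 = np.array([1, 0])          # logical |0>: photon in the first rail
    out = BS @ ket0
    print(np.abs(out) ** 2)          # [0.5, 0.5]: equal superposition of rails
    ```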

  18. Computational modeling of concrete flow

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic

    2007-01-01

    This paper provides a general overview of the present status regarding computational modeling of the flow of fresh concrete. The computational modeling techniques that can be found in the literature may be divided into three main families: single fluid simulations, numerical modeling of discrete...

  19. A Novel Forensic Computing Model

    Institute of Scientific and Technical Information of China (English)

    XU Yunfeng; LU Yansheng

    2006-01-01

    According to the requirements of computer forensics and network forensics, a novel forensic computing model is presented, which exploits the XML/OEM/RM data model, data fusion technology, a forensic knowledge base, the inference mechanism of an expert system, and an evidence mining engine. This model has the advantages of flexibility and openness, so it can be widely used in evidence mining.

  20. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    … development and application. The proposed work is part of a project for the development of methods and tools that allow systematic generation, analysis and solution of models for various objectives. It uses a computer-aided modeling framework that is based on a modeling methodology which combines … In this contribution, the concept of template-based modeling is presented, and its application is highlighted for the specific case of catalytic membrane fixed-bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse … are generated through the template in ICAS-MoT and translated into a model object. Once in ICAS-MoT, the model is numerically analyzed, solved and identified. A computer-aided modeling framework integrating systematic model derivation and development tools has been developed. It includes features for model …

  1. Computational modeling in biomechanics

    CERN Document Server

    Mofrad, Mohammad

    2010-01-01

    This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamics, and scale-coupling methods.

  2. Computational modelling flow and transport

    NARCIS (Netherlands)

    Stelling, G.S.; Booij, N.

    1999-01-01

    Lecture notes CT wa4340. Derivation of equations using balance principles; numerical treatment of ordinary differential equations; time-dependent partial differential equations; the structure of a computer model: DUFLO; usage of numerical models.

  3. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  4. Computation models of discourse

    Energy Technology Data Exchange (ETDEWEB)

    Brady, M.; Berwick, R.C.

    1983-01-01

    This book presents papers on artificial intelligence and natural language. Topics considered include recognizing intentions from natural language utterances, cooperative responses from a portable natural language database query system, natural language generation as a computational problem, focusing in the comprehension of definite anaphora, and factors in forming discourse-dependent descriptions.

  5. Plausible values: how to deal with their limitations.

    Science.gov (United States)

    Monseur, Christian; Adams, Raymond

    2009-01-01

    Rasch modeling and plausible values methodology were used to scale and report the results of the Organization for Economic Cooperation and Development's Programme for International Student Assessment (PISA). This article describes the scaling approach adopted in PISA. In particular, it focuses on the use of plausible values, a multiple imputation approach that is now commonly used in large-scale assessment. As with all imputation models, the plausible values must be generated using models that are consistent with those used in subsequent data analysis. In the case of PISA, the plausible value generation assumes a flat linear regression with all student background variables collected through the international student questionnaire included as regressors. Further, like most linear models, homoscedasticity and normality of the conditional variance are assumed. This article explores some of the implications of this approach. First, we discuss the conditions under which secondary analyses on variables not included in the model for generating the plausible values might be biased. Second, as plausible values were not drawn from a multi-level model, the article explores the adequacy of the PISA procedures for estimating variance components when the data have a hierarchical structure.
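
    The generation step can be illustrated in miniature: plausible values are draws from each student's posterior ability distribution, combining an IRT likelihood with the latent regression on background variables. The grid-based sketch below assumes a Rasch likelihood, a single background regressor, and made-up item difficulties; the operational PISA machinery is far more elaborate.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def draw_plausible_values(responses, items_b, x_bg, beta, sigma, n_pv=5):
        """Draw plausible values for one student: sample the posterior of
        ability under a Rasch likelihood and a normal latent-regression prior
        (mean beta * x_bg), evaluated on a grid."""
        theta = np.linspace(-4, 4, 401)
        p = 1 / (1 + np.exp(-(theta[:, None] - items_b[None, :])))  # Rasch IRFs
        loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(1)
        logprior = -0.5 * ((theta - beta * x_bg) / sigma) ** 2
        post = np.exp(loglik + logprior - (loglik + logprior).max())
        post /= post.sum()
        return rng.choice(theta, size=n_pv, p=post)

    b = np.array([-1.0, 0.0, 0.5, 1.5])    # item difficulties (assumed)
    resp = np.array([1, 1, 0, 0])          # one student's scored responses
    print(draw_plausible_values(resp, b, x_bg=0.3, beta=0.8, sigma=1.0))
    ```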

  6. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    … with them. As the required models may be complex and require multiple time and/or length scales, their development and application for product-process design is not trivial. Therefore, a systematic modeling framework can contribute by significantly reducing the time and resources needed for model development and application. The proposed work is part of a project for the development of methods and tools that allow systematic generation, analysis and solution of models for various objectives. It uses a computer-aided modeling framework that is based on a modeling methodology which combines … In this contribution, the concept of template-based modeling is presented, and its application is highlighted for the specific case of catalytic membrane fixed-bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse …

  7. toolkit computational mesh conceptual model.

    Energy Technology Data Exchange (ETDEWEB)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  8. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, clinical image segmentation, and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems, and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.

  9. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  10. Integrated Computational Model Development

    Science.gov (United States)

    2014-03-01

    … 68.5%, 9.6% and 21.9%, respectively. The alloy density and Vickers microhardness were ρ = 8.23 ± 0.01 g/cm³ and Hv = 5288 ± 1 MPa. … and 3-D. Techniques to mechanically test materials at smaller scales were developed to better inform the deformation models. Also, methods were … An in situ microscale tension testing technique was adapted to enable microscale fatigue testing on tensile dog-bone specimens. Microscale tensile fatigue …

  11. Component Breakout Computer Model

    Science.gov (United States)

    1987-04-29

    … "Weapon Systems: A Policy Analysis." The Rand Graduate Institute, November 1983. Boger, D., "Statistical Models for Estimating Overhead Costs," M.S. …

  12. Efficient Computational Model of Hysteresis

    Science.gov (United States)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
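
    The structure described (a parallel bank of backlash elements, the zeroth with zero deadband width for the linear component, summed through per-element output gains) translates directly into code. The widths and gains below are illustrative placeholders, not values computed from displacement-versus-voltage data.

    ```python
    import numpy as np

    def backlash(u, width, state):
        """One play (backlash) operator: the output follows the input only once
        the input leaves a deadband of the given width around the last output."""
        return np.clip(state, u - width, u + width)

    def hysteresis(inputs, widths, gains):
        """Electrically parallel combination of backlash elements, summed
        through per-element gains, as in the model described above."""
        states = np.zeros_like(widths)
        out = []
        for u in inputs:
            states = backlash(u, widths, states)
            out.append(np.dot(gains, states))
        return np.array(out)

    widths = np.array([0.0, 0.5, 1.0, 1.5])   # zeroth element: linear component
    gains  = np.array([0.5, 0.2, 0.2, 0.1])
    u = np.concatenate([np.linspace(0, 3, 30), np.linspace(3, -3, 60)])
    y = hysteresis(u, widths, gains)
    print(y[:5].round(3), "...", y[-5:].round(3))   # up-sweep vs down-sweep differ
    ```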

  13. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of the numerical modeling of dynamic problems are summarized in this article. These problems are characteristic of various areas of human activity, in particular problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas-cleaning equipment, and modeling of biogas formation processes.

  14. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunity to investigate the complexities of real-world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by constraining parameters to a bounded environment, allowing for controllable experimentation that is not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines, and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window on the novel endeavours of research communities to present their work, highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader…

  15. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    Science.gov (United States)

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
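
    A simplified node-perturbation variant of such a reward-gated rule can be sketched directly: exploratory noise perturbs each unit, an eligibility trace pairs every perturbation with the presynaptic activity that preceded it, and a single delayed reward at the end of the trial gates the weight change against a running baseline. Network size, rates, and the one-unit task below are assumptions, not the paper's exact rule or tasks.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, T, trials, lr = 60, 50, 500, 3e-4
    W = rng.standard_normal((N, N)) * (1.5 / np.sqrt(N))  # gain in chaotic regime
    x0 = rng.standard_normal(N)                           # fixed initial state
    target, R_bar = 0.8, 0.0                              # goal output; reward baseline

    for trial in range(trials):
        x, trace = x0.copy(), np.zeros((N, N))
        for t in range(T):
            noise = 0.1 * rng.standard_normal(N)          # exploratory perturbation
            x_pre = x
            x = np.tanh(W @ x_pre) + noise
            trace += np.outer(noise, x_pre)               # eligibility: noise x presyn.
        R = -(x[0] - target) ** 2                         # delayed, phasic reward
        W += lr * (R - R_bar) * trace                     # reward-modulated update
        R_bar += 0.05 * (R - R_bar)                       # running reward baseline

    x = x0.copy()
    for t in range(T):                                    # noise-free evaluation run
        x = np.tanh(W @ x)
    print(f"output {x[0]:+.3f}, target {target:+.3f}")
    ```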

  16. Computational models of adult neurogenesis

    Science.gov (United States)

    Cecchi, Guillermo A.; Magnasco, Marcelo O.

    2005-10-01

    Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models where new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas like the olfactory bulb and the dentate gyrus.

  17. Computational Modeling of Simulation Tests.

    Science.gov (United States)

    1980-06-01

    … Mexico, March 1979. 14. Kinney, G. F., …, McMillan, p. 57, 1962. 15. Courant and Friedrichs, … AD 79 275: Computational Modeling of Simulation Tests (U), G. Leigh, W. Chown, B. Harrison, Eric H. Wang Civil Engineering Research Facility, University of New Mexico, June 1980.

  18. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering, and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game-changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: developing use-case studies for science workflows; creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernible requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements, and our characterizations, as well as our interview process, what we learned, and how we plan to improve our materials after using them in the first round of interviews in the Earth Science modeling community. We will describe our plans for expanding this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering, and flight at NASA.

  19. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    … into the other. It also indicates a generative creation that itself points to important issues of ontology, with methodological implications for the design of computing. In this article these implications will be conceptualised as prosopopoietic modeling on the basis of Bernward Joerges’ introduction of the classical rhetorical term ‘prosopopoeia’ into the debate on large technological systems. First, the paper introduces the paradoxical distinction/complicity by debating Gilbert Simondon’s notion of a ‘margin of indeterminacy’ vis-à-vis computing. Second, it debates the idea of prosopopoietic modeling … pointing to a principal role of the paradoxical distinction/complicity within the computational heritage in three cases: a. prosopopoietic aspects of John von Neumann’s First Draft of a Report on the EDVAC from 1945; b. Herbert Simon’s notion of simulation in The Sciences of the Artificial from the 1970s; c. …

  20. Computer Profiling Based Model for Investigation

    Directory of Open Access Journals (Sweden)

    Neeraj Choudhary

    2011-10-01

    Full Text Available Computer profiling is used for computer forensic analysis; this paper proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a computer system. The computer profiling object model can be implemented so as to support automated analysis, providing an investigator with the information needed to decide whether manual analysis is required.
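
    Read as code, an information model of this kind is just linked objects whose attributes support simple judgments. The classes and the toy inference rule below are invented illustrations, not the paper's actual schema.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class UserAccount:
        name: str
        last_login: str

    @dataclass
    class Application:
        name: str
        used_by: list[str] = field(default_factory=list)  # links to account names

    @dataclass
    class ComputerProfile:
        hostname: str
        users: list[UserAccount] = field(default_factory=list)
        applications: list[Application] = field(default_factory=list)

        def likely_usage(self) -> str:
            """Trivial judgment rule a reasoning engine might start from."""
            apps = {a.name.lower() for a in self.applications}
            return "accounting workstation" if "quickbooks" in apps else "general use"

    pc = ComputerProfile("WS-01",
                         users=[UserAccount("alice", "2011-09-30")],
                         applications=[Application("QuickBooks", used_by=["alice"])])
    print(pc.likely_usage())
    ```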

  1. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  2. FORENSIC COMPUTING MODELS: TECHNICAL OVERVIEW

    Directory of Open Access Journals (Sweden)

    Gulshan Shrivastava

    2012-05-01

    Full Text Available In this paper, we introduce a digital forensics technique for reconstructing events or evidence after a crime has been committed using a digital device. The paper draws a clear distinction between Computer Forensics and Digital Forensics and gives a brief description of the classification of Digital Forensics. It also describes how the emergence of various digital forensic models helps digital forensic practitioners and examiners carry out digital forensics. Further, the merits and demerits of the major models are discussed, along with a review of each.

  3. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    … that have spread vertiginously since Mark Weiser coined the term ‘pervasive’, e.g., digitalised sensing, monitoring, effectuation, intelligence, and display. Whereas Weiser’s original perspective may seem fulfilled, since computing is everywhere and, in his and Seely Brown’s (1997) terms, ‘invisible’ … into the other. It also indicates a generative creation that itself points to important issues of ontology, with methodological implications for the design of computing. In this article these implications will be conceptualised as prosopopoietic modeling … pointing to a principal role of the paradoxical distinction/complicity within the computational heritage in three cases: a. prosopopoietic aspects of John von Neumann’s First Draft of a Report on the EDVAC from 1945; b. Herbert Simon’s notion of simulation in The Sciences of the Artificial from the 1970s; c. …

  4. Computational model for vocal tract dynamics in a suboscine bird

    Science.gov (United States)

    Assaneo, M. F.; Trevisan, M. A.

    2010-09-01

    In a recent work, active use of the vocal tract has been reported for singing oscines. The reconfiguration of the vocal tract during song serves to match its resonances to the syringeal fundamental frequency, demonstrating a precise coordination of the two main pieces of the avian vocal system for songbirds characterized by tonal songs. In this work we investigated the Great Kiskadee (Pitangus sulfuratus), a suboscine bird whose calls display a rich harmonic content. Using a recently developed mathematical model for the syrinx and a mobile vocal tract, we set up a computational model that provides a plausible reconstruction of the vocal tract movement using a few spectral features taken from the utterances. Moreover, synthetic calls were generated using the articulated vocal tract that accounts for all the acoustical features observed experimentally.

  5. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principal makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  6. Cosmic logic: a computational model

    Science.gov (United States)

    Vanchurin, Vitaly

    2016-02-01

    We initiate a formal study of logical inferences in context of the measure problem in cosmology or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal that halts in finite time and immortal that runs forever. In context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  7. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
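
    The structural result here (a binary-output maximum noise entropy model constrained by first- and second-order moments is a logistic function of a quadratic form in the input) can be imitated on synthetic data. Everything below (stimuli, weights, learning rate) is invented for illustration; real analyses would use recorded stimuli and spikes.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, d = 5000, 2
    X = rng.standard_normal((n, d))
    # First- and second-order terms: the constrained statistics of the model.
    quad = np.column_stack([X, X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    w_true = np.array([1.0, -0.5, 0.8, -0.3, 0.4])
    y = (rng.random(n) < 1 / (1 + np.exp(-(quad @ w_true - 1.0)))).astype(float)

    # Fit the logistic model by plain gradient ascent on the log-likelihood.
    w, b = np.zeros(quad.shape[1]), 0.0
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(quad @ w + b)))
        w += 0.1 * quad.T @ (y - p) / n
        b += 0.1 * (y - p).mean()
    print("recovered weights:", w.round(2), "  true:", w_true)
    ```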

  8. What can we learn from Plausible Values?

    Science.gov (United States)

    Marsman, Maarten; Maris, Gunter; Bechger, Timo; Glas, Cees

    2016-06-01

    In this paper, we show that the marginal distribution of plausible values is a consistent estimator of the true latent variable distribution, and, furthermore, that convergence is monotone in an embedding in which the number of items tends to infinity. We use this result to clarify some of the misconceptions that exist about plausible values, and also show how they can be used in the analyses of educational surveys.

  9. A Reexamination of the Factor Structure of the Center for Epidemiologic Studies Depression Scale: Is a One-Factor Model Plausible?

    Science.gov (United States)

    Edwards, Michael C.; Cheavens, Jennifer S.; Heiy, Jane E.; Cukrowicz, Kelly C.

    2010-01-01

    The Center for Epidemiologic Studies Depression Scale (CES-D) is one of the most widely used measures of depressive symptoms in research today. The original psychometric work in support of the CES-D (Radloff, 1977) described a 4-factor model underlying the 20 items on the scale. Despite a long history of evidence supporting this structure,…

  10. A mathematical and biological plausible model of decision-execution regulation in "Go/No-Go" tasks: Focusing on the fronto-striatal-thalamic pathway.

    Science.gov (United States)

    Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza

    2017-07-01

    Discovering the factors that influence the speed and accuracy of responses in tasks such as "Go/No-Go" is one of the issues raised in neurocognitive studies. Mathematical models are tools for identifying and studying the decision-making procedure from different aspects. In this paper, a mathematical model is presented to show how several factors can alter the output of the decision-making procedure before execution in a "Go/No-Go" task. The model's dynamics have two stable fixed points, corresponding to the "Press" and "Not-press" responses. The model, which focuses on the fronto-striatal-thalamic direct and indirect pathways, receives planned decisions from the frontal cortex and sends a regulated output to the motor cortex for execution. State-space analysis showed that several factors can affect the regulation procedure, such as the input strength, the noise level, the initial condition, and the levels of the neurotransmitters involved. Some probable analytical reasons that may lead to changes in decision-execution regulation are suggested as well. Bifurcation-diagram analysis demonstrates that an optimal interaction between these factors can compensate for the weaknesses of others. It is predicted that abnormalities of response control in brain disorders such as attention deficit hyperactivity disorder may be addressed by treatment techniques that target the regulation of this interaction. The model also suggests a possible explanation for why so many studies insist on the important role of dopamine in some brain disorders. Copyright © 2017 Elsevier Ltd. All rights reserved.
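
    The two-fixed-point idea can be miniaturized as a noisy double-well system: the planned decision enters as a drive that tilts the well, while noise and the initial condition determine which attractor ("Press" or "Not-press") captures the state. All parameters below are illustrative, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def trial(drive, noise_sd=0.6, x0=0.0, dt=0.01, steps=1500):
        """Euler-Maruyama simulation of dx = (x - x^3 + drive) dt + noise dW:
        stable fixed points near +1 / -1 stand in for Press / Not-press."""
        x = x0
        for _ in range(steps):
            x += dt * (x - x ** 3 + drive) \
                 + noise_sd * np.sqrt(dt) * rng.standard_normal()
        return "Press" if x > 0 else "Not-press"

    for drive in (-0.2, 0.0, 0.2):     # strength/sign of the planned decision
        outs = [trial(drive) for _ in range(200)]
        print(f"drive {drive:+.1f}: P(Press) = {outs.count('Press') / 200:.2f}")
    ```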

  11. Classification using sparse representations: a biologically plausible approach.

    Science.gov (United States)

    Spratling, M W

    2014-02-01

    Representing signals as linear combinations of basis vectors sparsely selected from an overcomplete dictionary has proven to be advantageous for many applications in pattern recognition, machine learning, signal processing, and computer vision. While this approach was originally inspired by insights into cortical information processing, biologically plausible approaches have been limited to exploring the functionality of early sensory processing in the brain, while more practical applications have employed non-biologically plausible sparse coding algorithms. Here, a biologically plausible algorithm is proposed that can be applied to practical problems. This algorithm is evaluated using standard benchmark tasks in the domain of pattern classification, and its performance is compared to a wide range of alternative algorithms that are widely used in signal and image processing. The results show that for the classification tasks performed here, the proposed method is competitive with the best of the alternative algorithms that have been evaluated. This demonstrates that classification using sparse representations can be performed in a neurally plausible manner, and hence, that this mechanism of classification might be exploited by the brain.
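
    The classification scheme itself (represent a test signal sparsely over a dictionary of labeled training vectors, then assign the class whose atoms best reconstruct it) can be illustrated with a standard ISTA solver. The paper's contribution is a neurally plausible algorithm in place of such a solver, so this is only a functional sketch on synthetic data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def ista(D, x, lam=0.05, iters=300):
        """Standard ISTA for min ||x - D a||^2 / 2 + lam * ||a||_1."""
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(iters):
            g = a + D.T @ (x - D @ a) / L                            # gradient step
            a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # soft threshold
        return a

    # Dictionary columns are labeled training vectors from two classes.
    D = rng.standard_normal((20, 30))
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
    labels = np.array([0] * 15 + [1] * 15)

    x = D[:, 18] + 0.05 * rng.standard_normal(20)   # noisy sample of a class-1 atom
    a = ista(D, x)
    errs = [np.linalg.norm(x - D[:, labels == c] @ a[labels == c]) for c in (0, 1)]
    print("predicted class:", int(np.argmin(errs)))  # class-wise residual rule
    ```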

  12. Cosmic Logic: a Computational Model

    CERN Document Server

    Vanchurin, Vitaly

    2015-01-01

    We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal that halts in finite time and immortal that runs forever. In context of eternal inflation this result implies…

  13. Probabilistic reasoning in intelligent systems networks of plausible inference

    CERN Document Server

    Pearl, Judea

    1988-01-01

    Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty--and offers techniques, based on belief networks, that provide …

  14. Complex Learning in Bio-plausible Memristive Networks

    OpenAIRE

    Deng, Lei; Li, Guoqi; Deng, Ning; Dong WANG; Zhang, Ziyang; He, Wei; Li, Huanglong; Pei, Jing; Shi, Luping

    2015-01-01

    The emerging memristor-based neuromorphic engineering promises an efficient computing paradigm. However, the lack of both internal dynamics in previous feedforward memristive networks and efficient learning algorithms in recurrent networks fundamentally limits the learning ability of existing systems. In this work, we propose a framework to support complex learning functions by introducing dedicated learning algorithms to a bio-plausible recurrent memristive network with internal dynamics …

  15. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  16. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of the main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology adoption theories such as Diffusion of Innovations, the Technology Acceptance Model, and the Unified Theory of Acceptance and Use of Technology. Further on, a research model for identification of Cloud Computing Adoption factors from a business model perspective is presented. The following business model building …

  17. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    … adoption theories, such as Diffusion of Innovations, the Technology Acceptance Model, and the Unified Theory of Acceptance and Use of Technology. Further on, a research model for identification of Cloud Computing Adoption factors from a business model perspective is presented. The following business model building … The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of the main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology …

  18. MODEL IDENTIFICATION AND COMPUTER ALGEBRA.

    Science.gov (United States)

    Bollen, Kenneth A; Bauldry, Shawn

    2010-10-07

    Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods.
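
    The flavor of the CAS approach can be shown on a toy measurement model: write the model-implied covariance equations symbolically and ask the CAS for explicit solutions. A unique solution indicates identification; a parameter-dependent family indicates under-identification. The two-indicator model below is an invented example using the sympy library, not one taken from the paper.

    ```python
    import sympy as sp

    # Assumed toy model: x1 = L + e1, x2 = b*L + e2, with Var(L) = phi,
    # Var(e1) = t1, Var(e2) = t2, and uncorrelated errors.
    b, phi, t1, t2, s11, s22, s12 = sp.symbols(
        'b phi t1 t2 s11 s22 s12', positive=True)

    eqs = [sp.Eq(s11, phi + t1),          # Var(x1)
           sp.Eq(s22, b**2 * phi + t2),   # Var(x2)
           sp.Eq(s12, b * phi)]           # Cov(x1, x2)

    # Three equations, four unknowns: the solutions come out in terms of b,
    # i.e. a one-parameter family, so the model is not identified.
    print(sp.solve(eqs, [phi, t1, t2], dict=True))
    ```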

  19. Los Alamos Center for Computer Security formal computer security model

    Energy Technology Data Exchange (ETDEWEB)

    Dreicer, J.S.; Hunteman, W.J.; Markin, J.T.

    1989-01-01

    This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The need to test and verify DOE computer security policy implementation first motivated this effort. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present formal mathematical models for computer security. The fundamental objective of computer security is to prevent the unauthorized and unaccountable access to a system. The inherent vulnerabilities of computer systems result in various threats from unauthorized access. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The model is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell and LaPadula abstract sets of objects and subjects. 6 refs.

  20. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India, during December 17-19, 2015. The book is enriched with innovations in broad areas of research such as computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing design, analysis and modeling of the aforementioned key areas.

  1. Computational modelling of SCC flow

    DEFF Research Database (Denmark)

    Geiker, Mette Rica; Thrane, Lars Nyholm; Szabo, Peter

    2005-01-01

    To benefit from the full potential of self-compacting concrete (SCC), prediction tools are needed for the form filling of SCC. Such tools should take into account the properties of the concrete, the shape and size of the structural element, the position of rebars, and the casting technique. Examples of computational models for the time-dependent flow behavior are given, and advantages and disadvantages of discrete particle and single-fluid models are briefly described.

  2. Computer modeling of piezoresistive gauges

    Energy Technology Data Exchange (ETDEWEB)

    Nutt, G. L.; Hallquist, J. O.

    1981-08-07

    A computer model of a piezoresistive gauge subject to shock loading is developed. The time-dependent two-dimensional response of the gauge is calculated. The stress and strain components of the gauge are determined assuming elastic-plastic material properties. The model is compared with experiment for four cases: an ytterbium foil gauge in a PMMA medium subjected to a 0.5 GPa plane shock wave, with the gauge presented to the shock with its flat surface both parallel and perpendicular to the front, and a similar comparison for a manganin foil subjected to a 2.7 GPa shock. The signals are also compared with a calibration equation derived with the gauge and medium properties accounted for, but with the assumption that the gauge is in stress equilibrium with the shocked medium.

  3. Towards the Epidemiological Modeling of Computer Viruses

    OpenAIRE

    Xiaofan Yang; Lu-Xing Yang

    2012-01-01

    Epidemic dynamics of computer viruses is an emerging discipline aiming to understand the way that computer viruses spread on networks. This paper is intended to establish a series of rational epidemic models of computer viruses. First, a close inspection of some common characteristics shared by all typical computer viruses clearly reveals the flaws of previous models. Then, a generic epidemic model of viruses, named the SLBS model, is proposed. Finally, diverse generalizations of ...

  4. Plausibility functions and exact frequentist inference

    CERN Document Server

    Martin, Ryan

    2012-01-01

    In the frequentist program, inferential methods with exact control on error rates are a primary focus. Methods based on asymptotic distribution theory may not be suitable in a particular problem, in which case, a numerical method is needed. This paper presents a general, Monte Carlo-driven framework for the construction of frequentist procedures based on plausibility functions. It is proved that the suitably defined plausibility function-based tests and confidence regions have desired frequentist properties. Moreover, in an important special case involving likelihood ratios, conditions are given such that the plausibility function behaves asymptotically like a consistent Bayesian posterior distribution. An extension of the proposed method is also given for the case where nuisance parameters are present. A number of examples are given which illustrate the method and demonstrate its strong performance compared to other popular existing methods.
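
    A minimal Monte Carlo sketch of a plausibility function, assuming a normal mean with known variance (an invented setup, not one of the paper's examples): the plausibility of a parameter value is the probability, under that value, of a relative likelihood no larger than the observed one.

        import numpy as np

        rng = np.random.default_rng(0)
        n, sigma = 25, 1.0
        x_obs = rng.normal(0.3, sigma, n)   # toy 'observed' sample

        def rel_lik(xbar, theta):
            # Relative likelihood L(theta)/L(theta_hat) for a normal mean
            # with known sigma; the MLE theta_hat is the sample mean.
            return np.exp(-n * (xbar - theta) ** 2 / (2 * sigma ** 2))

        def plausibility(theta, m=20000):
            # Fraction of datasets simulated under theta whose relative
            # likelihood is no larger than that of the observed data.
            r_obs = rel_lik(x_obs.mean(), theta)
            sim_means = rng.normal(theta, sigma, (m, n)).mean(axis=1)
            return np.mean(rel_lik(sim_means, theta) <= r_obs)

        # {theta : plausibility(theta) > 0.05} gives a 95% region.
        print(plausibility(0.3), plausibility(1.5))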

  5. A cognitively plausible model for grammar induction

    Directory of Open Access Journals (Sweden)

    Roni Katzir

    2015-01-01

    Full Text Available This paper aims to bring theoretical linguistics and cognition-general theories of learning into closer contact. I argue that linguists' notions of rich UGs are well-founded, but that cognition-general learning approaches are viable as well, and that the two can and should co-exist and support each other. Specifically, I use the observation that any theory of UG provides a learning criterion -- the total memory space used to store a grammar and its encoding of the input -- that supports learning according to the principle of Minimum Description Length. This mapping from UGs to learners maintains a minimal ontological commitment: the learner for a particular UG uses only what is already required to account for linguistic competence in adults. I suggest that such learners should be our null hypothesis regarding the child's learning mechanism, and that, furthermore, the mapping from theories of UG to learners provides a framework for comparing theories of UG.
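
    The memory-based learning criterion can be made concrete with a deliberately toy two-part Minimum Description Length computation (the 8-bits-per-symbol model cost is an arbitrary assumption): total cost is the bits to store the 'grammar' plus the bits to encode the input under it, so a richer UG pays a larger model term.

        import math
        from collections import Counter

        def description_length(text, alphabet):
            # Two-part MDL: bits to store a unigram 'grammar' over the
            # alphabet (crudely 8 bits per symbol), plus bits to encode
            # the input under it (-sum count * log2 p).
            counts = Counter(text)
            model_bits = 8 * len(alphabet)
            data_bits = -sum(c * math.log2(c / len(text))
                             for c in counts.values())
            return model_bits + data_bits

        text = 'abababababababab'
        print(description_length(text, 'ab'))        # lean UG: 32.0 bits
        print(description_length(text, 'abcdefgh'))  # richer UG: 80.0 bits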

  6. Towards the Epidemiological Modeling of Computer Viruses

    Directory of Open Access Journals (Sweden)

    Xiaofan Yang

    2012-01-01

    Full Text Available Epidemic dynamics of computer viruses is an emerging discipline aiming to understand the way that computer viruses spread on networks. This paper is intended to establish a series of rational epidemic models of computer viruses. First, a close inspection of some common characteristics shared by all typical computer viruses clearly reveals the flaws of previous models. Then, a generic epidemic model of viruses, named the SLBS model, is proposed. Finally, diverse generalizations of the SLBS model are suggested. We believe this work opens a door to the full understanding of how computer viruses prevail on the Internet.
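
    A minimal sketch of SLBS-style dynamics with invented rate constants (not the paper's): every infected computer is either latent (L) or breaking-out (B), and forward Euler integration settles at an endemic equilibrium.

        # S: susceptible, L: latent, B: breaking-out; S + L + B = 1.
        beta, alpha, g1, g2 = 0.8, 0.3, 0.1, 0.2   # illustrative rates

        def step(s, l, b, dt=0.01):
            ds = -beta * s * (l + b) + g1 * l + g2 * b
            dl = beta * s * (l + b) - (alpha + g1) * l
            db = alpha * l - g2 * b
            return s + ds * dt, l + dl * dt, b + db * dt

        s, l, b = 0.99, 0.01, 0.0
        for _ in range(50000):                      # integrate to t = 500
            s, l, b = step(s, l, b)
        print(round(s, 2), round(l, 2), round(b, 2))  # ~0.2, 0.32, 0.48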

  7. Quantum Computation Beyond the Circuit Model

    OpenAIRE

    Jordan, Stephen P.

    2008-01-01

    The quantum circuit model is the most widely used model of quantum computation. It provides both a framework for formulating quantum algorithms and an architecture for the physical construction of quantum computers. However, several other models of quantum computation exist which provide useful alternative frameworks for both discovering new quantum algorithms and devising new physical implementations of quantum computers. In this thesis, I first present necessary background material for a ge...

  8. Computational modeling of epithelial tissues.

    Science.gov (United States)

    Smallwood, Rod

    2009-01-01

    There is an extensive literature on the computational modeling of epithelial tissues at all levels, from subcellular to whole tissue. This review concentrates on behavior at the individual-cell to whole-tissue level, and particularly on organizational aspects, and provides an indication of where information from other areas, such as the modeling of angiogenesis, is relevant. The skin, and the lining of all of the body cavities (lung, gut, cervix, bladder, etc.) are epithelial tissues, which in a topological sense are the boundary between inside and outside the body. They are thin sheets of cells (usually of the order of 0.5 mm thick) without extracellular matrix, have a relatively simple structure, and contain few types of cells. They have important barrier, secretory and transport functions, which are essential for the maintenance of life, so homeostasis and wound healing are important aspects of the behavior of epithelial tissues. Carcinomas originate in epithelial tissues. There are essentially two approaches to modeling tissues: to start at the level of the tissue (i.e., a length scale of the order of 1 mm) and develop generalized equations for behavior (a continuum approach); or to start at the level of the cell (i.e., a length scale of the order of 10 µm) and develop tissue behavior as an emergent property of cellular behavior (an individual-based approach). As will be seen, these are not mutually exclusive approaches, and they come in a variety of flavors.

  9. Linguistics Computation, Automatic Model Generation, and Intensions

    CERN Document Server

    Nourani, C F

    1994-01-01

    Techniques are presented for defining models of computational linguistics theories. The methods of generalized diagrams that were developed by this author for modeling artificial intelligence planning and reasoning are shown to be applicable to models of computation of linguistics theories. It is shown that for extensional and intensional interpretations, models can be generated automatically which assign meaning to computations of linguistics theories for natural languages. Keywords: Computational Linguistics, Reasoning Models, G-diagrams For Models, Dynamic Model Implementation, Linguistics and Logics For Artificial Intelligence

  10. Model dynamics for quantum computing

    Science.gov (United States)

    Tabakin, Frank

    2017-08-01

    A model master equation suitable for quantum computing dynamics is presented. In an ideal quantum computer (QC), a system of qubits evolves in time unitarily and, by virtue of their entanglement, interferes quantum mechanically to solve otherwise intractable problems. In the real situation, a QC is subject to decoherence and attenuation effects due to interaction with an environment and with possible short-term random disturbances and gate deficiencies. The stability of a QC under such attacks is a key issue for the development of realistic devices. We assume that the influence of the environment can be incorporated by a master equation that includes unitary evolution with gates, supplemented by a Lindblad term. Lindblad operators of various types are explored; namely, steady, pulsed, gate friction, and measurement operators. In the master equation, we use the Lindblad term to describe short-time intrusions by random Lindblad pulses. The phenomenological master equation is then extended to include a nonlinear Beretta term that describes the evolution of a closed system with increasing entropy. An external bath environment is stipulated by a fixed temperature in two different ways. Here we explore the case of a simple one-qubit system in preparation for generalization to multi-qubit, qutrit and hybrid qubit-qutrit systems. This model master equation can be used to test the stability of memory and the efficacy of quantum gates. The properties of such hybrid master equations are explored, with emphasis on the role of thermal equilibrium and entropy constraints. Several significant properties of time-dependent qubit evolution are revealed by this simple study.
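
    As a hedged sketch of the Lindblad part of such a master equation (single qubit, amplitude damping, toy Hamiltonian and rate, plain Euler time stepping rather than the author's scheme):

        import numpy as np

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma-minus
        H = 0.5 * sx                                     # toy Hamiltonian
        gamma, dt = 0.1, 0.001                           # damping rate, step

        rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in |1><1|
        for _ in range(5000):
            comm = H @ rho - rho @ H
            diss = sm @ rho @ sm.conj().T - 0.5 * (
                sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm)
            rho = rho + dt * (-1j * comm + gamma * diss)
        print(rho.real.round(3))   # excited-state population has decayed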

  11. Modeling Cu2+-Aβ complexes from computational approaches

    Science.gov (United States)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  12. Modeling Cu2+-Aβ complexes from computational approaches

    Directory of Open Access Journals (Sweden)

    Jorge Alí-Torres

    2015-09-01

    Full Text Available Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  13. Modeling Cu{sup 2+}-Aβ complexes from computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2015-09-15

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu{sup 2+} metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu{sup 2+}-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu{sup 2+}-Aβ coordination and to build plausible Cu{sup 2+}-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  14. Computational modeling of membrane proteins.

    Science.gov (United States)

    Koehler Leman, Julia; Ulmschneider, Martin B; Gray, Jeffrey J

    2015-01-01

    The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the Protein Data Bank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, highlighting how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefited from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade.

  15. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to enable very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  16. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  17. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  18. Computational Modelling in Cancer: Methods and Applications

    Directory of Open Access Journals (Sweden)

    Konstantina Kourou

    2015-01-01

    Full Text Available Computational modelling of diseases is an emerging field, proven valuable for the diagnosis, prognosis and treatment of disease. Cancer is one of the diseases where computational modelling provides enormous advancements, allowing medical professionals to perform in silico experiments and gain insights prior to any in vivo procedure. In this paper, we review the most recent computational models that have been proposed for cancer. Well-known databases used for computational modelling experiments, as well as the various markup language representations, are discussed. In addition, recent state-of-the-art research studies related to tumour growth and angiogenesis modelling are presented.

  19. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  20. Representations of physical plausibility revealed by event-related potentials.

    Science.gov (United States)

    Roser, Matthew E; Fugelsang, Jonathan A; Handy, Todd C; Dunbar, Kevin N; Gazzaniga, Michael S

    2009-08-05

    Maintaining an accurate mental representation of the current environment is crucial to detecting change in that environment and ensuring behavioral coherence. Past experience with interactions between objects, such as collisions, has been shown to influence the perception of object interactions. To assess whether mental representations of object interactions derived from experience influence the maintenance of a mental model of the current stimulus environment, we presented physically plausible and implausible collision events while recording brain electrical activity. The parietal P300 response to 'oddball' events was found to be modulated by the physical plausibility of the stimuli, suggesting that past experience of object interactions can influence working memory processes involved in monitoring ongoing changes to the environment.

  1. Model of computation for Fourier optical processors

    Science.gov (United States)

    Naughton, Thomas J.

    2000-05-01

    We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors.
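
    A small numerical sketch of the correlation such a processor computes optically, here simulated with FFTs (scene, target and sizes are invented): the filter spectrum is conjugated and multiplied with the scene spectrum, and the inverse transform peaks at the target location.

        import numpy as np

        rng = np.random.default_rng(1)
        scene = rng.random((64, 64))
        scene[20:30, 20:30] += 2.0            # embed a bright target patch
        target = scene[20:30, 20:30]

        filt = np.zeros_like(scene)
        filt[:10, :10] = target               # reference placed at the origin
        corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(filt))).real
        print(np.unravel_index(corr.argmax(), corr.shape))   # -> (20, 20)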

  3. A computational model for feature binding

    Institute of Scientific and Technical Information of China (English)

    SHI ZhiWei; SHI ZhongZhi; LIU Xi; SHI ZhiPing

    2008-01-01

    The "Binding Problem" is an important problem across many disciplines, including psychology, neuroscience, computational modeling, and even philosophy. In this work, we proposed a novel computational model, Bayesian Linking Field Model, for feature binding in visual perception, by combining the idea of noisy neuron model, Bayesian method, Linking Field Network and competitive mechanism.Simulation Experiments demonstrated that our model perfectly fulfilled the task of feature binding in visual perception and provided us some enlightening idea for future research.

  4. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.

  5. Computational modelling of genome-wide [corrected] transcription assembly networks using a fluidics analogy.

    Directory of Open Access Journals (Sweden)

    Yousry Y Azmy

    Full Text Available Understanding how a myriad of transcription regulators work to modulate mRNA output at thousands of genes remains a fundamental challenge in molecular biology. Here we develop a computational tool to aid in assessing the plausibility of gene regulatory models derived from genome-wide expression profiling of cells mutant for transcription regulators. mRNA output is modelled as fluid flow in a pipe lattice, with assembly of the transcription machinery represented by the effect of valves. Transcriptional regulators are represented as external pressure heads that determine flow rate. Modelling mutations in regulatory proteins is achieved by adjusting valves' on/off settings. The topology of the lattice is designed by the experimentalist to resemble the expected interconnection between the modelled agents and their influence on mRNA expression. Users can compare multiple lattice configurations so as to find the one that minimizes the error with experimental data. This computational model provides a means to test the plausibility of transcription regulation models derived from large genomic data sets.
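
    A minimal sketch of the fluidics analogy, reduced to a single two-valve path rather than the paper's lattice (all numbers invented): mRNA output is the flow driven by a pressure head through valve conductances in series, and a knocked-out regulator is a closed valve.

        def flow(g1, g2, pressure=1.0):
            # Flow through two valves in series: the combined conductance
            # is 1 / (1/g1 + 1/g2); a closed valve (g = 0) blocks flow.
            if g1 == 0 or g2 == 0:
                return 0.0
            return pressure / (1 / g1 + 1 / g2)

        print(flow(1.0, 1.0))   # wild type: 0.5
        print(flow(1.0, 0.0))   # regulator mutant: 0.0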

  6. Two Classes of Models of Granular Computing

    Institute of Scientific and Technical Information of China (English)

    Daowu Pei

    2006-01-01

    This paper reviews a class of important models of granular computing which are induced by equivalence relations, by general binary relations, or by neighborhood systems, and proposes a class of models of granular computing which are induced by coverings of the given universe.

  7. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  9. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  10. Element-Based Computational Model

    Directory of Open Access Journals (Sweden)

    Conrad Mueller

    2012-02-01

    Full Text Available A variation on the data-flow model is proposed for developing parallel architectures. While the model is a data-driven model, it has significant differences from the data-flow model. The proposed model has an evaluation cycle of processing elements (encapsulated data) that is similar to the instruction cycle of the von Neumann model. The elements contain the information required to process them. The model is inherently parallel. An emulation of the model has been implemented. The objective of this paper is to motivate support for taking the research further. Using matrix multiplication as a case study, the element/data-flow based model is compared with the instruction-based model. This is done using complexity analysis followed by empirical testing to verify this analysis. The positive results are given as motivation for the research to be taken to the next stage - that is, implementing the model using FPGAs.

  11. A swarm intelligence framework for reconstructing gene networks: searching for biologically plausible architectures.

    Science.gov (United States)

    Kentzoglanakis, Kyriakos; Poole, Matthew

    2012-01-01

    In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
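
    A hedged sketch of the RNN formalism typically used here (rates and sizes invented): a candidate weight matrix proposed by ACO would be scored, and its weights tuned by PSO, against one-step prediction error on the expression series.

        import numpy as np

        def rnn_step(x, W, b, lam, dt=0.1):
            # One Euler step of the gene-network RNN:
            # dx_i/dt = sigmoid(sum_j W_ij x_j + b_i) - lam_i * x_i
            return x + dt * (1.0 / (1.0 + np.exp(-(W @ x + b))) - lam * x)

        def fitness(W, b, lam, series):
            # Squared one-step prediction error (the PSO objective).
            return sum(np.sum((rnn_step(series[t], W, b, lam) - series[t + 1]) ** 2)
                       for t in range(len(series) - 1))

        rng = np.random.default_rng(0)
        n = 4
        W, b, lam = rng.normal(size=(n, n)), np.zeros(n), np.ones(n)
        series = [rng.random(n)]
        for _ in range(20):
            series.append(rnn_step(series[-1], W, b, lam))
        print(fitness(W, b, lam, series))   # 0.0 for the generating parameters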

  12. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    Science.gov (United States)

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  13. Computational modeling of lipoprotein metabolism

    NARCIS (Netherlands)

    Schalkwijk, Daniël Bernardus van

    2013-01-01

    This PhD thesis contains the following chapters. The first part, comprising chapters 2 and 3, mainly concerns model development. Chapter 2 describes the development of a mathematical modeling framework within which different diagnostic models based on lipoprotein profiles can be developed, and a first

  14. Computer Aided Modelling – Opportunities and Challenges

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter considers the opportunities that are present in developing, extending and applying aspects of computer-aided modelling principles and practice. What are the best tasks to be done by modellers, and what needs the application of CAPE tools? How do we efficiently develop model-based solutions?

  15. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students the possibility to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students the possibility to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  16. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows researchers to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  17. Visual and Computational Modelling of Minority Games

    OpenAIRE

    Robertas Damaševičius; Darius Ašeriškis

    2017-01-01

    The paper analyses the Minority Game and focuses on analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides the embedded executable modelling framework to evaluate...

  18. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application-specific models that are fit for purpose. There is a range of computer-aided modelling tools available that help to define a taxonomy of aspects around conservation, constraints and constitutive relations. Aspects of the ICAS-MoT toolbox are given to illustrate the functionality of a computer-aided modelling tool, which incorporates an interface to MS Excel.

  19. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
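
    One aspect named above, references to biological entities, already yields a usable similarity when sketched as set overlap (the identifiers below are arbitrary examples, not from any database entry):

        def jaccard(a, b):
            # Jaccard index over the sets of entity annotations
            # (e.g. GO terms, UniProt or ChEBI identifiers) two models cite.
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        m1 = {'GO:0006096', 'P69905', 'CHEBI:17234'}
        m2 = {'GO:0006096', 'CHEBI:17234', 'P68871'}
        print(jaccard(m1, m2))   # 0.5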

  20. Computational models for analyzing lipoprotein profiles

    NARCIS (Netherlands)

    Graaf, A.A. de; Schalkwijk, D.B. van

    2011-01-01

    At present, several measurement technologies are available for generating highly detailed concentration-size profiles of lipoproteins, offering increased diagnostic potential. Computational models are useful in aiding the interpretation of these complex datasets and making the data more accessible f

  1. Informing mechanistic toxicology with computational molecular models.

    Science.gov (United States)

    Goldsmith, Michael R; Peterson, Shane D; Chang, Daniel T; Transue, Thomas R; Tornero-Velez, Rogelio; Tan, Yu-Mei; Dary, Curtis C

    2012-01-01

    Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular-level information that augments, enriches, and complements in vitro and in vivo efforts. From a molecular biophysical ansatz, we describe how 3D molecular modeling methods used to numerically evaluate the classical pair-wise potential at the chemical/biological interface can inform mechanism of action and the dose-response paradigm of modern toxicology. With an emphasis on molecular docking, 3D-QSAR and pharmacophore/toxicophore approaches, we demonstrate how these methods can be integrated with chemoinformatic and toxicogenomic efforts into a tiered computational toxicology workflow. We describe generalized protocols in which 3D computational molecular modeling is used to enhance our ability to predict and model the most relevant toxicokinetic, metabolic, and molecular toxicological endpoints, thereby accelerating the computational toxicology-driven basis of modern risk assessment while providing a starting point for rational sustainable molecular design.

  2. Computational fluid dynamics modeling in yarn engineering

    CSIR Research Space (South Africa)

    Patanaik, A

    2011-07-01

    Full Text Available This chapter deals with the application of computational fluid dynamics (CFD) modeling in reducing yarn hairiness during the ring spinning process and thereby “engineering” yarn with desired properties. Hairiness significantly affects the appearance...

  3. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  4. A new epidemic model of computer viruses

    Science.gov (United States)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-06-01

    This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincaré-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to prohibit the virus prevalence.

  5. Anatomically Plausible Surface Alignment and Reconstruction

    DEFF Research Database (Denmark)

    Paulsen, Rasmus R.; Larsen, Rasmus

    2010-01-01

    With the increasing clinical use of 3D surface scanners, there is a need for accurate and reliable algorithms that can produce anatomically plausible surfaces. In this paper, a combined method for surface alignment and reconstruction is proposed. It is based on an implicit surface representation...... combined with a Markov Random Field regularisation method. Conceptually, the method maintains an implicit ideal description of the sought surface. This implicit surface is iteratively updated by realigning the input point sets and Markov Random Field regularisation. The regularisation is based on a prior...... energy that has earlier proved to be particularly well suited for human surface scans. The method has been tested on full cranial scans of ten test subjects and on several scans of the outer human ear....

  6. The Role of Plausible Values in Large-Scale Surveys

    Science.gov (United States)

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…
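
    A minimal sketch of how analyses over M plausible values are typically combined (Rubin's rules; all numbers invented):

        import numpy as np

        def combine_plausible_values(estimates, variances):
            # Point estimate: mean over the M per-PV estimates. Total
            # variance: mean within-PV sampling variance plus the
            # between-PV variance inflated by (1 + 1/M).
            m = len(estimates)
            between = np.var(estimates, ddof=1)
            return np.mean(estimates), np.mean(variances) + (1 + 1 / m) * between

        means = [502.1, 499.8, 501.3, 500.6, 503.0]   # five PV-based means
        variances = [4.0, 4.2, 3.9, 4.1, 4.0]         # their sampling variances
        print(combine_plausible_values(means, variances))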

  7. Comprehending Conflicting Science-Related Texts: Graphs as Plausibility Cues

    Science.gov (United States)

    Isberner, Maj-Britt; Richter, Tobias; Maier, Johanna; Knuth-Herzig, Katja; Horz, Holger; Schnotz, Wolfgang

    2013-01-01

    When reading conflicting science-related texts, readers may attend to cues which allow them to assess plausibility. One such plausibility cue is the use of graphs in the texts, which are regarded as typical of "hard science." The goal of our study was to investigate the effects of the presence of graphs on the perceived plausibility and…

  8. Computer Model Locates Environmental Hazards

    Science.gov (United States)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
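
    A toy sketch of the 'retain only the darkest parts' step, with a random array standing in for a satellite image and an arbitrarily chosen 5% threshold:

        import numpy as np

        rng = np.random.default_rng(2)
        image = rng.integers(0, 256, (100, 100))   # stand-in grey-scale image
        dark = image < np.percentile(image, 5)     # keep the darkest 5%
        print(dark.sum(), 'candidate tire-pile pixels')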

  9. Invariant visual object recognition: biologically plausible approaches.

    Science.gov (United States)

    Robinson, Leigh; Rolls, Edmund T

    2015-10-01

    Key properties of inferior temporal cortex neurons are described, and then, the biological plausibility of two leading approaches to invariant visual object recognition in the ventral visual system is assessed to investigate whether they account for these properties. Experiment 1 shows that VisNet performs object classification with random exemplars comparably to HMAX, except that the final layer C neurons of HMAX have a very non-sparse representation (unlike that in the brain) that provides little information in the single-neuron responses about the object class. Experiment 2 shows that VisNet forms invariant representations when trained with different views of each object, whereas HMAX performs poorly when assessed with a biologically plausible pattern association network, as HMAX has no mechanism to learn view invariance. Experiment 3 shows that VisNet neurons do not respond to scrambled images of faces, and thus encode shape information. HMAX neurons responded with similarly high rates to the unscrambled and scrambled faces, indicating that low-level features including texture may be relevant to HMAX performance. Experiment 4 shows that VisNet can learn to recognize objects even when the view provided by the object changes catastrophically as it transforms, whereas HMAX has no learning mechanism in its S-C hierarchy that provides for view-invariant learning. This highlights some requirements for the neurobiological mechanisms of high-level vision, and how some different approaches perform, in order to help understand the fundamental underlying principles of invariant visual object recognition in the ventral visual stream.
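
    VisNet's view invariance rests on a trace learning rule; the sketch below shows the idea with invented parameters (it is not the authors' implementation). The postsynaptic trace carries activity across successive views, so temporally adjacent views of one object strengthen the same weights.

        import numpy as np

        def trace_update(w, x, trace, eta=0.05, tau=0.8):
            # Blend the current output with its recent trace, strengthen
            # the weights toward the current input, then renormalize.
            y = w @ x
            trace = (1 - tau) * y + tau * trace
            w = w + eta * trace * x
            return w / np.linalg.norm(w), trace

        rng = np.random.default_rng(0)
        w, tr = rng.random(8), 0.0
        for view in rng.random((5, 8)):   # successive transforms of one object
            w, tr = trace_update(w, view, tr)
        print(w.round(2))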

  10. Biological modelling of a computational spiking neural network with neuronal avalanches

    Science.gov (United States)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'.
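
    A hedged sketch of one common diagnostic for this critical state, the branching ratio of successive binned spike counts (sigma near 1 marks criticality; the counts here are synthetic, not recorded data):

        import numpy as np

        rng = np.random.default_rng(3)
        counts = rng.poisson(5, 1000)       # stand-in for binned spike counts
        prev, nxt = counts[:-1], counts[1:]
        active = prev > 0
        sigma = np.mean(nxt[active] / prev[active])
        print(round(float(sigma), 2))       # ~1 would suggest criticality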

  11. Parallel computing in atmospheric chemistry models

    Energy Technology Data Exchange (ETDEWEB)

    Rotman, D. [Lawrence Livermore National Lab., CA (United States). Atmospheric Sciences Div.

    1996-02-01

    Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.

  12. A Computational Framework for Realistic Retina Modeling.

    Science.gov (United States)

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  13. Proceedings Fifth Workshop on Developments in Computational Models--Computational Models From Nature

    CERN Document Server

    Cooper, S Barry; 10.4204/EPTCS.9

    2009-01-01

    The special theme of DCM 2009, co-located with ICALP 2009, concerned Computational Models From Nature, with a particular emphasis on computational models derived from physics and biology. The intention was to bring together different approaches - in a community with a strong foundational background as proffered by the ICALP attendees - to create inspirational cross-boundary exchanges, and to lead to innovative further research. Specifically DCM 2009 sought contributions in quantum computation and information, probabilistic models, chemical, biological and bio-inspired ones, including spatial models, growth models and models of self-assembly. Contributions putting to the test logical or algorithmic aspects of computing (e.g., continuous computing with dynamical systems, or solid state computing models) were also very much welcomed.

  14. Neural networks, nativism, and the plausibility of constructivism.

    Science.gov (United States)

    Quartz, S R

    1993-09-01

    Recent interest in PDP (parallel distributed processing) models is due in part to the widely held belief that they challenge many of the assumptions of classical cognitive science. In the domain of language acquisition, for example, there has been much interest in the claim that PDP models might undermine nativism. Related arguments based on PDP learning have also been given against Fodor's anti-constructivist position--a position that has contributed to the widespread dismissal of constructivism. A limitation of many of the claims regarding PDP learning, however, is that the principles underlying this learning have not been rigorously characterized. In this paper, I examine PDP models from within the framework of Valiant's PAC (probably approximately correct) model of learning, now the dominant model in machine learning, and which applies naturally to neural network learning. From this perspective, I evaluate the implications of PDP models for nativism and Fodor's influential anti-constructivist position. In particular, I demonstrate that, contrary to a number of claims, PDP models are nativist in a robust sense. I also demonstrate that PDP models actually serve as a good illustration of Fodor's anti-constructivist position. While these results may at first suggest that neural network models in general are incapable of the sort of concept acquisition that is required to refute Fodor's anti-constructivist position, I suggest that there is an alternative form of neural network learning that demonstrates the plausibility of constructivism. This alternative form of learning is a natural interpretation of the constructivist position in terms of neural network learning, as it employs learning algorithms that incorporate the addition of structure in addition to weight modification schemes. By demonstrating that there is a natural and plausible interpretation of constructivism in terms of neural network learning, the position that nativism is the only plausible model of

  15. Computer Modeling of Direct Metal Laser Sintering

    Science.gov (United States)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.

  16. Computational Modeling of Culture's Consequences

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, T.

    2010-01-01

    This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture, resulti

  17. Computational aspects of premixing modelling

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, D.F. [Sydney Univ., NSW (Australia). Dept. of Chemical Engineering; Witt, P.J.

    1998-01-01

    In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests, a MAGICO test and to investigate various sensitivities of the solution. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)
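
    For context on the scheme under scrutiny, a minimal sketch (an assumed 1-D steady advection-diffusion model problem, not the multiphase premixing equations) of the choice the hybrid scheme automates: central differencing of the convective term where the cell Peclet number allows, first-order upwind otherwise.

```python
# Hybrid differencing illustrated on 1-D steady advection-diffusion.
# Velocity, diffusivity, and grid are assumed for illustration.
import numpy as np

n, L = 50, 1.0
dx = L / n
u, D = 1.0, 0.02                      # velocity and diffusivity (assumed)
Pe = u * dx / D                       # cell Peclet number (= 1 here)

phi = np.zeros(n)
phi[0] = 1.0                          # Dirichlet inlet; outlet stays at 0

# Hybrid choice: central differencing of convection while |Pe| <= 2,
# otherwise fall back to first-order upwind.
if abs(Pe) <= 2.0:
    aW, aE = D / dx + u / 2.0, D / dx - u / 2.0
else:
    aW, aE = D / dx + max(u, 0.0), D / dx + max(-u, 0.0)

# Steady-state Gauss-Seidel sweeps over the interior cells.
for _ in range(2000):
    for i in range(1, n - 1):
        phi[i] = (aW * phi[i - 1] + aE * phi[i + 1]) / (aW + aE)

print(phi[:5])
```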

  18. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    Full Text Available The paper analyses the Minority Game and focuses on analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting of Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides the embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.
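
    As background for the variants discussed, here is a compact sketch of the standard binary, memory-based Minority Game (population size, memory, and strategy count assumed); this is the textbook formulation, not the UAREI implementation.

```python
# Basic Minority Game: agents with fixed random strategies pick the side
# they hope will be in the minority. Sizes and parameters are assumed.
import numpy as np

rng = np.random.default_rng(42)
n_agents, memory, n_strategies, rounds = 101, 3, 2, 500
history_states = 2 ** memory

# Each strategy is a lookup table from recent history to an action in {-1, +1}.
strategies = rng.choice([-1, 1], size=(n_agents, n_strategies, history_states))
scores = np.zeros((n_agents, n_strategies))
history = 0

for _ in range(rounds):
    best = scores.argmax(axis=1)                 # play the best-scoring strategy
    actions = strategies[np.arange(n_agents), best, history]
    minority = -np.sign(actions.sum())           # the minority side wins
    # Virtual scoring: reward every strategy that would have picked the minority.
    scores += (strategies[:, :, history] == minority)
    history = ((history << 1) | int(minority > 0)) % history_states

print("attendance imbalance in last round:", actions.sum())
```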

  19. Complex Learning in Bio-plausible Memristive Networks.

    Science.gov (United States)

    Deng, Lei; Li, Guoqi; Deng, Ning; Wang, Dong; Zhang, Ziyang; He, Wei; Li, Huanglong; Pei, Jing; Shi, Luping

    2015-06-19

    The emerging memristor-based neuromorphic engineering promises an efficient computing paradigm. However, the lack of both internal dynamics in the previous feedforward memristive networks and efficient learning algorithms in recurrent networks, fundamentally limits the learning ability of existing systems. In this work, we propose a framework to support complex learning functions by introducing dedicated learning algorithms to a bio-plausible recurrent memristive network with internal dynamics. We fabricate iron oxide memristor-based synapses, with well controllable plasticity and a wide dynamic range of excitatory/inhibitory connection weights, to build the network. To adaptively modify the synaptic weights, the comprehensive recursive least-squares (RLS) learning algorithm is introduced. Based on the proposed framework, the learning of various timing patterns and a complex spatiotemporal pattern of human motor is demonstrated. This work paves a new way to explore the brain-inspired complex learning in neuromorphic systems.
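
    The RLS recursion at the heart of such training fits in a few lines; in this sketch the "network state" is a random placeholder rather than actual memristor readings, and all dimensions and targets are assumed.

```python
# Recursive least-squares (RLS) weight adaptation, sketched on placeholder data.
import numpy as np

n = 10                           # number of presynaptic units (assumed)
w = np.zeros(n)                  # readout weights
P = np.eye(n) / 1e-2             # running estimate of the inverse correlation

rng = np.random.default_rng(0)
for t in range(200):
    r = rng.standard_normal(n)   # network state (stand-in for device currents)
    target = np.sin(0.1 * t)     # desired output trace
    error = w @ r - target
    # Standard RLS recursion: rank-one update of P, then weight correction.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= error * k

print("final error:", abs(w @ r - target))
```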

  20. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large-scale computer simulations are presented, focusing on universality of the ac response in the extreme disorder limit. Finally, some important unsolved problems relating to hopping models for ac conduction are listed.
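
    A small kinetic Monte Carlo sketch of the symmetric hopping picture (ring size, temperature, and barrier distribution all assumed): a particle hops between neighboring sites over quenched random barriers with Arrhenius rates.

```python
# Kinetic Monte Carlo walk over random barriers on a ring (parameters assumed).
import numpy as np

rng = np.random.default_rng(7)
n_sites, beta = 200, 5.0                    # ring size, inverse temperature
barriers = rng.uniform(0.0, 1.0, n_sites)   # one quenched barrier per link
rates = np.exp(-beta * barriers)            # Arrhenius jump rate over each link

pos, t = 0, 0.0
for _ in range(10000):
    right = rates[pos % n_sites]            # link to the right of the particle
    left = rates[(pos - 1) % n_sites]       # link to the left
    total = left + right
    t += rng.exponential(1.0 / total)       # waiting time to the next jump
    pos += 1 if rng.random() < right / total else -1

print("elapsed time:", t, " net displacement:", pos)
```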

  1. Slow Computing Simulation of Bio-plausible Control

    Science.gov (United States)

    2012-03-01

    …following discussion have been derived using a few key assumptions: occlusions are ignored, the set of discrete luminance functions is sampled… …simulation framework that was provided by the Murray group, and the Grand Unified Fly (GUF) simulation framework developed by Dr. Andrew Straw… …sensor inputs, parallelizing sensor processing, and rapidly responding. Thus, the parallel nature of the processors is key to eliciting a low power…

  2. Inference and Plausible Reasoning in a Natural Language Understanding System Based on Object-Oriented Semantics

    CERN Document Server

    Ostapov, Yuriy

    2012-01-01

    Algorithms of inference in a computer system oriented to input and semantic processing of text information are presented. Such inference is necessary for logical questions when the direct comparison of objects from a question and database can not give a result. The following classes of problems are considered: a check of hypotheses for persons and non-typical actions, the determination of persons and circumstances for non-typical actions, planning actions, the determination of event cause and state of persons. To form an answer both deduction and plausible reasoning are used. As a knowledge domain under consideration is social behavior of persons, plausible reasoning is based on laws of social psychology. Proposed algorithms of inference and plausible reasoning can be realized in computer systems closely connected with text processing (criminology, operation of business, medicine, document systems).

  3. Mechanistic models in computational social science

    Science.gov (United States)

    Holme, Petter; Liljeros, Fredrik

    2015-09-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  4. Mechanistic Models in Computational Social Science

    CERN Document Server

    Holme, Petter

    2015-01-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes -- to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emerging phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  5. Computational modeling of failure in composite laminates

    NARCIS (Netherlands)

    Van der Meer, F.P.

    2010-01-01

    There is no state of the art computational model that is good enough for predictive simulation of the complete failure process in laminates. Already on the single ply level controversy exists. Much work has been done in recent years in the development of continuum models, but these fail to predict t

  6. Computational Intelligence. Mortality Models for the Actuary

    NARCIS (Netherlands)

    Willemse, W.J.

    2001-01-01

    This thesis applies computational intelligence to the field of actuarial (insurance) science. In particular, this thesis deals with life insurance where mortality modelling is important. Actuaries use ancient models (mortality laws) from the nineteenth century, for example Gompertz' and Makeham's la

  7. Generating computational models for serious gaming

    NARCIS (Netherlands)

    Westera, Wim

    2014-01-01

    Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of

  8. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  9. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  10. Plausibility and evidence: the case of homeopathy.

    Science.gov (United States)

    Rutten, Lex; Mathie, Robert T; Fisher, Peter; Goossens, Maria; van Wassenhoven, Michel

    2013-08-01

    Homeopathy is controversial and hotly debated. The conclusions of systematic reviews of randomised controlled trials of homeopathy vary from 'comparable to conventional medicine' to 'no evidence of effects beyond placebo'. It is claimed that homeopathy conflicts with scientific laws and that homoeopaths reject the naturalistic outlook, but no evidence has been cited. We are homeopathic physicians and researchers who do not reject the scientific outlook; we believe that examination of the prior beliefs underlying this enduring stand-off can advance the debate. We show that interpretations of the same set of evidence--for homeopathy and for conventional medicine--can diverge. Prior disbelief in homeopathy is rooted in the perceived implausibility of any conceivable mechanism of action. Using the 'crossword analogy', we demonstrate that plausibility bias impedes assessment of the clinical evidence. Sweeping statements about the scientific impossibility of homeopathy are themselves unscientific: scientific statements must be precise and testable. There is growing evidence that homeopathic preparations can exert biological effects; due consideration of such research would reduce the influence of prior beliefs on the assessment of systematic review evidence.

  11. Parallel Computing of Ocean General Circulation Model

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper discusses the parallel computing of the third-generation Ocean General Circulation Model (OGCM) from the State Key Laboratory of Numerical Modeling for Atmospheric Science and Geophysical Fluid Dynamics (LASG), Institute of Atmosphere Physics (IAP). Meanwhile, several optimization strategies for parallel computing of the OGCM (POGCM) on Scalable Shared Memory Multiprocessors (S2MP) are presented. Using the Message Passing Interface (MPI), we obtain superlinear speedup on an SGI Origin 2000 for the parallel OGCM (POGCM) after optimization.

  12. On the completeness of quantum computation models

    CERN Document Server

    Arrighi, Pablo

    2010-01-01

    The notion of computability is stable (i.e. independent of the choice of an indexing) over infinite-dimensional vector spaces provided they have a finite "tensorial dimension". Such vector spaces with a finite tensorial dimension permit to define an absolute notion of completeness for quantum computation models and give a precise meaning to the Church-Turing thesis in the framework of quantum theory. (Extra keywords: quantum programming languages, denotational semantics, universality.)

  13. Security Management Model in Cloud Computing Environment

    OpenAIRE

    2016-01-01

    In the cloud computing environment, the number of cloud virtual machines (VMs) keeps growing, and VM security and management face enormous challenges. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling, and studies the corresponding virtual machine security architecture, based on AHP (Analytic Hierarchy Process) virtual machine de...

  14. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular. .
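
    The case study lends itself to a compact illustration; below is a minimal sketch of the theta-rule finite difference solver for u' = -au, u(0) = I (theta = 0, 1/2, 1 giving Forward Euler, Crank-Nicolson, and Backward Euler), with the parameter values assumed.

```python
# Theta-rule discretization of exponential decay, u' = -a*u, u(0) = I.
import numpy as np

def theta_rule(I, a, T, dt, theta):
    """Solve u' = -a*u on [0, T] with the theta-rule; returns all steps."""
    n = int(round(T / dt))
    u = np.empty(n + 1)
    u[0] = I
    for k in range(n):
        u[k + 1] = u[k] * (1 - (1 - theta) * a * dt) / (1 + theta * a * dt)
    return u

# Crank-Nicolson (theta = 0.5), compared against the exact solution.
u = theta_rule(I=1.0, a=2.0, T=4.0, dt=0.1, theta=0.5)
print("error at T:", abs(u[-1] - np.exp(-2.0 * 4.0)))
```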

  15. A computational model of analogical reasoning

    Institute of Scientific and Technical Information of China (English)

    李波; 赵沁平

    1997-01-01

    A computational model of analogical reasoning is presented, which divides analogical reasoning process into four subprocesses, i.e. reminding, elaboration, matching and transfer. For each subprocess, its role and the principles it follows are given. The model is discussed in detail, including salient feature-based reminding, relevance-directed elaboration, an improved matching model and a transfer model. And the advantages of this model are summarized based on the results of BHARS, which is an analogical reasoning system implemented by this model.

  16. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  17. On the computational modeling of FSW processes

    OpenAIRE

    Agelet de Saracibar Bosch, Carlos; Chiumenti, Michèle; Santiago, Diego de; Cervera Ruiz, Miguel; Dialami, Narges; Lombera, Guillermo

    2010-01-01

    This work deals with the computational modeling and numerical simulation of Friction Stir Welding (FSW) processes. Here a quasi-static, transient, mixed stabilized Eulerian formulation is used. Norton-Hoff and Sheppard-Wright rigid thermoplastic material models have been considered. A product formula algorithm, leading to a staggered solution scheme, has been used. The model has been implemented into the in-house developed FE code COMET. Results obtained in the simulation of FSW process are c...

  18. An improved computational constitutive model for glass

    Science.gov (United States)

    Holmquist, Timothy J.; Johnson, Gordon R.; Gerlach, Charles A.

    2017-01-01

    In 2011, Holmquist and Johnson presented a model for glass subjected to large strains, high strain rates and high pressures. It was later shown that this model produced solutions that were severely mesh dependent, converging to a solution that was much too strong. This article presents an improved model for glass that uses a new approach to represent the interior and surface strength that is significantly less mesh dependent. This new formulation allows for the laboratory data to be accurately represented (including the high tensile strength observed in plate-impact spall experiments) and produces converged solutions that are in good agreement with ballistic data. The model also includes two new features: one that decouples the damage model from the strength model, providing more flexibility in defining the onset of permanent deformation; the other provides for a variable shear modulus that is dependent on the pressure. This article presents a review of the original model, a description of the improved model and a comparison of computed and experimental results for several sets of ballistic data. Of special interest are computed and experimental results for two impacts onto a single target, and the ability to compute the damage velocity in agreement with experiment data. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  19. Computer Aided Modelling – Opportunities and Challenges

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter considers the opportunities that are present in developing, extending and applying aspects of computer-aided modelling principles and practice. What are the best tasks to be done by modellers and what needs the application of CAPE tools? How do we efficiently develop model-based solutions to significant problems? The important issues of workflow and data flow are discussed together with fit-for-purpose model development. As well, the lack of tools around multiscale modelling provides opportunities for the development of efficient tools to address such challenges. The ability… and opportunities are discussed for such systems.

  20. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book. --Hsun-Hsien Chang, Computing Reviews, March 2012. My favorite chapters were on dynamic linear models and vector AR and vector ARMA models. --William Seaver, Technometrics, August 2011. … a very modern entry to the field of time-series modelling, with a rich reference list of the current lit…

  1. Computational algebraic geometry of epidemic models

    Science.gov (United States)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension and Hilbert polynomials; these computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the Groebner basis, the Hilbert dimension, and the Hilbert polynomials. It is hoped that the results obtained in this paper will prove important for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
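
    To convey the flavor of the approach, here is a sketch under assumed equations, using SymPy rather than Maple: the steady states of a basic SIR model with demography form a polynomial system whose Groebner basis exposes the equilibria and the role of R0. The model form and parameters are illustrative, not those of the paper.

```python
# Groebner basis of the steady-state system of an assumed SIR model
# with demography (births/deaths at rate mu).
import sympy as sp

S, I, R = sp.symbols("S I R", positive=True)
beta, gamma, mu, N = sp.symbols("beta gamma mu N", positive=True)

eqs = [
    mu * N - beta * S * I - mu * S,      # dS/dt = 0
    beta * S * I - (gamma + mu) * I,     # dI/dt = 0
    gamma * I - mu * R,                  # dR/dt = 0
]

# Lexicographic order eliminates variables one by one, exposing the equilibria.
gb = sp.groebner(eqs, S, I, R, order="lex")
for poly in gb.exprs:
    print(poly)

# The endemic equilibrium exists when R0 = beta*N/(gamma + mu) > 1.
print("R0 =", beta * N / (gamma + mu))
```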

  2. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state-of-art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under the mechanical stimulus up to optimizing the performance of sports equipment, through the patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease.   The book will be of interest to researchers, Ph.D students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.

  3. Analisis Model Manajemen Insiden Berbasis Cloud Computing

    Directory of Open Access Journals (Sweden)

    Anggi Sukamto

    2015-05-01

    Full Text Available The information technology support adopted by an organization requires management so that its use can meet the objectives of deploying the technology. One information technology service management framework that organizations can adopt is the Information Technology Infrastructure Library (ITIL). Service support is part of the ITIL process. In general, service support activities are carried out using technology that can be accessed over the internet, a condition that leads to the concept of cloud computing. Cloud computing allows an institution or company to manage resources through the internet. The focus of this research is to analyze the processes and actors involved in service support, particularly in the incident management process, and to identify which of those actors could potentially be handed over to cloud computing services. Based on the analysis performed, the proposed cloud-based incident management model can be applied in any organization that already uses computer technology to support its operational activities. Keywords—Cloud computing, ITIL, Incident Management, Service Support, Service Desk.

  4. Encoding the target or the plausible preview word? The nature of the plausibility preview benefit in reading Chinese.

    Science.gov (United States)

    Yang, Jinmian; Li, Nan; Wang, Suiping; Slattery, Timothy J; Rayner, Keith

    2014-01-01

    Previous studies have shown that a plausible preview word can facilitate the processing of a target word as compared to an implausible preview word (a plausibility preview benefit effect) when reading Chinese (Yang, Wang, Tong, & Rayner, 2012; Yang, 2013). Regarding the nature of this effect, it is possible that readers processed the meaning of the plausible preview word and did not actually encode the target word (given that the parafoveal preview word lies close to the fovea). The current experiment examined this possibility with three conditions wherein readers received a preview of a target word that was either (1) identical to the target word (identical preview), (2) a plausible continuation of the pre-target text, but the post-target text in the sentence was incompatible with it (initially plausible preview), or (3) not a plausible continuation of the pre-target text, nor compatible with the post-target text (implausible preview). Gaze durations on target words were longer in the initially plausible condition than the identical condition. Overall, the results showed a typical preview benefit, but also implied that readers did not encode the initially plausible preview. Also, a plausibility preview benefit was replicated: gaze durations were longer with implausible previews than the initially plausible ones. Furthermore, late eye movement measures did not reveal differences between the initially plausible and the implausible preview conditions, which argues against the possibility of misreading the plausible preview word as the target word. In sum, these results suggest that a plausible preview word provides benefit in processing the target word as compared to an implausible preview word, and this benefit is only present in early but not late eye movement measures.

  5. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  6. Utilizing computer models for optimizing classroom acoustics

    Science.gov (United States)

    Hinckley, Jennifer M.; Rosenberg, Carl J.

    2002-05-01

    The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.
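
    One concrete quantity such models let a designer iterate on is reverberation time; here is a back-of-envelope sketch using Sabine's formula, T60 = 0.161 V / A, with invented room data (the CATT-acoustic model in the study is, of course, far more detailed).

```python
# Sabine reverberation time for an assumed classroom (all values invented).
surfaces = [
    # (area in m^2, absorption coefficient at 1 kHz)
    (200.0, 0.05),   # painted walls
    (100.0, 0.60),   # acoustic ceiling tile
    (100.0, 0.10),   # vinyl floor
]
volume = 300.0       # room volume, m^3

A = sum(area * alpha for area, alpha in surfaces)   # total absorption, m^2 sabins
t60 = 0.161 * volume / A
print(f"T60 = {t60:.2f} s")  # classrooms typically target roughly 0.4-0.6 s
```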

  7. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  8. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

    Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  9. Computer modeling of loudspeaker arrays in rooms

    Science.gov (United States)

    Schwenke, Roger

    2002-05-01

    Loudspeakers present a special challenge to computational modeling of rooms. When modeling a collection of noncorrelated sound sources, such as a group of musicians, coarse resolution power spectrum and directivities are sufficient. In contrast, a typical loudspeaker array consists of many speakers driven with the same signal, and are therefore almost completely correlated. This can lead to a quite complicated, but stable, pattern of spatial nulls and lobes which depends sensitively on frequency. It has been shown that, to model these interactions accurately, one must have loudspeaker data with 1 deg spatial resolution, 1/24 octave frequency resolution including phase. It will be shown that computer models at such a high resolution can in fact inform design decisions of loudspeaker arrays.
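
    The sensitivity described above is easy to reproduce; below is a minimal far-field sketch (frequency, element spacing, and element count all assumed) comparing a fully correlated line array's lobing pattern with the flat result of incoherent power summation.

```python
# Far-field pattern of a line array of identical, fully correlated drivers.
import numpy as np

c, f = 343.0, 2000.0                     # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c                    # wavenumber
n_src, spacing = 8, 0.3                  # elements and spacing (m), assumed
y = (np.arange(n_src) - (n_src - 1) / 2) * spacing

theta = np.radians(np.arange(-90.0, 90.0, 1.0))   # 1-degree resolution
# Coherent (correlated) summation: complex phasors interfere.
pattern = np.exp(1j * k * np.outer(np.sin(theta), y)).sum(axis=1)
level = 20 * np.log10(np.abs(pattern) / n_src + 1e-12)

# Incoherent (power) summation would instead be flat over angle;
# the deep nulls below only appear in the correlated case.
print("lobe/null span (dB): %.1f to %.1f" % (level.max(), level.min()))
```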

  10. Computational models for synthetic marine infrared clutter

    Science.gov (United States)

    Constantikes, Kim T.; Zysnarski, Adam H.

    1996-06-01

    The next generation of ship defense missiles will need to engage stealthy, passive, sea-skimming missiles. Detection and guidance will occur against a background of sea surface and horizon which can present significant clutter problems for infrared seekers, particularly when targets are comparatively dim. We need a variety of sea clutter models: statistical image models for signal processing algorithm design, clutter occurrence models for systems effectiveness assessment, and constructive image models for synthesizing very large field-of-view (FOV) images with high spatial and temporal resolution. We have implemented and tested such a constructive model. First principle models of water waves and light transport provide a computationally intensive clutter model implemented as a raytracer. Our models include sea, sky, and solar radiance; reflectance; attenuating atmospheres; constructive solid geometry targets; target and water wave dynamics; and simple sensor image formation.

  11. Computational modelling for dry-powder inhalers

    NARCIS (Netherlands)

    Kröger, Ralf; Woolhouse, Robert; Becker, Michael; Wachtel, Herbert; de Boer, Anne; Horner, Marc

    2012-01-01

    Computational fluid dynamics (CFD) is a simulation tool used for modelling powder flow through inhalers to allow optimisation both of device design and drug powder. Here, Ralf Kröger, Consulting Senior CFD Engineer, ANSYS Germany GmbH; Marc Horner, Lead Technical Services Engineer, Healthcare, ANSYS

  12. Agent based computational model of trust

    NARCIS (Netherlands)

    A. Gorobets (Alexander); B. Nooteboom (Bart)

    2004-01-01

    textabstractThis paper employs the methodology of Agent-Based Computational Economics (ACE) to investigate under what conditions trust can be viable in markets. The emergence and breakdown of trust is modeled in a context of multiple buyers and suppliers. Agents adapt their trust in a partner, the w

  13. Integer Programming Models for Computational Biology Problems

    Institute of Scientific and Technical Information of China (English)

    Giuseppe Lancia

    2004-01-01

    Recent years have seen an impressive increase in the use of Integer Programming models for the solution of optimization problems originating in Molecular Biology. In this survey, some of the most successful Integer Programming approaches are described, and a broad overview is given of the application areas of modern Computational Molecular Biology.

  14. Computational modelling for dry-powder inhalers

    NARCIS (Netherlands)

    Kröger, Ralf; Woolhouse, Robert; Becker, Michael; Wachtel, Herbert; de Boer, Anne; Horner, Marc

    2012-01-01

    Computational fluid dynamics (CFD) is a simulation tool used for modelling powder flow through inhalers to allow optimisation both of device design and drug powder. Here, Ralf Kröger, Consulting Senior CFD Engineer, ANSYS Germany GmbH; Marc Horner, Lead Technical Services Engineer, Healthcare,

  15. A Stochastic Dynamic Model of Computer Viruses

    Directory of Open Access Journals (Sweden)

    Chunming Zhang

    2012-01-01

    Full Text Available A stochastic computer virus spread model is proposed and its dynamic behavior is fully investigated. Specifically, we prove the existence and uniqueness of positive solutions, and the stability of the virus-free equilibrium and viral equilibrium by constructing Lyapunov functions and applying Ito's formula. Some numerical simulations are finally given to illustrate our main results.
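
    The flavor of such a model can be sketched with Euler-Maruyama integration of an SIS-type equation with multiplicative noise; the functional form and all parameters below are assumed for illustration, not taken from the paper.

```python
# Euler-Maruyama integration of an assumed stochastic SIS virus model:
# dI = [beta*I*(1-I) - gamma*I] dt + sigma*I*(1-I) dW.
import numpy as np

rng = np.random.default_rng(0)
beta, gamma, sigma = 0.5, 0.2, 0.1   # infection, cure, noise intensity (assumed)
dt, steps = 0.01, 5000
I = 0.01                             # initial infected fraction

trajectory = []
for _ in range(steps):
    drift = beta * I * (1 - I) - gamma * I
    diffusion = sigma * I * (1 - I)
    I += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    I = min(max(I, 0.0), 1.0)        # keep the fraction in [0, 1]
    trajectory.append(I)

print("endemic level ~", np.mean(trajectory[-1000:]))
```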

  16. STEW A Nonlinear Data Modeling Computer Program

    CERN Document Server

    Chen, H

    2000-01-01

    A nonlinear data modeling computer program, STEW, employing the Levenberg-Marquardt algorithm, has been developed to model the experimental ²³⁹Pu(n,f) and ²³⁵U(n,f) cross sections. This report presents results of the modeling of the ²³⁹Pu(n,f) and ²³⁵U(n,f) cross-section data. The calculation of the fission transmission coefficient is based on the double-humped-fission-barrier model of Bjornholm and Lynn. Incident neutron energies of up to 5 MeV are considered.

  17. STEW: A Nonlinear Data Modeling Computer Program

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.

    2000-03-04

    A nonlinear data modeling computer program, STEW, employing the Levenberg-Marquardt algorithm, has been developed to model the experimental {sup 239}Pu(n,f) and {sup 235}U(n,f) cross sections. This report presents results of the modeling of the {sup 239}Pu(n,f) and {sup 235}U(n,f) cross-section data. The calculation of the fission transmission coefficient is based on the double-humped-fission-barrier model of Bjornholm and Lynn. Incident neutron energies of up to 5 MeV are considered.
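
    As an illustration of the algorithm STEW employs (with an invented toy model and synthetic data standing in for the fission cross sections), SciPy's Levenberg-Marquardt driver can be used as follows.

```python
# Levenberg-Marquardt fit of a toy "cross-section-like" model to synthetic data.
import numpy as np
from scipy.optimize import least_squares

def model(params, E):
    # Invented three-parameter shape, purely for illustration.
    a, b, c = params
    return a * np.exp(-b * E) + c

rng = np.random.default_rng(1)
E = np.linspace(0.1, 5.0, 50)                  # "incident energy" grid (MeV)
data = model([2.0, 0.8, 0.3], E) + 0.02 * rng.standard_normal(E.size)

def residuals(params):
    return model(params, E) - data

fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], method="lm")
print("fitted parameters:", fit.x)
```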

  18. Evaluating computational models of cholesterol metabolism.

    Science.gov (United States)

    Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K

    2015-10-01

    Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered. Yet important gaps in our knowledge remain. For example, knowledge on intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism that make use of ordinary differential equations, and we address whether they use assumptions and make predictions in line with current knowledge on cholesterol homeostasis. Having studied the results described by the authors, we have also tested their models. This was done primarily by testing the effect of statin treatment in each model. Ten out of eleven models tested made assumptions in line with current knowledge of cholesterol metabolism. Three of the ten remaining models made correct predictions, i.e., predicting a decrease in plasma total and LDL cholesterol, or an increased uptake of LDL, upon statin treatment. In conclusion, few models of cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles, and we welcome the increased understanding of cholesterol metabolism these are likely to bring.

  19. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science outstanding results are yielded by advanced simulation methods, based on state of the art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extractions from big data, better understanding and predicting of social behaviour and modelling health and environment changes.

  20. Mechanistic models in computational social science

    OpenAIRE

    Petter eHolme; Fredrik eLiljeros

    2015-01-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influenc...

  1. Mechanistic models in computational social science

    OpenAIRE

    Holme, Petter; Liljeros, Fredrik

    2015-01-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes -- to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influ...

  2. A Dualistic Model To Describe Computer Architectures

    Science.gov (United States)

    Nitezki, Peter; Engel, Michael

    1985-07-01

    The Dualistic Model for Computer Architecture Description uses a hierarchy of abstraction levels to describe a computer in arbitrary steps of refinement from the top of the user interface to the bottom of the gate level. In our Dualistic Model the description of an architecture may be divided into two major parts called "Concept" and "Realization". The Concept of an architecture on each level of the hierarchy is an Abstract Data Type that describes the functionality of the computer, together with an implementation of that data type relative to the data type of the next lower level of abstraction. The Realization on each level comprises a language describing the means of user interaction with the machine, and a processor interpreting this language in terms of the language of the lower level. The surface of each hierarchical level, the data type and the language, expresses the behaviour of a machine at this level, whereas the implementation and the processor describe the structure of the algorithms and the system. In this model the Principle of Operation maps the object and computational structure of the Concept onto the structures of the Realization. Describing a system in terms of the Dualistic Model is therefore a process of refinement starting at a mere description of behaviour and ending at a description of structure. This model has proven to be a very valuable tool in exploiting the parallelism in a problem, and it is very transparent in discovering the points where parallelism is lost in a special architecture. It has successfully been used in a project on a survey of Computer Architecture for Image Processing and Pattern Analysis in Germany.

  3. Using critical evaluation to reappraise plausibility judgments: A critical cognitive component of conceptual change

    Science.gov (United States)

    Lombardi, D.

    2011-12-01

    Plausibility judgments--although well represented in conceptual change theories (see, for example, Chi, 2005; diSessa, 1993; Dole & Sinatra, 1998; Posner et al., 1982)--have received little empirical attention until our recent work investigating teachers' and students' understanding of and perceptions about human-induced climate change (Lombardi & Sinatra, 2010, 2011). In our first study with undergraduate students, we found that greater plausibility perceptions of human-induced climate change accounted for significantly greater understanding of weather and climate distinctions after instruction, even after accounting for students' prior knowledge (Lombardi & Sinatra, 2010). In a follow-up study with inservice science and preservice elementary teachers, we showed that anger about the topic of climate change and teaching about climate change was significantly related to implausible perceptions about human-induced climate change (Lombardi & Sinatra, 2011). Results from our recent studies helped to inform our development of a model of the role of plausibility judgments in conceptual change situations. The model applies to situations involving cognitive dissonance, where background knowledge conflicts with an incoming message. In such situations, we define plausibility as a judgment on the relative potential truthfulness of incoming information compared to one's existing mental representations (Rescher, 1976). Students may not consciously think when making plausibility judgments, expending only minimal mental effort in what is referred to as an automatic cognitive process (Stanovich, 2009). However, well-designed instruction could facilitate students' reappraisal of plausibility judgments in more effortful and conscious cognitive processing. Critical evaluation specifically may be one effective method to promote plausibility reappraisal in a classroom setting (Lombardi & Sinatra, in progress). In science education, critical evaluation involves the analysis of how evidentiary

  4. Processor core model for quantum computing.

    Science.gov (United States)

    Yung, Man-Hong; Benjamin, Simon C; Bose, Sougato

    2006-06-09

    We describe an architecture based on a processing "core," where multiple qubits interact perpetually, and a separate "store," where qubits exist in isolation. Computation consists of single qubit operations, swaps between the store and the core, and free evolution of the core. This enables computation using physical systems where the entangling interactions are "always on." Alternatively, for switchable systems, our model constitutes a prescription for optimizing many-qubit gates. We discuss implementations of the quantum Fourier transform, Hamiltonian simulation, and quantum error correction.

  5. Computer Model Of Fragmentation Of Atomic Nuclei

    Science.gov (United States)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.

    1995-01-01

    The High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program was developed to be a computationally efficient, user-friendly, physics-based program for generating databases on the fragmentation of atomic nuclei. The databases generated are used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. The program provides cross sections for the production of individual elements and isotopes in breakups of high-energy heavy ions by the combined nuclear and Coulomb fields of interacting nuclei. It is written in ANSI FORTRAN 77.

  6. Mechanistic models in computational social science

    Directory of Open Access Journals (Sweden)

    Petter eHolme

    2015-09-01

    Full Text Available Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  7. Computational modelling of evolution: ecosystems and language

    CERN Document Server

    Lipowski, Adam

    2008-01-01

    Recently, computational modelling became a very important research tool that enables us to study problems that for decades evaded scientific analysis. Evolutionary systems are certainly examples of such problems: they are composed of many units that might reproduce, diffuse, mutate, die, or in some cases for example communicate. These processes might be of some adaptive value, they influence each other and occur on various time scales. That is why such systems are so difficult to study. In this paper we briefly review some computational approaches, as well as our contributions, to the evolution of ecosystems and language. We start from Lotka-Volterra equations and the modelling of simple two-species prey-predator systems. Such systems are canonical example for studying oscillatory behaviour in competitive populations. Then we describe various approaches to study long-term evolution of multi-species ecosystems. We emphasize the need to use models that take into account both ecological and evolutionary processe...

  8. Queuing theory models for computer networks

    Science.gov (United States)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. The impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed using them because of the lack of fine detail in the network traffic rates, traffic patterns, and the hardware used to implement the networks. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. This Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
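
    To give a flavor of the closed-form queueing results such spreadsheet models rest on (the paper's specific models are not reproduced here), the M/M/1 queue's mean response time is W = 1/(mu - lambda):

```python
# Mean response time of an M/M/1 queue -- a building block for estimating
# average network response under a given traffic load.
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system, W = 1/(mu - lambda); requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# E.g. a channel serving 100 messages/s under a 60 messages/s load:
print(mm1_response_time(60.0, 100.0))   # 0.025 s per message
```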

  9. Computer Aided Design Modeling for Heterogeneous Objects

    CERN Document Server

    Gupta, Vikas; Tandon, Puneet

    2010-01-01

    Heterogeneous object design is an active research area in recent years. Conventional CAD modeling approaches only provide the geometry and topology of the object, but do not contain any information about its materials, and so cannot be used for the fabrication of heterogeneous objects (HO) through rapid prototyping. Current research focuses on computer-aided design issues in heterogeneous object design. A new CAD modeling approach is proposed to integrate material information into geometric regions and thus model the material distributions in the heterogeneous object. Gradient references are used to represent complex-geometry heterogeneous objects that combine geometric intricacy with accurate material distributions. The gradient references allow flexible manipulation of and control over heterogeneous objects, which guarantees local control over the gradient regions of the developed heterogeneous objects. A systematic approach on data flow, processing, computer visualizat...

  10. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    Full Text Available The medical domain has become one of the most important areas of research, owing to the huge amounts of medical information about the symptoms of diseases and about how to distinguish between them in order to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts settle on treatment decisions. This paper introduces four hybrid Rough–Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithms and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology for knowledge extraction, according to different evaluation criteria for the classification of medical datasets. Another purpose was to enhance the framework of KDD processes for supervised learning using the Granular Computing methodology.

  11. Computational Modeling of Vortex Generators for Turbomachinery

    Science.gov (United States)

    Chima, R. V.

    2002-01-01

    In this work computational models were developed and used to investigate applications of vortex generators (VGs) to turbomachinery. The work was aimed at increasing the efficiency of compressor components designed for the NASA Ultra Efficient Engine Technology (UEET) program. Initial calculations were used to investigate the physical behavior of VGs. A parametric study of the effects of VG height was done using 3-D calculations of isolated VGs. A body force model was developed to simulate the effects of VGs without requiring complicated grids. The model was calibrated using 2-D calculations of the VG vanes and was validated using the 3-D results. Then three applications of VGs to a compressor rotor and stator were investigated: 1) The results of the 3-D calculations were used to simulate the use of small casing VGs used to generate rotor preswirl or counterswirl. Computed performance maps were used to evaluate the effects of VGs. 2) The body force model was used to simulate large part-span splitters on the casing ahead of the stator. Computed loss buckets showed the effects of the VGs. 3) The body force model was also used to investigate the use of tiny VGs on the stator suction surface for controlling secondary flows. Near-surface particle traces and exit loss profiles were used to evaluate the effects of the VGs.

  12. Sticker DNA computer model--PartⅡ:Application

    Institute of Scientific and Technical Information of China (English)

    XU Jin; LI Sanping; DONG Yafei; WEI Xiaopeng

    2004-01-01

    The sticker model is one of the basic DNA computer models. It is coded with single- and double-stranded DNA molecules, and it has the advantages that its operations require no strand extension and use no enzymes; what is more, the materials are reusable. It has therefore aroused the attention and interest of scientists in many fields. In this paper, we extend and improve the sticker model, which will be of definite benefit to the construction of DNA computers. This paper is the second part of our series, and mainly focuses on applications of the sticker model. It consists of three sections: first, the matrix representation of the sticker model is presented; then a brief review is given of past research on graph and combinatorial optimization, such as the minimal set covering problem, the vertex covering problem, the Hamiltonian path or cycle problem, the maximal clique problem, the maximal independent set problem and the Steiner spanning tree problem; finally, a DNA algorithm for the graph isomorphism problem based on the sticker model is given.

  13. Computing the complexity for Schelling segregation models

    Science.gov (United States)

    Gerhold, Stefan; Glebsky, Lev; Schneider, Carsten; Weiss, Howard; Zimmermann, Burkhard

    2008-12-01

    The Schelling segregation models are "agent based" population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy.
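    As a concrete caricature of a one-dimensional Schelling model viewed as a finite dynamical system, the sketch below evolves a ring of two agent types by swapping discontent agents of opposite type until no agent wants to move; the window size, threshold and swap rule are simplifying assumptions rather than the paper's exact definitions:

```python
import random

def unhappy(state, i, w=2, tau=0.5):
    """Agent i is discontent if under a fraction tau of its 2w ring
    neighbours share its type."""
    n = len(state)
    like = sum(state[(i + d) % n] == state[i]
               for d in range(-w, w + 1) if d != 0)
    return like / (2 * w) < tau

def run_to_limit(state, max_steps=10_000):
    """Swap random discontent pairs of opposite type until a limit
    (fixed) state is reached."""
    state = list(state)
    for _ in range(max_steps):
        bad = [i for i in range(len(state)) if unhappy(state, i)]
        pairs = [(i, j) for i in bad for j in bad if state[i] != state[j]]
        if not pairs:
            return state        # a limit state: nobody wants to move
        i, j = random.choice(pairs)
        state[i], state[j] = state[j], state[i]
    return state

random.seed(3)
print(run_to_limit([random.randint(0, 1) for _ in range(24)]))
# Different initial conditions settle into different limit states; the
# entropy defined in the paper measures how fast their number grows.
```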

  14. Computer Modelling of 3D Geological Surface

    CERN Document Server

    Kodge, B G

    2011-01-01

    Geological surveying presently uses methods and tools for computer modeling of 3D structures of the geographical subsurface and for geotechnical characterization, as well as geoinformation systems for the management, analysis and cartographic presentation of spatial data. The objective of this paper is to present a 3D geological surface model of Latur district in Maharashtra state, India. The study proceeds through several processes, discussed in this paper, to generate and visualize an automated 3D geological surface model of the projected area.

  15. Computational Study of a Primitive Life Model

    Science.gov (United States)

    Andrecut, Mircea

    We present a computational study of a primitive life model. The calculation involves a discrete treatment of a partial differential equation, and some details of that problem are explained. We show that the investigated model is equivalent to a diffusively coupled logistic lattice. Bifurcation diagrams were calculated for different values of the control parameters; they show that the time dependence of the population in this model exhibits transitions between ordered and chaotic behavior. We also investigated pattern formation in this system.
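    A diffusively coupled logistic lattice of the kind referred to here can be written as x_i(t+1) = (1-ε)f(x_i(t)) + (ε/2)[f(x_{i-1}(t)) + f(x_{i+1}(t))], with f(x) = rx(1-x). The sketch below (parameter values are illustrative assumptions, not the paper's) iterates the lattice and samples the asymptotic values that would populate one column of a bifurcation diagram:

```python
import numpy as np

def step(x, r, eps):
    """One update of a diffusively coupled logistic lattice (periodic ring)."""
    f = r * x * (1.0 - x)
    return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

def attractor_samples(r, eps=0.1, n=64, burn=2000, keep=100, seed=0):
    """Discard the transient, then sample site 0 in the asymptotic regime."""
    x = np.random.default_rng(seed).uniform(0.0, 1.0, n)
    for _ in range(burn):
        x = step(x, r, eps)
    out = []
    for _ in range(keep):
        x = step(x, r, eps)
        out.append(round(float(x[0]), 6))
    return out

# Low r: a single fixed point; high r: a chaotic band of distinct values.
print(len(set(attractor_samples(2.8))), len(set(attractor_samples(3.9))))
```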

  16. Plausibility Judgments in Conceptual Change and Epistemic Cognition

    Science.gov (United States)

    Lombardi, Doug; Nussbaum, E. Michael; Sinatra, Gale M.

    2016-01-01

    Plausibility judgments rarely have been addressed empirically in conceptual change research. Recent research, however, suggests that these judgments may be pivotal to conceptual change about certain topics where a gap exists between what scientists and laypersons find plausible. Based on a philosophical and empirical foundation, this article…

  17. Source Effects and Plausibility Judgments When Reading about Climate Change

    Science.gov (United States)

    Lombardi, Doug; Seyranian, Viviane; Sinatra, Gale M.

    2014-01-01

    Gaps between what scientists and laypeople find plausible may act as a barrier to learning complex and/or controversial socioscientific concepts. For example, individuals may consider scientific explanations that human activities are causing current climate change as implausible. This plausibility judgment may be due-in part-to individuals'…

  18. Computational Modeling of Pollution Transmission in Rivers

    Science.gov (United States)

    Parsaie, Abbas; Haghiabi, Amir Hamzeh

    2017-06-01

    Modeling of river pollution contributes to better management of water quality, and this in turn improves human health. The advection-dispersion equation (ADE) is the governing equation for pollutant transmission in a river. Modeling pollution transmission involves numerically solving the ADE and estimating the longitudinal dispersion coefficient (LDC). In this paper, a novel approach is proposed for numerical modeling of pollution transmission in rivers: the finite volume method, as the numerical solver, is combined with an artificial neural network (ANN), as a soft computing technique, in one simulation. In this approach, the ANN prediction of the LDC is used as an input parameter for the numerical solution of the ADE. To validate the model's performance on a real engineering problem, pollutant transmission in the Severn River was simulated. Comparison of the final model results with measured data from the Severn River showed good performance. Predicting the LDC with the ANN model significantly improved the accuracy of the computer simulation of pollution transmission in the river.
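    The 1-D ADE is ∂C/∂t = D ∂²C/∂x² − u ∂C/∂x, with u the flow velocity and D the LDC. A minimal explicit finite-volume-style sketch (not the authors' code; the grid, velocity and the ANN-supplied D value are placeholders):

```python
import numpy as np

def ade_step(C, u, D, dx, dt):
    """One explicit step of dC/dt = D*C_xx - u*C_x (upwind advection, u > 0).
    Stable roughly when u*dt/dx <= 1 and 2*D*dt/dx**2 <= 1."""
    Cx = (C - np.roll(C, 1)) / dx                           # upwind gradient
    Cxx = (np.roll(C, -1) - 2.0 * C + np.roll(C, 1)) / dx**2
    return C + dt * (D * Cxx - u * Cx)

dx, dt = 10.0, 1.0              # m, s (illustrative)
u, D = 0.5, 8.0                 # m/s; LDC in m^2/s, e.g. supplied by an ANN
C = np.zeros(200); C[20] = 100.0           # instantaneous pollutant release
for _ in range(600):
    C = ade_step(C, u, D, dx, dt)
print(C.argmax() * dx, "m peak;", round(C.sum(), 2), "mass conserved")
```

    In the paper's scheme, the role sketched here by a constant D would be played by the reach-specific LDC predicted by the trained ANN.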

  19. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method...
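    The encoding described, each construction element as a data structure listing typed connection elements, can be sketched as follows (the class names and the stud/tube type vocabulary are hypothetical, chosen only to illustrate the idea):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Connector:
    kind: str        # predetermined connection type, e.g. "stud" or "tube"
    position: tuple  # location on the element, in element coordinates

@dataclass
class Element:
    name: str
    connectors: list = field(default_factory=list)

    def mates_with(self, other):
        """Connector pairs whose types are complementary (assumed rule)."""
        mate = {"stud": "tube", "tube": "stud"}
        return [(a, b) for a in self.connectors for b in other.connectors
                if mate.get(a.kind) == b.kind]

brick = Element("brick-2x2", [Connector("stud", (x, y, 1))
                              for x in (0, 1) for y in (0, 1)])
plate = Element("plate-2x2", [Connector("tube", (x, y, 0))
                              for x in (0, 1) for y in (0, 1)])
print(len(brick.mates_with(plate)))   # 16 candidate connections
```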

  20. Computer Modelling and Simulation for Inventory Control

    Directory of Open Access Journals (Sweden)

    G.K. Adegoke

    2012-07-01

    Full Text Available This study concerns the role of computer simulation as a device for conducting scientific experiments on inventory control. The stores function ties up a large share of the physical assets and financial resources of a manufacturing outfit, so efficient inventory control is needed: it reduces the cost of production and thereby facilitates the effective and efficient accomplishment of an organization's production objectives. Mathematical and statistical models were used to compute the Economic Order Quantity (EOQ). Test data were obtained from a manufacturing company and simulated; the generated results were used to predict a real-life situation and are presented and discussed. The language of implementation for the three models is Turbo Pascal, owing to its capability, generality and flexibility as a scientific programming language.
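    For reference, the classical EOQ balances ordering cost against holding cost and is given by Q* = sqrt(2DS/H) for annual demand D, cost per order S and holding cost per unit per year H. A tiny worked example (illustrative numbers, and Python here rather than the paper's Turbo Pascal):

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Economic Order Quantity: Q* = sqrt(2*D*S / H)."""
    return sqrt(2 * demand * order_cost / holding_cost)

D, S, H = 12_000, 50.0, 2.4       # units/year, cost/order, cost/unit/year
q = eoq(D, S, H)
print(round(q), "units per order;", round(D / q, 1), "orders per year")
# -> 707 units per order; 17.0 orders per year
```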

  1. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system. Whereas traditional research methodologies for the human cardiovascular system are challenging due to their invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point for each of the individual disciplines involved and attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  2. A computer model of auditory stream segregation.

    Science.gov (United States)

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
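    Two of the model's ingredients, leaky integration of channel excitation and winner-take-all selection of the most active channel, are easy to state computationally. The sketch below caricatures only those two principles; the filter bank is omitted, and the decay constant and noise level are placeholders, not the published parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, decay, noise = 30, 0.9, 0.05

def present_tone(excitation, channel, steps=50):
    """Drive one channel; excitation leaks, accumulates, and is noisy.
    Returns the updated excitation and the attended (most active) channel."""
    for _ in range(steps):
        drive = np.zeros(n_channels)
        drive[channel] = 1.0
        excitation = (decay * excitation + drive
                      + noise * rng.standard_normal(n_channels))
    return excitation, int(np.argmax(excitation))

e = np.zeros(n_channels)
winners = []
for ch in (5, 20, 5, 20):        # alternating A-B tone sequence
    e, w = present_tone(e, ch)
    winners.append(w)
print(winners)
# With faster alternation and more noise, the winner can fail to follow
# the tones -- the kind of capture effect that underlies streaming.
```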

  3. A Neural Computational Model of Incentive Salience

    OpenAIRE

    Jun Zhang; Berridge, Kent C; Amy J Tindell; Kyle S Smith; J Wayne Aldridge

    2009-01-01

    Incentive salience is a motivational property with ‘magnet-like’ qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of ‘wanting’ and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire ca...

  4. AMAR: A Computational Model of Autosegmental Phonology

    Science.gov (United States)

    1993-10-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, AI-TR 1450, 545 Technology Square, Cambridge, Massachusetts 02139. [The record's abstract text is OCR-garbled; recoverable fragments include two references (Proceedings of the 8th International Joint Conference on Artificial Intelligence, 683-5; Koskenniemi, K. 1984, A general computational model for word-form recognition) and an introductory remark that, to give the reader a feel for the workings of AMAR, the chapter begins with a very simple example based on an artificial tone language.]

  5. Computational Biology: Modeling Chronic Renal Allograft Injury.

    Science.gov (United States)

    Stegall, Mark D; Borrows, Richard

    2015-01-01

    New approaches are needed to develop more effective interventions to prevent long-term rejection of organ allografts. Computational biology provides a powerful tool to assess the large amount of complex data that is generated in longitudinal studies in this area. This manuscript outlines how our two groups are using mathematical modeling to analyze predictors of graft loss using both clinical and experimental data and how we plan to expand this approach to investigate specific mechanisms of chronic renal allograft injury.

  6. Computational fluid dynamics modelling in cardiovascular medicine.

    Science.gov (United States)

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.

  7. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti

  8. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct-access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of the responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 that of the reference model for each response of interest, far less than the time required to determine these derivatives by parameter perturbations. The automation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming, and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
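    The core idea, recording the partial derivative of every elementary operation and then propagating total derivatives backwards through an adjoint pass, is what is now called reverse-mode automatic differentiation. A minimal Python sketch of that idea (a generic illustration, not ADGEN's FORTRAN machinery):

```python
class Var:
    """Scalar value plus (parent, local partial derivative) pairs."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Accumulate adjoints: d(output)/d(this node), summed over paths."""
        self.grad += seed
        for parent, partial in self.parents:
            parent.backward(seed * partial)

x, y = Var(3.0), Var(4.0)
f = x * y + x                # model response f = x*y + x
f.backward()
print(x.grad, y.grad)        # df/dx = y + 1 = 5.0, df/dy = x = 3.0
```

    The payoff is the one the abstract quantifies: one adjoint pass yields derivatives with respect to all parameters at once, instead of one perturbed model run per parameter.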

  9. Computational acoustic modeling of cetacean vocalizations

    Science.gov (United States)

    Gurevich, Michael Dixon

    A framework for computational acoustic modeling of hypothetical vocal production mechanisms in cetaceans is presented. As a specific example, a model of a proposed source in the larynx of odontocetes is developed. Whales and dolphins generate a broad range of vocal sounds, but the exact mechanisms they use are not conclusively understood. In the fifty years since it has become widely accepted that whales can and do make sound, how they do so has remained particularly confounding. Cetaceans' highly divergent respiratory anatomy, along with the difficulty of internal observation during vocalization have contributed to this uncertainty. A variety of acoustical, morphological, ethological and physiological evidence has led to conflicting and often disputed theories of the locations and mechanisms of cetaceans' sound sources. Computational acoustic modeling has been used to create real-time parametric models of musical instruments and the human voice. These techniques can be applied to cetacean vocalizations to help better understand the nature and function of these sounds. Extensive studies of odontocete laryngeal morphology have revealed vocal folds that are consistently similar to a known but poorly understood acoustic source, the ribbon reed. A parametric computational model of the ribbon reed is developed, based on simplified geometrical, mechanical and fluid models drawn from the human voice literature. The physical parameters of the ribbon reed model are then adapted to those of the odontocete larynx. With reasonable estimates of real physical parameters, both the ribbon reed and odontocete larynx models produce sounds that are perceptually similar to their real-world counterparts, and both respond realistically under varying control conditions. Comparisons of acoustic features of the real-world and synthetic systems show a number of consistencies. While this does not on its own prove that either model is conclusively an accurate description of the source, it

  10. From bone to plausible bipedal locomotion. Part II: Complete motion synthesis for bipedal primates.

    Science.gov (United States)

    Nicolas, Guillaume; Multon, Franck; Berillon, Gilles

    2009-05-29

    This paper addresses the problem of synthesizing plausible bipedal locomotion according to 3D anatomical reconstruction and general hypotheses on human motion control strategies. In a previous paper [Nicolas, G., Multon, F., Berillon, G., Marchal, F., 2007. From bone to plausible bipedal locomotion using inverse kinematics. Journal of Biomechanics 40 (5) 1048-1057], we validated a method based on inverse kinematics that obtains plausible lower-limb motions given the trajectory of the ankle. In this paper, we propose a more general approach that also involves computing a plausible trajectory of the ankles for a given skeleton. The inputs are the anatomical description of the bipedal species, imposed footprints and a rest posture. The process is based on optimizing a reference ankle trajectory until a set of criteria is minimized. This optimization loop rests on the assumption that a plausible motion should involve little internal mechanical work and should be as smooth (minimally jerky) as possible. For each tested ankle trajectory, inverse kinematics is used to compute a lower-body motion, from which the resulting mechanical work and jerk are computed. The method was tested on a set of modern humans (male and female, with various anthropometric properties). We show that the results obtained with this method are close to experimental data for most of the subjects. We also demonstrate that the method is not sensitive to the choice of the reference ankle trajectory; any ankle trajectory leads to very similar results. We finally apply the method to a skeleton of Pan paniscus (Bonobo), and compare the resulting motion to those described by zoologists.
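    Both criteria can be evaluated directly from a sampled trajectory: a jerk cost as the summed squared third derivative of position, and a work term from energy changes along the motion. The sketch below is a loose stand-in under simple assumptions (uniform sampling, and a point-mass work proxy rather than the paper's internal mechanical work):

```python
import numpy as np

def jerk_cost(pos, dt):
    """Sum of squared third finite differences -> smoothness penalty."""
    jerk = np.diff(pos, n=3, axis=0) / dt**3
    return float((jerk ** 2).sum() * dt)

def work_proxy(pos, dt, mass=1.0):
    """Positive kinetic-energy increments of a point mass -- a crude
    stand-in for internal mechanical work."""
    v = np.diff(pos, axis=0) / dt
    kinetic = 0.5 * mass * (v ** 2).sum(axis=1)
    return float(np.clip(np.diff(kinetic), 0.0, None).sum())

dt = 0.01
t = np.linspace(0.0, 1.0, 101)[:, None]
smooth = np.hstack([t, np.sin(np.pi * t)])      # candidate ankle path (x, z)
noisy = smooth + 0.005 * np.random.default_rng(0).standard_normal(smooth.shape)
print(jerk_cost(smooth, dt) < jerk_cost(noisy, dt),
      work_proxy(smooth, dt) < work_proxy(noisy, dt))
# The noisy candidate scores worse on both criteria.
```

    An optimization loop over candidate ankle trajectories would then score each candidate with a weighted sum of such terms.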

  11. Computational Design Modelling : Proceedings of the Design Modelling Symposium

    CERN Document Server

    Kilian, Axel; Palz, Norbert; Scheurer, Fabian

    2012-01-01

    This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts examine the computational processes within this field, working to develop a broader and less exotic building practice that bears more subtle but powerful traces of the complex tool set and approaches developed and studied over recent years. The outcome is a set of new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by responsibility both towards processes and towards the consequences they initiate.

  12. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    Full Text Available The development of selective agonists of the δ-opioid receptor, as well as models of ligand interaction with this receptor, is a subject of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, from recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group, and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations for the tested models of DOR were found between the efficacy (erel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from the docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER, and good values from PROCHECK (92.6% of residues in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with the erel of a series of enkephalin analogues calculated from in vitro experiments. This investigation thus allows us to suggest a reliable model of DOR. The newly generated model of the DOR receptor can be used for further in silico experiments, enabling faster and more accurate design of selective and effective ligands for the δ-opioid receptor.

  13. Interlanguages and synchronic models of computation

    CERN Document Server

    Berka, Alexander Victor

    2010-01-01

    A novel language system has given rise to promising alternatives to standard formal and processor network models of computation. An interstring, linked with an abstract machine environment, shares sub-expressions, transfers data, and spatially allocates resources for the parallel evaluation of dataflow. Formal models called the a-Ram family are introduced, designed to support interstring programming languages (interlanguages). Distinct from dataflow, graph rewriting, and FPGA models, a-Ram instructions are bit level and execute in situ. They support sequential and parallel languages without the space/time overheads associated with the Turing Machine and λ-calculus, enabling massive programs to be simulated. The devices of one a-Ram model, called the Synchronic A-Ram, are fully connected and simpler than FPGA LUTs. A compiler for an interlanguage called Space has been developed for the Synchronic A-Ram. Space is MIMD, strictly typed, and deterministic. Barring memory allocation and compilation, modules are ref...

  14. A Neural Computational Model of Incentive Salience

    Science.gov (United States)

    Zhang, Jun; Berridge, Kent C.; Tindell, Amy J.; Smith, Kyle S.; Aldridge, J. Wayne

    2009-01-01

    Incentive salience is a motivational property with ‘magnet-like’ qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of ‘wanting’ and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered ‘wanting’ only by

  15. A neural computational model of incentive salience.

    Science.gov (United States)

    Zhang, Jun; Berridge, Kent C; Tindell, Amy J; Smith, Kyle S; Aldridge, J Wayne

    2009-07-01

    Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by incorporating

  16. A neural computational model of incentive salience.

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2009-07-01

    Full Text Available Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues, incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue occurs during certain states, without necessarily requiring (relearning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization. Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by
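    The proposal at the heart of all three versions of this record, that cue-triggered 'wanting' is computed on the fly from a learned cached value and the current physiological state, with no relearning needed, can be caricatured in a few lines. The multiplicative form and the κ values below are illustrative assumptions, not the authors' exact equations:

```python
def incentive_salience(cached_value, kappa):
    """Cue-triggered 'wanting': a learned cache modulated on the fly by
    physiological state kappa (> 1 amplifies, < 1 suppresses)."""
    return cached_value * kappa

salt_cue = 0.2   # weak value cached for a salt cue in a normal state
for state, kappa in [("normal state", 1.0),
                     ("novel salt appetite", 8.0),   # never relearned
                     ("drug sensitization", 3.0)]:
    print(f"{state:20s} wanting = {incentive_salience(salt_cue, kappa):.1f}")
```

    The point the empirical data make is that κ can change a cue's pull immediately, in a state never experienced before, which a pure cached-value learner cannot do.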

  17. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  18. DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    Javed Ali

    2012-04-01

    Full Text Available Parallel computing systems employ task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing units as computing resources, which gives them strong inter-process communication capabilities. The main part of the proposed algorithm is a resource management unit which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network, focusing on real-time and non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of the task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling and communication. Some important models for task partitioning are also discussed in the paper. We target a task partitioning algorithm which improves inter-process communication between the tasks and uses the resources of the system in an efficient manner. The proposed algorithm minimizes the inter-process communication cost amongst the executing processes.

  19. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex have presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  20. Computer model of tetrahedral amorphous diamond

    Science.gov (United States)

    Djordjević, B. R.; Thorpe, M. F.; Wooten, F.

    1995-08-01

    We computer-generate a model of amorphous diamond using the Wooten-Weaire method, with fourfold coordination everywhere. We investigate two models, one in which four-membered rings are allowed and another in which four-membered rings are forbidden, each consisting of 4096 atoms. Starting from the perfect diamond crystalline structure, we first randomize the structure by introducing disorder through random bond switches at a sufficiently high temperature. Subsequently, the temperature is reduced in stages, and the topological and geometrical relaxation of the structure takes place using the Keating potential. After a long annealing process, a random network of comparatively low energy is obtained. We calculate the pair distribution function, mean bond angle, rms angular deviation, rms bond length, rms bond-length deviation, and ring statistics for the final relaxed structures. We minimize the total strain energy by adjusting the density of the sample. We compare our results with similar computer-generated models for amorphous silicon, and with experimental measurements of the structure factor for (predominantly tetrahedral) amorphous carbon.

  1. Computer Generated Cardiac Model For Nuclear Medicine

    Science.gov (United States)

    Hills, John F.; Miller, Tom R.

    1981-07-01

    A computer generated mathematical model of a thallium-201 myocardial image is described which is based on realistic geometric and physiological assumptions. The left ventricle is represented by an ellipsoid truncated by aortic and mitral valve planes. Initially, an image of a motionless left ventricle is calculated with the location, size, and relative activity of perfusion defects selected by the designer. The calculation includes corrections for photon attenuation by overlying structures and the relative distribution of activity within the tissues. Motion of the ventricular walls is simulated either by a weighted sum of images at different stages in the cardiac cycle or by a blurring function whose width varies with position. Camera and collimator blurring are estimated by the MTF of the system measured at a representative depth in a phantom. Statistical noise is added using a Poisson random number generator. The usefulness of this model is due to two factors: the a priori characterization of location and extent of perfusion defects and the strong visual similarity of the images to actual clinical studies. These properties should permit systematic evaluation of image processing algorithms using this model. The principles employed in developing this cardiac image model can readily be applied to the simulation of other nuclear medicine studies and to other medical imaging modalities including computed tomography, ultrasound, and digital radiography.

  2. COMMON PHASES OF COMPUTER FORENSICS INVESTIGATION MODELS

    Directory of Open Access Journals (Sweden)

    Yunus Yusoff

    2011-06-01

    Full Text Available The increasing criminal activities using digital information as the means or target warrant a structured manner of dealing with them. Since 1984, when a formalized process was first introduced, a great number of new and improved computer forensic investigation processes have been developed. In this paper, we review a few selected investigation processes that have been produced throughout the years and identify the commonly shared processes. Hopefully, the identification of the commonly shared processes will make it easier for new users to understand the processes and will also serve as the basic underlying concept for the development of new sets of processes. Based on the commonly shared processes, we propose a generic computer forensics investigation model, known as GCFIM.

  3. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  4. Computational modeling of a forward lunge

    DEFF Research Database (Denmark)

    Eriksen, Tine Alkjær; Wieland, Maja Rose; Andersen, Michael Skipper

    2012-01-01

    during forward lunging. Thus, the purpose of the present study was to establish a musculoskeletal model of the forward lunge to computationally investigate the complete mechanical force equilibrium of the tibia during the movement to examine the loading pattern of the cruciate ligaments. A healthy female...... was selected out of a group of healthy subjects, who all performed a forward lunge on a force platform, targeting a knee flexion angle of 90˚. Skin-markers were placed on anatomical landmarks on the subject and the movement was recorded by five video cameras. The three-dimensional kinematic data describing...... the forward lunge movement were extracted and used to develop a biomechanical model of the lunge movement. The model comprised two legs including femur, crus, rigid foot segments and the pelvis. Each leg had 35 independent muscle units, which were recruited according to a minimum fatigue criterion...

  5. Computer model for analyzing sodium cold traps

    Energy Technology Data Exchange (ETDEWEB)

    McPheeters, C C; Raue, D J

    1983-05-01

    A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions. The calculated pressure drop as a function of impurity mass content determines the capacity of the cold trap. The accuracy of the model was checked by comparing calculated mass distributions with experimentally determined mass distributions from literature publications and with results from our own cold trap experiments. The comparisons were excellent in all cases. A parametric study was performed to determine which design variables are most important in maximizing cold trap capacity.

  6. Plausible rice yield losses under future climate warming.

    Science.gov (United States)

    Zhao, Chuang; Piao, Shilong; Wang, Xuhui; Huang, Yao; Ciais, Philippe; Elliott, Joshua; Huang, Mengtian; Janssens, Ivan A; Li, Tao; Lian, Xu; Liu, Yongwen; Müller, Christoph; Peng, Shushi; Wang, Tao; Zeng, Zhenzhong; Peñuelas, Josep

    2016-12-19

    Rice is the staple food for more than 50% of the world's population (refs 1-3). Reliable prediction of changes in rice yield is thus central for maintaining global food security. This is an extraordinary challenge. Here, we compare the sensitivity of rice yield to temperature increase derived from field warming experiments and three modelling approaches: statistical models, local crop models and global gridded crop models. Field warming experiments produce a substantial rice yield loss under warming, with an average temperature sensitivity of -5.2 ± 1.4% K⁻¹. Local crop models give a similar sensitivity (-6.3 ± 0.4% K⁻¹), but statistical and global gridded crop models both suggest less negative impacts of warming on yields (-0.8 ± 0.3% and -2.4 ± 3.7% K⁻¹, respectively). Using data from field warming experiments, we further propose a conditional probability approach to constrain the large range of global gridded crop model results for the future yield changes in response to warming by the end of the century (from -1.3% to -9.3% K⁻¹). The constraint implies a more negative response to warming (-8.3 ± 1.4% K⁻¹) and reduces the spread of the model ensemble by 33%. This yield reduction exceeds that estimated by the International Food Policy Research Institute assessment (-4.2 to -6.4% K⁻¹) (ref. 4). Our study suggests that without CO₂ fertilization, effective adaptation and genetic improvement, severe rice yield losses are plausible under intensive climate warming scenarios.
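    The conditional-probability constraint can be pictured as weighting each gridded crop model's sensitivity by its likelihood under the field-experiment distribution, here taken as Gaussian with mean -5.2 and SD 1.4 % K⁻¹. The ensemble values and the Gaussian weighting in this sketch are illustrative assumptions, not the study's data or exact method:

```python
import math

obs_mu, obs_sd = -5.2, 1.4              # field warming experiments, % K^-1
ggcm = [-1.3, -3.0, -5.5, -7.8, -9.3]   # hypothetical model sensitivities

def weight(s):                          # Gaussian likelihood of sensitivity s
    return math.exp(-0.5 * ((s - obs_mu) / obs_sd) ** 2)

w = [weight(s) for s in ggcm]
mean = sum(wi * si for wi, si in zip(w, ggcm)) / sum(w)
var = sum(wi * (si - mean) ** 2 for wi, si in zip(w, ggcm)) / sum(w)
print(f"constrained sensitivity: {mean:.1f} +/- {var ** 0.5:.1f} % per K")
# The weighting pulls the ensemble toward the field-experiment value and
# shrinks its spread, the qualitative effect the study reports.
```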

  7. A Graph Model for Imperative Computation

    CERN Document Server

    McCusker, Guy

    2009-01-01

    Scott's graph model is a lambda-algebra based on the observation that continuous endofunctions on the lattice of sets of natural numbers can be represented via their graphs. A graph is a relation mapping finite sets of input values to output values. We consider a similar model based on relations whose input values are finite sequences rather than sets. This alteration means that we are taking into account the order in which observations are made. This new notion of graph gives rise to a model of affine lambda-calculus that admits an interpretation of imperative constructs including variable assignment, dereferencing and allocation. Extending this untyped model, we construct a category that provides a model of typed higher-order imperative computation with an affine type system. An appropriate language of this kind is Reynolds's Syntactic Control of Interference. Our model turns out to be fully abstract for this language. At a concrete level, it is the same as Reddy's object spaces model, which was the first "...

  8. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, and can model salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time, significant advancements have taken place in the requirements for safety assessment as well as in computer hardware. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision, but offers a potential speed-up of several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/, whose development began in the early nineteen-nineties. UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the

  9. Computational modeling of Li-ion batteries

    Science.gov (United States)

    Grazioli, D.; Magri, M.; Salvadori, A.

    2016-08-01

    This review focuses on energy storage materials modeling, with particular emphasis on Li-ion batteries. Theoretical and computational analyses not only provide a better understanding of the intimate behavior of actual batteries under operational and extreme conditions, but they may tailor new materials and shape new architectures in a complementary way to experimental approaches. Modeling can therefore play a very valuable role in the design and lifetime prediction of energy storage materials and devices. Batteries are inherently multi-scale, in space and time. The macro-structural characteristic lengths (the thickness of a single cell, for instance) are order of magnitudes larger than the particles that form the microstructure of the porous electrodes, which in turn are scale-separated from interface layers at which atomistic intercalations occur. Multi-physics modeling concepts, methodologies, and simulations at different scales, as well as scale transition strategies proposed in the recent literature are here revised. Finally, computational challenges toward the next generation of Li-ion batteries are discussed.

  10. Computational modeling of Li-ion batteries

    Science.gov (United States)

    Grazioli, D.; Magri, M.; Salvadori, A.

    2016-12-01

    This review focuses on energy storage materials modeling, with particular emphasis on Li-ion batteries. Theoretical and computational analyses not only provide a better understanding of the intimate behavior of actual batteries under operational and extreme conditions, but they may tailor new materials and shape new architectures in a complementary way to experimental approaches. Modeling can therefore play a very valuable role in the design and lifetime prediction of energy storage materials and devices. Batteries are inherently multi-scale, in space and time. The macro-structural characteristic lengths (the thickness of a single cell, for instance) are order of magnitudes larger than the particles that form the microstructure of the porous electrodes, which in turn are scale-separated from interface layers at which atomistic intercalations occur. Multi-physics modeling concepts, methodologies, and simulations at different scales, as well as scale transition strategies proposed in the recent literature are here revised. Finally, computational challenges toward the next generation of Li-ion batteries are discussed.

  11. Modeling Reality - How Computers Mirror Life

    Science.gov (United States)

    Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona

    2005-01-01

    The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems, but who does not possess specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well known simple concepts: cellular automata via the Game of Life, Shannon's formula via the game of twenty questions, game theory via a television quiz, and so on. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning even the more complex topics an enjoyable pleasure.

  12. COMPUTER MODELING OF EMBRYONIC MORTALITY AT CRYOCONSERVATION

    Directory of Open Access Journals (Sweden)

    Gorbunov,

    2016-08-01

    Full Text Available The purpose of the research was to determine, using the developed simulation model, how the heterogeneity of mammalian embryos and the effectiveness of the cryoconservation steps influence their viability. The model is based on analytical expressions that reflect the main causes of embryonic mortality during in vitro and in vivo cultivation, cryoconservation and embryo transplantation. The reduction of viability depends on a set of biological factors, such as the animal species, the donor and recipient state and the quality of the embryos, and on technological ones, such as the efficiency of the cryopreservation method and of the embryo transplantation. The computer experiment showed that the divergence of embryo viability due to variations of the biological parameters ranges from 0 to 100%, whereas the efficiency index of the chosen technology has an inaccuracy of about 1%. A comparative analysis of alternative technologies of embryo cryopreservation showed the maximum efficiency of the stages involving the choice of cryoprotectant, the freezing regime, and the in vitro and in vivo cultivation of the biological object. The application of computer modeling makes it possible to reduce many times over the spread of embryo-viability results obtained in different experiments, and thereby to shorten the time, reduce the monetary costs and lessen the slaughter of laboratory animals required to obtain reliable results.

  13. The semiosis of prayer and the creation of plausible fictional worlds

    Directory of Open Access Journals (Sweden)

    J. Peter Södergård

    1999-01-01

    Full Text Available Prayer and incantation can perhaps be said to be 'mechanisms' that promise that lack will be liquidated and that there is an unlimited signator, a father, or some other metaphysical creature, standing behind and legitimizing the discourse. They offer a way of communicating with the Unlimited that is privileged by an interpretive community that read the prayers aloud and enacted the magical stage-scripts. These highly overlapping categories function as one of the most common subforms of religious discourse for the creation, actualization and maintenance of plausible fictional worlds. They are liminal and transitional mechanisms that manipulate an empirical reader into phase-shifting from the actual world to a plausible one: by being inscribed in a possible and fictional world, a model reader is created who perceives and acts according to the plausible world outlined by a given interpretive community, and who hears god talking in voces magicae and in god-speaking silence.

  14. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    This work focuses on the development of a computer-aided modeling framework. The framework is a knowledge-based system that is built on a generic modeling language and structured on workflows for different modeling tasks. The overall objective is to support model developers and users in generating and testing models systematically, efficiently and reliably. In this way, the development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem-specific models is presented. The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling the dehydrogenation of ethylbenzene with the hydrogenation of nitrobenzene...

  15. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
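    To give a flavor of how a design question becomes a mixed integer linear program, the sketch below solves a toy resource-allocation instance with the PuLP library; the data and the tile/workload framing are made up for illustration, and the book's case studies are far richer:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

# Toy instance: assign 3 accelerator tiles to 2 workloads to maximize
# total speedup under a power budget -- a caricature of resource allocation.
tiles, loads = range(3), range(2)
speedup = {(t, w): 1.0 + 0.5 * t + 0.3 * w for t in tiles for w in loads}
power = {t: 2.0 + t for t in tiles}
budget = 6.0

prob = LpProblem("tile_allocation", LpMaximize)
x = {(t, w): LpVariable(f"x_{t}_{w}", cat="Binary")
     for t in tiles for w in loads}
prob += lpSum(speedup[t, w] * x[t, w] for t in tiles for w in loads)
prob += lpSum(power[t] * x[t, w] for t in tiles for w in loads) <= budget
for t in tiles:                    # each tile serves at most one workload
    prob += lpSum(x[t, w] for w in loads) <= 1

prob.solve()
print(sorted((t, w) for (t, w) in x if x[t, w].value() == 1))
# -> [(0, 1), (2, 1)]: the solver spends the budget on the two best tiles
```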

  16. Dynamical Models for Computer Viruses Propagation

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Full Text Available Nowadays, digital computer systems and networks are the main engineering tools, used in the planning, design, operation, and control of buildings, transportation, machinery, business, and life-sustaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, reducing the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on previous knowledge of the virus code. In spite of their good adaptation capability, these programs work just like vaccines against diseases and are not able to prevent new infections based on the network state. Here, an attempt to model the dynamics of computer virus propagation relates it to other notable events occurring in the network, permitting the establishment of preventive policies in network management. Data on three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network.
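    Of the two identification techniques named, the autoregressive one is the simpler to sketch: fit the coefficients of x_t = a_1 x_{t-1} + ... + a_p x_{t-p} by least squares on one virus's infection counts, then roll the model forward as a forecast. A minimal sketch with synthetic data (the order p and the logistic-shaped series are assumptions):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p): lagged values -> next value."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coeffs

def forecast(coeffs, history, steps):
    h = list(history)
    for _ in range(steps):
        h.append(float(np.dot(coeffs, h[-1:-len(coeffs) - 1:-1])))
    return h[len(history):]

t = np.arange(120.0)
counts = 1000.0 / (1.0 + np.exp(-(t - 60.0) / 10.0))   # epidemic-like curve
a = fit_ar(counts, p=3)
print([round(v) for v in forecast(a, counts, 5)])      # continues the rise
```

    Coefficients fitted on one virus's outbreak could then, in the spirit of the paper, be applied to the early counts of a new virus.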

  17. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.

  18. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Full Text Available Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. The computer virus model is established. Through the analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
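
    The structure of such a model can be illustrated with a generic SIR-with-turnover system: new computers join, old ones retire, and antivirus software moves infected machines to a recovered class. The equations and rates below are a textbook stand-in, not the paper's exact model; when the basic reproduction number exceeds one, the simulation settles at the endemic equilibrium.

```python
# Generic SIR-with-turnover sketch of a computer-virus model. b: join
# rate, mu: retirement rate, beta: infection rate, gamma: cleanup rate.
b, mu, beta, gamma = 0.02, 0.02, 0.5, 0.1

def step(S, I, R, dt=0.01):
    N = S + I + R
    dS = b * N - beta * S * I / N - mu * S
    dI = beta * S * I / N - (gamma + mu) * I
    dR = gamma * I - mu * R
    return S + dt * dS, I + dt * dI, R + dt * dR

S, I, R = 990.0, 10.0, 0.0
for _ in range(200_000):          # integrate to t = 2000
    S, I, R = step(S, I, R)

print(f"R0 = {beta / (gamma + mu):.2f}")   # > 1 -> endemic equilibrium
print(f"endemic infected level I* ~ {I:.1f}")
```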

  19. Plausible Explanation of Quantization of Intrinsic Redshift from Hall Effect and Weyl Quantization

    Directory of Open Access Journals (Sweden)

    Smarandache F.

    2006-10-01

    Full Text Available Using the phion condensate model as described by Moffat [1], we consider a plausible explanation of (Tifft) intrinsic redshift quantization as described by Bell [6] as a result of the Hall effect in a rotating frame. We also discuss another alternative to explain redshift quantization from the viewpoint of Weyl quantization, which could yield Bohr-Sommerfeld quantization.

  20. Computational models of intergroup competition and warfare.

    Energy Technology Data Exchange (ETDEWEB)

    Letendre, Kenneth (University of New Mexico); Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease is an important predictor of the frequency of civil conflict, and we tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  1. Computer modeling of thermoelectric generator performance

    Science.gov (United States)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operation of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data that agreed with test data to within 0.3%. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
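
    The heat-balance terms listed (Seebeck EMF, conduction, Peltier, Joule) combine in the textbook single-couple estimate sketched below; the material properties and temperatures are round illustrative numbers, not DEGRA 2 inputs.

```python
# Single thermoelectric couple under a matched load: power output and
# efficiency from the Seebeck, conduction, Peltier and Joule terms.
S = 400e-6       # Seebeck coefficient of the couple, V/K (assumed)
R_int = 0.01     # internal electrical resistance, ohm (assumed)
K = 0.05         # thermal conductance of the legs, W/K (assumed)
T_hot, T_cold = 900.0, 500.0
dT = T_hot - T_cold

emf = S * dT                       # open-circuit voltage
R_load = R_int                     # matched load maximizes power
I = emf / (R_int + R_load)
P_out = I**2 * R_load

# Heat drawn at the hot junction: Peltier + conduction - half of Joule.
Q_hot = S * T_hot * I + K * dT - 0.5 * I**2 * R_int
print(f"P = {P_out:.2f} W, efficiency = {P_out / Q_hot:.1%}")
```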

  2. A computational model of motor neuron degeneration.

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L F

    2014-08-20

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations.
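
    The depolarizing effect of an ATP shortage can be caricatured with a single-compartment leak model in which the outward Na+/K+ pump current scales with ATP availability. This is a toy illustration of the mechanism, with invented parameters, not the paper's detailed morphology-based model.

```python
# Toy membrane model: resting potential from K+ and Na+ leaks plus an
# ATP-scaled outward pump current; lowering ATP depolarizes the cell.
g_K, g_Na = 1.0, 0.05        # leak conductances (relative units)
E_K, E_Na = -90.0, 60.0      # reversal potentials, mV
I_pump_max = 10.0            # outward pump current at full ATP (a.u.)

def resting_potential(atp_fraction):
    # Steady state of C dV/dt = -g_K (V-E_K) - g_Na (V-E_Na) - I_pump
    I_pump = I_pump_max * atp_fraction
    return (g_K * E_K + g_Na * E_Na - I_pump) / (g_K + g_Na)

for atp in (1.0, 0.5, 0.2):
    print(f"ATP at {atp:.0%}: V_rest = {resting_potential(atp):.1f} mV")
```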

  3. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  4. Direct modeling for computational fluid dynamics

    Science.gov (United States)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  5. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  6. Computational modeling reveals dendritic origins of GABA(A)-mediated excitation in CA1 pyramidal neurons.

    Directory of Open Access Journals (Sweden)

    Naomi Lewin

    Full Text Available GABA is the key inhibitory neurotransmitter in the adult central nervous system, but in some circumstances it can lead to a paradoxical excitation that has been causally implicated in diverse pathologies, from endocrine stress responses to diseases of excitability including neuropathic pain and temporal lobe epilepsy. We undertook a computational modeling approach to determine plausible ionic mechanisms of GABA(A)-dependent excitation in isolated post-synaptic CA1 hippocampal neurons because it may constitute a trigger for pathological synchronous epileptiform discharge. In particular, the interplay between intracellular chloride accumulation via the GABA(A) receptor and extracellular potassium accumulation via the K/Cl co-transporter KCC2 in promoting GABA(A)-mediated excitation is complex. Experimentally it is difficult to determine the ionic mechanisms of the depolarizing current, since potassium transients are challenging to isolate pharmacologically and much GABA signaling occurs in small, difficult-to-measure dendritic compartments. To address this problem and determine plausible ionic mechanisms of GABA(A)-mediated excitation, we built a detailed biophysically realistic model of the CA1 pyramidal neuron that includes processes critical for ion homeostasis. Our results suggest that in dendritic compartments, but not in the somatic compartments, chloride buildup is sufficient to cause dramatic depolarization of the GABA(A) reversal potential and dominating bicarbonate currents that provide a substantial current source to drive whole-cell depolarization. The model simulations predict that extracellular K(+) transients can augment GABA(A)-mediated excitation, but not cause it. Our model also suggests the potential for GABA(A)-mediated excitation to promote network synchrony depending on interneuron synapse location - excitatory positive-feedback can occur when interneurons synapse onto distal dendritic compartments, while interneurons projecting to the perisomatic
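
    The chloride-accumulation effect on the GABA(A) reversal potential can be reproduced with the two-anion Goldman-Hodgkin-Katz equation; the concentrations and HCO3-/Cl- permeability ratio below are typical textbook values, assumed for illustration.

```python
# E_GABA from the two-anion GHK equation: the GABA(A) channel passes
# Cl- and HCO3-, so dendritic Cl- accumulation depolarizes E_GABA.
import math

R, T, F = 8.314, 310.0, 96485.0
p_hco3 = 0.25                                  # HCO3-/Cl- permeability
cl_out, hco3_in, hco3_out = 120.0, 16.0, 26.0  # mM (typical values)

def e_gaba(cl_in):
    # For monovalent anions, the intra/extracellular terms swap
    # relative to the cation form of the GHK voltage equation.
    num = cl_in + p_hco3 * hco3_in
    den = cl_out + p_hco3 * hco3_out
    return 1000.0 * (R * T / F) * math.log(num / den)

for cl_in in (7.0, 15.0, 30.0):
    print(f"[Cl-]i = {cl_in:4.1f} mM -> E_GABA = {e_gaba(cl_in):6.1f} mV")
```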

  7. Statistics, Computation, and Modeling in Cosmology

    Science.gov (United States)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground- and space-based missions are designed to not only detect, but map out with increasing precision, details of the universe from its infancy to the present day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group, involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems, including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and

  8. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...... that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  9. Computational Models to Synthesize Human Walking

    Institute of Scientific and Technical Information of China (English)

    Lei Ren; David Howard; Laurence Kenney

    2006-01-01

    The synthesis of human walking is of great interest in biomechanics and biomimetic engineering due to its predictive capabilities and potential applications in clinical biomechanics, rehabilitation engineering and biomimetic robotics. In this paper, the various methods that have been used to synthesize human walking are reviewed from an engineering viewpoint. This involves a wide spectrum of approaches, from simple passive walking theories to large-scale computational models integrating the nervous, muscular and skeletal systems. These methods are roughly categorized under four headings: models inspired by the concept of a CPG (Central Pattern Generator), methods based on the principles of control engineering, predictive gait simulation using optimisation, and models inspired by passive walking theory. The shortcomings and advantages of these methods are examined, and future directions are discussed in the context of providing insights into the neural control objectives driving gait and improving the stability of the predicted gaits. Future advancements are likely to be motivated by improved understanding of neural control strategies and the subtle complexities of the musculoskeletal system during human locomotion. It is only a matter of time before predictive gait models become a practical and valuable tool in clinical diagnosis, rehabilitation engineering and robotics.

  10. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling follows from current trends toward strengthening the general-education and worldview functions of computer science, which define the necessity of additional research of the…

  11. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof of concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
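
    At the core of such models is the Landau-Lifshitz-Gilbert equation. The sketch below integrates it for a single macrospin in an applied field (no exchange, anisotropy, or magnetostatic terms), showing the damped precession toward alignment; the parameters are generic, not those of the PNNL models.

```python
# Single-macrospin Landau-Lifshitz-Gilbert integration: a unit
# magnetization m precesses about the field H and relaxes onto it.
import numpy as np

gamma, alpha = 2.21e5, 0.1        # gyromagnetic ratio m/(A s), damping
H = np.array([0.0, 0.0, 8.0e4])   # applied field, A/m
m = np.array([1.0, 0.0, 0.0])     # initial magnetization direction
dt = 1e-12                        # time step, s

pref = -gamma / (1 + alpha**2)
for _ in range(200_000):
    mxH = np.cross(m, H)
    dm = pref * (mxH + alpha * np.cross(m, mxH))
    m = m + dt * dm
    m /= np.linalg.norm(m)        # renormalize |m| = 1 each step

print(f"final m = {np.round(m, 3)}")   # ~ [0, 0, 1], aligned with H
```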

  12. Final technical report for DOE Computational Nanoscience Project: Integrated Multiscale Modeling of Molecular Computing Devices

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, P. T.

    2010-02-08

    This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.

  13. Evaluation of Marine Corps Manpower Computer Simulation Model

    Science.gov (United States)

    2016-12-01

    EVALUATION OF MARINE CORPS MANPOWER COMPUTER SIMULATION MODEL, by Eric S. Anderson. Master's thesis, December 2016. Thesis Advisor: Arnold Buss; Second Reader: Neil Rowe. … overall end strength are maintained. To assist their mission, an agent-based computer simulation model was developed in the Java computer language

  14. Computational Granular Dynamics Models and Algorithms

    CERN Document Server

    Pöschel, Thorsten

    2005-01-01

    Computer simulations not only belong to the most important methods for the theoretical investigation of granular materials, but also provide the tools that have enabled much of the expanding research by physicists and engineers. The present book is intended to serve as an introduction to the application of numerical methods to systems of granular particles. Accordingly, emphasis is placed on a general understanding of the subject rather than on the presentation of the latest advances in numerical algorithms. Although a basic knowledge of C++ is needed for the understanding of the numerical methods and algorithms in the book, it avoids usage of elegant but complicated algorithms to remain accessible for those who prefer to use a different programming language. While the book focuses more on models than on the physics of granular material, many applications to real systems are presented.

  15. Modeling groundwater flow on massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.

    1994-12-31

    The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They will also demonstrate the code's scalability.

  16. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element....... The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method...... further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...

  17. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  18. Gravothermal Star Clusters - Theory and Computer Modelling

    Science.gov (United States)

    Spurzem, Rainer

    2010-11-01

    In the George Darwin lecture, delivered to the British Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote of the evolution of stellar systems that it can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk will show how this physical concept has inspired theoretical modeling of star clusters in the following decades up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable state of equilibrium of a gravitating star cluster. The trend to local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was detected later. Here the state of the art of modeling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (Fokker-Planck approximation) will be reviewed. Recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also very successfully been used to study dense star clusters around massive black holes in galactic nuclei and even (in a few cases) relativistic supermassive dense objects in centres of galaxies (here again briefly touching one of the many research fields of V.A. Ambartsumian). In the present era of high-speed supercomputing, where we are tackling direct N-body simulations of star clusters, we will show that such direct modeling supports and proves the concept of the statistical models based on the Fokker-Planck theory, and that both theoretical concepts and direct computer simulations are necessary to support each other and make scientific progress in the study of star cluster evolution.

  19. A biologically plausible learning rule for the Infomax on recurrent neural networks.

    Science.gov (United States)

    Hayakawa, Takashi; Kaneko, Takeshi; Aoyagi, Toshio

    2014-01-01

    A fundamental issue in neuroscience is to understand how neuronal circuits in the cerebral cortex play their functional roles through their characteristic firing activity. Several characteristics of spontaneous and sensory-evoked cortical activity have been reproduced by Infomax learning of neural networks in computational studies. There are, however, still few models of the underlying learning mechanisms that allow cortical circuits to maximize information and produce the characteristics of spontaneous and sensory-evoked cortical activity. In the present article, we derive a biologically plausible learning rule for the maximization of information retained through time in dynamics of simple recurrent neural networks. Applying the derived learning rule in a numerical simulation, we reproduce the characteristics of spontaneous and sensory-evoked cortical activity: cell-assembly-like repeats of precise firing sequences, neuronal avalanches, spontaneous replays of learned firing sequences and orientation selectivity observed in the primary visual cortex. We further discuss the similarity between the derived learning rule and the spike timing-dependent plasticity of cortical neurons.
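
    For comparison with the derived rule, the classical spike-timing-dependent plasticity window can be written in a few lines; the amplitudes and time constants below are typical values from the STDP literature, not parameters from this paper.

```python
# Exponential STDP window: pre-before-post pairs potentiate a synapse,
# post-before-pre pairs depress it. Parameter values are typical only.
import math

A_plus, A_minus = 0.010, 0.012      # potentiation/depression amplitude
tau_plus, tau_minus = 20.0, 20.0    # time constants, ms

def stdp_dw(dt_ms):
    """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
    if dt_ms > 0:                   # pre fired first -> potentiation
        return A_plus * math.exp(-dt_ms / tau_plus)
    return -A_minus * math.exp(dt_ms / tau_minus)   # else depression

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.4f}")
```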

  20. Analysis of Plausible Reasoning (“合情推理”)

    Institute of Scientific and Technical Information of China (English)

    连四清; 方运加

    2012-01-01

    After Pólya's model of "plausible reasoning" (合情推理) was introduced into China's mathematics curriculum standards, it became a keyword of mathematics education research in China. However, its scientific soundness still needs examination: (1) its Chinese meaning is ambiguous; (2) it does not meet the objectivity requirement of a reasoning pattern and has obvious defects; (3) overemphasizing the "plausible reasoning" model overemphasizes the distinction between inductive and deductive inference, and tends to sever the relationship between them.

  1. The neural correlates of problem states: testing FMRI predictions of a computational model of multitasking.

    Directory of Open Access Journals (Sweden)

    Jelmer P Borst

    Full Text Available BACKGROUND: It has been shown that people can only maintain one problem state, or intermediate mental representation, at a time. When more than one problem state is required, for example in multitasking, performance decreases considerably. This effect has been explained in terms of a problem state bottleneck. METHODOLOGY: In the current study we use the complementary methodologies of computational cognitive modeling and neuroimaging to investigate the neural correlates of this problem state bottleneck. In particular, an existing computational cognitive model was used to generate a priori fMRI predictions for a multitasking experiment in which the problem state bottleneck plays a major role. Hemodynamic responses were predicted for five brain regions, corresponding to five cognitive resources in the model. Most importantly, we predicted the intraparietal sulcus to show a strong effect of the problem state manipulations. CONCLUSIONS: Some of the predictions were confirmed by a subsequent fMRI experiment, while others were not matched by the data. The experiment supported the hypothesis that the problem state bottleneck is a plausible cause of the interference in the experiment and that it could be located in the intraparietal sulcus.

  2. A Granular Computing Model Based on Tolerance relation

    Institute of Scientific and Technical Information of China (English)

    WANG Guo-yin; HU Feng; HUANG Hai; WU Yu

    2005-01-01

    Granular computing is a new intelligent computing theory based on the partitioning of problem concepts. Processing incomplete information systems directly is an important problem in rough set theory. In this paper, a granular computing model based on a tolerance relation for processing incomplete information systems is developed. Furthermore, a criterion for attribute necessity is proposed within this model.

  3. Application of computer simulated persons in indoor environmental modeling

    DEFF Research Database (Denmark)

    Topp, C.; Nielsen, P. V.; Sørensen, Dan Nørtoft

    2002-01-01

    Computer simulated persons are often applied when the indoor environment is modeled by computational fluid dynamics. The computer simulated persons differ in size, shape, and level of geometrical complexity, ranging from simple box or cylinder shaped heat sources to more humanlike models. Little...

  4. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  5. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  6. Evaluation of the efficacy and safety of rivaroxaban using a computer model for blood coagulation.

    Directory of Open Access Journals (Sweden)

    Rolf Burghaus

    Full Text Available Rivaroxaban is an oral, direct Factor Xa inhibitor approved in the European Union and several other countries for the prevention of venous thromboembolism in adult patients undergoing elective hip or knee replacement surgery, and is in advanced clinical development for the treatment of thromboembolic disorders. Its mechanism of action is antithrombin-independent and differs from that of other anticoagulants, such as warfarin (a vitamin K antagonist), enoxaparin (an indirect thrombin/Factor Xa inhibitor) and dabigatran (a direct thrombin inhibitor). A blood coagulation computer model has been developed, based on several published models and preclinical and clinical data. Unlike previous models, the current model takes into account both the intrinsic and extrinsic pathways of the coagulation cascade, and possesses some unique features, including a blood flow component and a portfolio of drug action mechanisms. This study aimed to use the model to compare the mechanism of action of rivaroxaban with that of warfarin, and to evaluate the efficacy and safety of different rivaroxaban doses with other anticoagulants included in the model. Rather than reproducing known standard clinical measurements, such as the prothrombin time and activated partial thromboplastin time clotting tests, the anticoagulant benchmarking was based on a simulation of physiologically plausible clotting scenarios. Compared with warfarin, rivaroxaban showed a favourable sensitivity for tissue factor concentration inducing clotting, and a steep concentration-effect relationship, rapidly flattening towards higher inhibitor concentrations, both suggesting a broad therapeutic window. The predicted dosing window is highly accordant with the final dose recommendation based upon extensive clinical studies.

  7. A Packet Routing Model for Computer Networks

    Directory of Open Access Journals (Sweden)

    O. Osunade

    2012-05-01

    Full Text Available The quest for reliable data transmission in today's computer networks and internetworks forms the basis on which routing schemes need to be improved. The persistent increase in the size of internetworks leads to dwindling performance of the present routing algorithms, which are meant to provide optimal paths for forwarding packets from one network to another. A mathematical and analytical routing model framework is proposed to address the routing needs to a substantial extent. The model provides schemes typical of packet sources, a queuing system within a buffer, link and bandwidth allocation, and a time-based bandwidth generator for routing chunks of packets to their destinations. Principal to the choice of link are such design considerations as the least-congested link in a set of links, normalized throughput, mean delay, mean waiting time, and the priority of packets in a set of prioritized packets. These performance metrics were targeted, and the resultant outcome is a fair, load-balanced network.
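
    The listed metrics have simple closed forms under an M/M/1 queueing assumption, which makes the least-congested-link choice easy to illustrate. The arrival and service rates below are assumptions; the paper's own queueing formulation may differ.

```python
# M/M/1 link metrics: utilization, mean delay and mean waiting time,
# then route to the least-congested link. Rates are illustrative.
links = {"link_a": (80.0, 100.0),   # (arrivals/s, services/s)
         "link_b": (40.0, 100.0),
         "link_c": (95.0, 100.0)}

def mm1_metrics(lam, mu):
    rho = lam / mu                  # utilization, must be < 1
    delay = 1.0 / (mu - lam)        # mean time in system, s
    wait = rho / (mu - lam)         # mean time in queue, s
    return rho, delay, wait

best = min(links, key=lambda k: mm1_metrics(*links[k])[0])
for name, (lam, mu) in links.items():
    rho, d, w = mm1_metrics(lam, mu)
    print(f"{name}: rho={rho:.2f}, delay={d*1e3:.1f} ms, "
          f"wait={w*1e3:.1f} ms")
print("least-congested choice:", best)
```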

  8. Computational modeling of acute myocardial infarction.

    Science.gov (United States)

    Sáez, P; Kuhl, E

    2016-01-01

    Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocyte death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step toward simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size.

  9. Computer modeling of complete IC fabrication process

    Science.gov (United States)

    Dutton, Robert W.

    1987-05-01

    The development of fundamental algorithms for process and device modeling as well as novel integration of the tools for advanced Integrated Circuit (IC) technology design is discussed. The development of the first complete 2D process simulator, SUPREM 4, is reported. The algorithms are discussed as well as applications to local-oxidation and extrinsic diffusion conditions which occur in CMOS and BiCMOS technologies. The evolution of 1D (SEDAN) and 2D (PISCES) device analysis is discussed. The application of SEDAN to a variety of non-silicon technologies (GaAs and HgCdTe) is considered. A new multi-window analysis capability for PISCES which exploits Monte Carlo analysis of hot carriers has been demonstrated and used to characterize a variety of silicon MOSFET and GaAs MESFET effects. A parallel computer implementation of PISCES has been achieved using a Hypercube architecture. The PISCES program has been used for a range of important device studies including latchup, analog switch analysis, MOSFET capacitance studies, and transient bipolar devices for ECL gates. The program is broadly applicable to RAM and BiCMOS technology analysis and design. In the analog switch technology area, this research effort has produced a variety of important modeling advances.

  10. A Survey of Formal Models for Computer Security.

    Science.gov (United States)

    1981-09-30

    presenting the individual models. The finite state machine model for computation views a computer system as a finite...top-level specification. The simplest description of the top-level model for DSU is given by Walker, et al. [36]. It is a finite state machine model, with

  11. The Plausibility of a String Quartet Performance in Virtual Reality.

    Science.gov (United States)

    Bergstrom, Ilias; Azevedo, Sergio; Papiotis, Panos; Saldanha, Nuno; Slater, Mel

    2017-04-01

    We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, or the musicians sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, or birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at their highest setting. Then, five times, participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
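
    The transition-matrix analysis reduces to counting observed feature-improvement transitions and row-normalizing. The sketch below shows the construction; the short transition log is invented for illustration and is not the experiment's data.

```python
# Build a row-normalized Markov transition matrix from a log of
# (from_feature, to_feature) improvement transitions. Log is invented.
import numpy as np

states = ["Gaze", "Environment", "Auralization", "Spatialization"]
idx = {s: i for i, s in enumerate(states)}

log = [("Gaze", "Environment"), ("Environment", "Gaze"),
       ("Environment", "Auralization"), ("Gaze", "Environment"),
       ("Auralization", "Spatialization"), ("Environment", "Auralization")]

counts = np.zeros((len(states), len(states)))
for a, b in log:
    counts[idx[a], idx[b]] += 1

row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts),
              where=row_sums > 0)  # rows with no transitions stay zero
print(np.round(P, 2))
```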

  12. Computational exploration of metaphor comprehension processes using a semantic space model.

    Science.gov (United States)

    Utsumi, Akira

    2011-03-01

    Recent metaphor research has revealed that metaphor comprehension involves both categorization and comparison processes. This finding has triggered the following central question: Which property determines the choice between these two processes for metaphor comprehension? Three competing views have been proposed to answer this question: the conventionality view (Bowdle & Gentner, 2005), aptness view (Glucksberg & Haught, 2006b), and interpretive diversity view (Utsumi, 2007); these views, respectively, argue that vehicle conventionality, metaphor aptness, and interpretive diversity determine the choice between the categorization and comparison processes. This article attempts to answer the question regarding which views are plausible by using cognitive modeling and computer simulation based on a semantic space model. In the simulation experiment, categorization and comparison processes are modeled in a semantic space constructed by latent semantic analysis. These two models receive word vectors for the constituent words of a metaphor and compute a vector for the metaphorical meaning. The resulting vectors can be evaluated according to the degree to which they mimic the human interpretation of the same metaphor; the maximum likelihood estimation determines which of the two models better explains the human interpretation. The result of the model selection is then predicted by three metaphor properties (i.e., vehicle conventionality, aptness, and interpretive diversity) to test the three views. The simulation experiment for Japanese metaphors demonstrates that both interpretive diversity and vehicle conventionality affect the choice between the two processes. On the other hand, it is found that metaphor aptness does not affect this choice. This result can be treated as computational evidence supporting the interpretive diversity and conventionality views.
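
    The model-selection logic can be sketched independently of the actual LSA space: each composition model maps the topic and vehicle vectors to a predicted meaning vector, and the model whose prediction lies closest to the human interpretation vector is selected. The random vectors and the two toy composition rules below are placeholders, not the paper's models.

```python
# Skeleton of the vector-composition comparison: score two candidate
# composition models against a "human interpretation" vector by cosine
# similarity. All vectors here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
dim = 300
topic, vehicle = rng.normal(size=dim), rng.normal(size=dim)
human = 0.7 * vehicle + 0.3 * topic + rng.normal(0, 0.1, dim)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

candidates = {
    "categorization": 0.8 * vehicle + 0.2 * topic,  # vehicle-dominated
    "comparison": 0.5 * topic + 0.5 * vehicle,      # balanced blend
}
scores = {name: cosine(v, human) for name, v in candidates.items()}
print(max(scores, key=scores.get),
      {k: round(s, 3) for k, s in scores.items()})
```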

  13. A semantic-web approach for modeling computing infrastructures

    NARCIS (Netherlands)

    M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat

    2013-01-01

    This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology-independent descriptions of computing infrastructures, including the physical resources as w

  14. Computational and Modeling Strategies for Cell Motility

    Science.gov (United States)

    Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory

    A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback

  15. Computational Models of Spreadsheet Development: Basis for Educational Approaches

    CERN Document Server

    Hodnigg, Karin; Mittermeir, Roland T

    2008-01-01

    Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues implied by moving cells, copy-paste operations, or recursion.

  16. Elements of matrix modeling and computing with Matlab

    CERN Document Server

    White, Robert E

    2006-01-01

    As discrete models and computing have become more common, there is a need to study matrix computation and numerical linear algebra. Encompassing a diverse mathematical core, Elements of Matrix Modeling and Computing with MATLAB examines a variety of applications and their modeling processes, showing you how to develop matrix models and solve algebraic systems. Emphasizing practical skills, it creates a bridge from problems with two and three variables to more realistic problems that have additional variables. Elements of Matrix Modeling and Computing with MATLAB focuses on seven basic applicat

  17. Flux-based transport enhancement as a plausible unifying mechanism for auxin transport in meristem development.

    Directory of Open Access Journals (Sweden)

    Szymon Stoma

    2008-10-01

    Full Text Available Plants continuously generate new organs through the activity of populations of stem cells called meristems. The shoot apical meristem initiates leaves, flowers, and lateral meristems in highly ordered, spiralled, or whorled patterns via a process called phyllotaxis. It is commonly accepted that the active transport of the plant hormone auxin plays a major role in this process. Current hypotheses propose that cellular hormone transporters of the PIN family would create local auxin maxima at precise positions, which in turn would lead to organ initiation. To explain how auxin transporters could create hormone fluxes to distinct regions within the plant, different concepts have been proposed. A major hypothesis, canalization, proposes that the auxin transporters act by amplifying and stabilizing existing fluxes, which could be initiated, for example, by local diffusion. This convincingly explains the organised auxin fluxes during vein formation, but for the shoot apical meristem a second hypothesis was proposed, where the hormone would be systematically transported towards the areas with the highest concentrations. This implies the coexistence of two radically different mechanisms for PIN allocation in the membrane, one based on flux sensing and the other on local concentration sensing. Because these patterning processes require the interaction of hundreds of cells, it is impossible to estimate on a purely intuitive basis if a particular scenario is plausible or not. Therefore, computational modelling provides a powerful means to test this type of complex hypothesis. Here, using a dedicated computer simulation tool, we show that a flux-based polarization hypothesis is able to explain auxin transport at the shoot meristem as well, thus providing a unifying concept for the control of auxin distribution in the plant. Further experiments are now required to distinguish between flux-based polarization and other hypotheses.
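
    The essence of flux-based polarization can be captured in a one-dimensional toy simulation in the spirit of Mitchison's canalization models: carriers are inserted in proportion to the square of the auxin flux they transmit, so an initial flux from a source to a sink is progressively amplified. All parameters and the specific update rule below are illustrative assumptions, not the authors' dedicated simulation tool.

```python
# 1D flux-based canalization sketch: carrier strength P on each cell
# interface grows with the squared auxin flux through it and decays.
import numpy as np

n, dt = 12, 0.01
a = np.zeros(n)            # auxin concentration per cell
P = np.full(n - 1, 0.1)    # carrier strength per interface
D, k, decay = 0.1, 0.5, 0.1

for _ in range(20_000):
    flux = P * a[:-1] + D * (a[:-1] - a[1:])   # active + diffusive
    da = np.zeros(n)
    da[0] += 1.0                  # auxin production in the source cell
    da[:-1] -= flux
    da[1:] += flux
    da[-1] -= 5.0 * a[-1]         # removal at the sink cell
    a += dt * da
    P += dt * (k * np.clip(flux, 0.0, None) ** 2 - decay * P)

print(np.round(P, 2))   # carriers strengthen along the source-sink path
```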

  18. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  20. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  1. Experiments and simulation models of a basic computation element of an autonomous molecular computing system.

    Science.gov (United States)

    Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira

    2008-10-01

    Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.

  2. Infinite Time Cellular Automata: A Real Computation Model

    CERN Document Server

    Givors, Fabien; Ollinger, Nicolas

    2010-01-01

    We define a new transfinite time model of computation, infinite time cellular automata. The model is shown to be as powerful as infinite time Turing machines, on both finite and infinite inputs, thus inheriting many of their properties. We then show how to simulate the canonical real computation model, BSS machines, with infinite time cellular automata in exactly ω steps.

  3. Learning Anatomy: Do New Computer Models Improve Spatial Understanding?

    Science.gov (United States)

    Garg, Amit; Norman, Geoff; Spero, Lawrence; Taylor, Ian

    1999-01-01

    Assesses desktop-computer models that rotate in virtual three-dimensional space. Compares spatial learning with a computer carpal-bone model horizontally rotating at 10-degree views with the same model rotating at 90-degree views. (Author/CCM)

  4. A simulation model of a star computer network

    CERN Document Server

    Gomaa, H

    1979-01-01

    A simulation model of the CERN (European Organization for Nuclear Research) SPS star computer network is described. The model concentrates on simulating the message handling computer, through which all messages in the network pass. The implementation of the model and its calibration are also described. (6 refs).

  5. Graph Partitioning Models for Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Kolda, T.G.

    1999-03-02

    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
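
    The "wrong metric" referred to here is typically the edge cut, which only approximates the communication a partition actually induces. The toy sketch below computes both quantities on an invented graph and partition to show that they need not agree; nothing in it is taken from the surveyed paper.

```python
# Contrast the standard edge-cut metric with the communication volume it
# approximates. Graph and two-way partition are invented for illustration.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # vertex -> processor

edge_cut = sum(1 for u, v in edges if part[u] != part[v])

# Communication volume: each vertex is shipped once to every *other*
# part owning one of its neighbours, which edge cut mis-counts.
neighbours = {v: set() for v in part}
for u, v in edges:
    neighbours[u].add(v)
    neighbours[v].add(u)
volume = sum(len({part[w] for w in nbrs} - {part[v]})
             for v, nbrs in neighbours.items())

print(f"edge cut = {edge_cut}, communication volume = {volume}")
```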

  6. COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION

    Directory of Open Access Journals (Sweden)

    Zdenko Lončarić

    2009-12-01

seedlings with highest mass and leaf area are produced using growing media with pH close to 6 and with EC lower than 2 dS m-1. It could be concluded that conductivity of approx. 3 dS m-1 has an inhibitory effect on lettuce if pH is about 7 or higher. The computer model shows that raising pH and EC resulted in decreasing growth, which could be expressed as an increasing stress index. The lettuce height as a function of pH and EC is incorporated into the model as a stress function, showing an increase in lettuce height on lowering EC from 4 to 1 dS m-1 or pH from 7.4 to 6. The highest growing media index (8.1) was determined for a mixture of composted pig manure and peat (1:1), and the lowest (2.3) for composted horse manure and peat (1:2).

  7. The complete guide to blender graphics computer modeling and animation

    CERN Document Server

    Blain, John M

    2014-01-01

Smoothly Leads Users into the Subject of Computer Graphics through the Blender GUI. Blender, the free and open source 3D computer modeling and animation program, allows users to create and animate models and figures in scenes, compile feature movies, and interact with the models and create video games. Reflecting the latest version of Blender, The Complete Guide to Blender Graphics: Computer Modeling & Animation, 2nd Edition helps beginners learn the basics of computer animation using this versatile graphics program. This edition incorporates many new features of Blender, including developments

  8. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    Science.gov (United States)

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  9. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    Directory of Open Access Journals (Sweden)

    Karl T Bates

Full Text Available Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize

  10. Programming in Biomolecular Computation

    DEFF Research Database (Denmark)

    Hartmann, Lars; Jones, Neil; Simonsen, Jakob Grue

    2010-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable......, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only...... in a strong sense: a universal algorithm exists, that is able to execute any program, and is not asymptotically inefficient. A prototype model has been implemented (for now in silico on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level....

  11. Editorial: Modelling and computational challenges in granular materials

    NARCIS (Netherlands)

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss

  12. Reduced computational models of serotonin synthesis, release, and reuptake.

    Science.gov (United States)

    Flower, Gordon; Wong-Lin, KongFatt

    2014-04-01

Multiscale computational models can provide systemic evaluation and prediction of neuropharmacological drug effects. To date, little computational modeling work has been done to bridge from the intracellular to the neuronal circuit level. A complex model that describes the intracellular dynamics of the presynaptic terminal of a serotonergic neuron has been previously proposed. By systematically perturbing the model's components, we identify the slow and fast dynamical components of the model; the reduced slow-mode and fast-mode models are significantly more computationally efficient, with accuracy deviating little from the original model. The reduced fast-mode model is particularly suitable for incorporating into neurobiologically realistic spiking neuronal models, and hence for large-scale realistic computational simulations. We also develop user-friendly software based on the reduced models to allow scientists to rapidly test and predict neuropharmacological drug effects at a systems level.
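
    The slow/fast separation exploited here can be illustrated with a generic two-variable toy system: when one variable relaxes much faster than the other, it can be replaced by its quasi-steady-state value, shrinking the system without much loss of accuracy. The equations below are a textbook example, not the serotonin terminal model itself.

```python
# Quasi-steady-state reduction of a toy slow/fast system: y chases x on a
# fast time scale (1/eps), so the reduced model substitutes y = x.
eps, dt, steps = 0.01, 0.001, 10000     # y is the fast variable

x_f, y_f = 1.0, 0.0                     # full two-variable system
x_r = 1.0                               # reduced one-variable system
for _ in range(steps):
    x_f, y_f = x_f + dt * (-x_f * y_f), y_f + dt * (x_f - y_f) / eps
    x_r = x_r + dt * (-x_r * x_r)       # dy/dt = 0  =>  y = x

print(f"full: x = {x_f:.4f}   reduced: x = {x_r:.4f}")
```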

  13. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

Full Text Available Affective computing is of great significance for intelligent information processing and for harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospital is realized; experimental results show that it handles simple emotions efficiently.

  14. Performance Models for Split-execution Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

Humble, Travis S [ORNL]; McCaskey, Alex [ORNL]; Schrock, Jonathan [ORNL]; Seddiqi, Hadayat [ORNL]; Britt, Keith A [ORNL]; Imam, Neena [ORNL]

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
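
    A behavioural timing model of this kind can be sketched as a sum of per-stage costs. In the toy model below, the cost terms and constants are illustrative placeholders, not measured values from the study; it merely shows how a translation (embedding) term that scales with problem size can dominate the quantum stages.

```python
# Hedged sketch of a stage-cost model for a CPU+QPU pipeline. All
# constants are invented for illustration.
def split_execution_time(n_vars, n_samples):
    return {
        "formulate": 1e-6 * n_vars ** 2,    # build the problem on the CPU
        "embed":     5e-4 * n_vars ** 2,    # translate to the QPU hardware graph
        "anneal":    20e-6 * n_samples,     # quantum anneal time per sample
        "readout":   120e-6 * n_samples,    # I/O back to the CPU
    }

costs = split_execution_time(n_vars=500, n_samples=1000)
total = sum(costs.values())
for stage, t in costs.items():
    print(f"{stage:>9}: {t:9.4f} s  ({100 * t / total:4.1f}%)")
```

    With these (invented) constants, the classical-quantum translation step dwarfs the anneal and readout stages, mirroring the paper's conclusion that the bottleneck sits at the interface rather than in the quantum processor itself.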

  15. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility, allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  16. Computer modeling of a convective steam superheater

    Science.gov (United States)

    Trojan, Marcin

    2015-03-01

A superheater generates superheated steam from the saturated steam taken from the evaporator outlet. In the case of a pulverized-coal-fired boiler, even a relatively small amount of ash causes problems with ash fouling on the heating surfaces, including the superheaters. In the convection pass of the boiler, the flue gas temperature is lower and ash deposits can be loose or sintered. Ash fouling not only reduces heat transfer from the flue gas to the steam, but is also the cause of a higher pressure drop along the flue gas flow path. When the pressure drop is greater, the power consumed by the fan increases. If the superheater surfaces are covered with ash, then the steam temperature at the outlet of the superheater stages falls, and the flow rates of the water injected into the attemperator should be reduced. There is also an increase in flue gas temperature after the different stages of the superheater. Consequently, this leads to a reduction in boiler efficiency. The paper presents the results of computational fluid dynamics simulations of the first stage superheater of the boiler OP-210M using commercial software. The temperature distributions of the steam and the flue gas along their flow paths, together with the temperatures of the tube walls and of the ash deposits, are determined. The calculated steam temperature is compared with measurement results. Knowledge of these temperatures is of great practical importance because it allows the grade of steel to be chosen for a given superheater stage. Using the developed model of the superheater to determine its degree of ash fouling in on-line mode, one can control the activation frequency of the steam sootblowers.

  17. Computer modeling of a convective steam superheater

    Directory of Open Access Journals (Sweden)

    Trojan Marcin

    2015-03-01

Full Text Available A superheater generates superheated steam from the saturated steam taken from the evaporator outlet. In the case of a pulverized-coal-fired boiler, even a relatively small amount of ash causes problems with ash fouling on the heating surfaces, including the superheaters. In the convection pass of the boiler, the flue gas temperature is lower and ash deposits can be loose or sintered. Ash fouling not only reduces heat transfer from the flue gas to the steam, but is also the cause of a higher pressure drop along the flue gas flow path. When the pressure drop is greater, the power consumed by the fan increases. If the superheater surfaces are covered with ash, then the steam temperature at the outlet of the superheater stages falls, and the flow rates of the water injected into the attemperator should be reduced. There is also an increase in flue gas temperature after the different stages of the superheater. Consequently, this leads to a reduction in boiler efficiency. The paper presents the results of computational fluid dynamics simulations of the first stage superheater of the boiler OP-210M using commercial software. The temperature distributions of the steam and the flue gas along their flow paths, together with the temperatures of the tube walls and of the ash deposits, are determined. The calculated steam temperature is compared with measurement results. Knowledge of these temperatures is of great practical importance because it allows the grade of steel to be chosen for a given superheater stage. Using the developed model of the superheater to determine its degree of ash fouling in on-line mode, one can control the activation frequency of the steam sootblowers.

  18. Ablative Rocket Deflector Testing and Computational Modeling

    Science.gov (United States)

    Allgood, Daniel C.; Lott, Jeffrey W.; Raines, Nickey

    2010-01-01

A deflector risk mitigation program was recently conducted at the NASA Stennis Space Center. The primary objective was to develop a database that characterizes the behavior of industry-grade refractory materials subjected to rocket plume impingement conditions commonly experienced on static test stands. The program consisted of short and long duration engine tests where the supersonic exhaust flow from the engine impinged on an ablative panel. Quasi time-dependent erosion depths and patterns generated by the plume impingement were recorded for a variety of different ablative materials. The erosion behavior was found to be highly dependent on the material's composition and corresponding thermal properties. For example, in the case of the HP CAST 93Z ablative material, the erosion rate actually decreased under continued thermal heating conditions due to the formation of a low thermal conductivity "crystallization" layer. The "crystallization" layer produced near the surface of the material provided an effective insulation from the hot rocket exhaust plume. To gain further insight into the complex interaction of the plume with the ablative deflector, computational fluid dynamic modeling was performed in parallel to the ablative panel testing. The results from the current study demonstrated that locally high heating occurred due to shock reflections. These localized regions of shock-induced heat flux resulted in non-uniform erosion of the ablative panels. In turn, it was observed that the non-uniform erosion exacerbated the localized shock heating causing eventual plume separation and reversed flow for long duration tests under certain conditions. Overall, the flow simulations compared very well with the available experimental data obtained during this project.

  19. Soft Computing Models in Industrial and Environmental Applications

    CERN Document Server

    Abraham, Ajith; Corchado, Emilio; 7th International Conference, SOCO’12

    2013-01-01

This volume of Advances in Intelligent and Soft Computing contains accepted papers presented at SOCO 2012, held in the beautiful and historic city of Ostrava (Czech Republic), in September 2012. Soft computing represents a collection or set of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate, and analyze very complex issues and phenomena. After a thorough peer-review process, the SOCO 2012 International Program Committee selected 75 papers, which are published in these conference proceedings and represent an acceptance rate of 38%. In this edition a special emphasis was put on the organization of special sessions. Three special sessions were organized on relevant topics such as: Soft Computing Models for Control Theory & Applications in Electrical Engineering, Soft Computing Models for Biomedical Signals and Data Processing, and Advanced Soft Computing Methods in Computer Vision and Data Processing. The selecti...

  20. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

Growing data sets and increased time for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  1. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  2. The emerging role of cloud computing in molecular modelling.

    Science.gov (United States)

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways.

  3. Sticker DNA computer model--Part Ⅰ:Theory

    Institute of Scientific and Technical Information of China (English)

    XU Jin; DONG Yafei; WEI Xiaopeng

    2004-01-01

The sticker model is one of the basic models of DNA computing. This model is coded with single-double stranded DNA molecules. It has the advantages that its operations require no strand extension and use no enzymes; moreover, the materials are reusable. It has therefore attracted the attention and interest of scientists in many fields. In this paper, we systematically analyze the theories and applications of the model, summarize other scientists' contributions in this field, and propose our research results. This paper is the theoretical portion of the sticker model on DNA computers, which includes an introduction to the basic model of sticker computing. Firstly, we systematically introduce the basic theories of classic models of sticker computing; secondly, we discuss the sticker system, an abstract computing model based on the sticker model and formal languages; finally, we extend and perfect the model, and present two types of models that are more extensive in their applications and more complete in their theory than past models: one is the so-called k-bit sticker model, the other the full-message sticker DNA computing model.
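
    The flavour of sticker computing can be conveyed in software: a memory complex is a bit vector (a sticker annealed at position k sets bit k), and a computation is a sequence of tube-level separate/set/clear operations over a combinatorial library. The little program below follows the generic sticker model; the 3-bit library and operation sequence are illustrative only.

```python
from itertools import product

# In-silico sketch of sticker-model operations on a tube of memory
# complexes, each a tuple of bits (1 = sticker annealed at that position).
def separate(tube, k):
    """Split a tube into complexes with bit k on and bit k off."""
    on = {s for s in tube if s[k] == 1}
    return on, tube - on

def set_bit(tube, k):
    return {s[:k] + (1,) + s[k + 1:] for s in tube}

def clear_bit(tube, k):
    return {s[:k] + (0,) + s[k + 1:] for s in tube}

tube = set(product((0, 1), repeat=3))   # combinatorial initial library

on, off = separate(tube, 0)             # branch on bit 0
tube = set_bit(on, 2) | off             # set bit 2 only where bit 0 is on
print(sorted(tube))
```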

  4. Behavior computing modeling, analysis, mining and decision

    CERN Document Server

    2012-01-01

    Includes six case studies on behavior applications Presents new techniques for capturing behavior characteristics in social media First dedicated source of references for the theory and applications of behavior informatics and behavior computing

  5. Los Alamos CCS (Center for Computer Security) formal computer security model

    Energy Technology Data Exchange (ETDEWEB)

    Dreicer, J.S.; Hunteman, W.J. (Los Alamos National Lab., NM (USA))

    1989-01-01

This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The initial motivation for this effort was the need to provide a method by which DOE computer security policy implementation could be tested and verified. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experience. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present models. Formal mathematical models for computer security have been designed and developed in conjunction with attempts to build secure computer systems since the early 1970s. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The mathematical basis appears to be justified and is undergoing continued scrutiny and evolution. We expect to apply the model to the discipline of the Bell-LaPadula abstract sets of objects and subjects. 5 refs.

  6. Analog models of computations \\& Effective Church Turing Thesis: Efficient simulation of Turing machines by the General Purpose Analog Computer

    CERN Document Server

    Pouly, Amaury; Graça, Daniel S

    2012-01-01

Are analog models of computation more powerful than classical models of computation? From a series of recent papers, it is now clear that many realistic analog models of computation are provably equivalent to classical digital models of computation from a computability point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model from Claude Shannon, a model for differential analyzers, which were analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that the functions computable by Turing machines are provably exactly those that are computable by the GPAC. This paper is about the next step: understanding whether this equivalence also holds at the complexity level. In this paper we show that realistic models of analog computation -- namely the General Purpose Analog Computer (GPAC) -- can simulate Turing machines in a computationally efficient manner. More concretely, we show that, modulo...
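
    GPAC-computable functions are exactly the solutions of polynomial initial-value problems, so a classic example is y' = y, y(0) = 1, which generates exp(t) from integrators and multipliers alone. The sketch below integrates that ODE numerically purely as an illustration of what "a GPAC computes exp" means; it is not the simulation construction of the paper.

```python
import math

# Numerical stand-in for the GPAC circuit that generates exp(t) via the
# polynomial ODE y' = y, y(0) = 1 (forward Euler, for illustration only).
def gpac_exp(t, steps=100_000):
    dt, y = t / steps, 1.0
    for _ in range(steps):
        y += dt * y                 # the integrator's update rule
    return y

print(gpac_exp(1.0), math.exp(1.0))  # ~2.71815 vs 2.71828
```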

  7. Dissemination of computer skills among physicians: the infectious process model.

    Science.gov (United States)

    Quinn, F B; Hokanson, J A; McCracken, M M; Stiernberg, C M

    1984-08-01

While the potential utility of computer technology to medicine is often acknowledged, little is known as to the best methods to actually teach physicians about computers. The current variability in physician computer fluency implies there is no accepted minimum required level of computer skills for physicians. Special techniques are needed to instill these skills in the physician and measure their effects within the medical profession. This hypothesis is suggested following the development of a specialized course for the new physician. In a population of physicians where medical computing usage was considered nonexistent, intense interest developed following exposure to a role model having strong credentials in both medicine and computer science. This produced an atmosphere where there was a perceived benefit in being knowledgeable about medical computer usage. The subsequent increase in computer systems use was the result of the availability of resources and the development of computer skills that could be exchanged among the students and faculty. This growth in computer use is described using the parameters of an infectious process model. While other approaches may also be useful, the infectious process model permits the growth of medical computer usage to be quantitatively described, evaluates specific determinants of use patterns, and allows the future growth of computer utilization in medicine to be predicted.

  8. Integrated Multiscale Modeling of Molecular Computing Devices

    Energy Technology Data Exchange (ETDEWEB)

    Jerzy Bernholc

    2011-02-03

    photolithography will some day reach a miniaturization limit, forcing designers of Si-based electronics to pursue increased performance by other means. Any other alternative approach would have the unenviable task of matching the ability of Si technology to pack more than a billion interconnected and addressable devices on a chip the size of a thumbnail. Nevertheless, the prospects of developing alternative approaches to fabricate electronic devices have spurred an ever-increasing pace of fundamental research. One of the promising possibilities is molecular electronics (ME), self-assembled molecular-based electronic systems composed of single-molecule devices in ultra dense, ultra fast molecular-sized components. This project focused on developing accurate, reliable theoretical modeling capabilities for describing molecular electronics devices. The participants in the project are given in Table 1. The primary outcomes of this fundamental computational science grant are publications in the open scientific literature. As listed below, 62 papers have been published from this project. In addition, the research has also been the subject of more than 100 invited talks at conferences, including several plenary or keynote lectures. Many of the goals of the original proposal were completed. Specifically, the multi-disciplinary group developed a unique set of capabilities and tools for investigating electron transport in fabricated and self-assembled nanostructures at multiple length and time scales.

  9. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    Science.gov (United States)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and some neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is to be applied on fine grids, its discretization leads to a huge computational problem. This implies that a model such as DEM must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results are presented from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.). The main idea in the parallel version of DEM is a domain-partitioning approach. The effective use of the caches and hierarchical memories of modern computers, as well as the performance, speed-ups and efficiency achieved, are discussed. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
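
    The domain-partitioning idea can be sketched independently of MPI: each rank owns a contiguous block of grid rows plus halo (ghost) rows exchanged with its neighbours for the stencil computation. The block-size arithmetic below is a generic sketch under that assumption, not the actual DEM decomposition code.

```python
# Generic row-wise domain decomposition with one halo row per side, as
# used conceptually in stencil-based transport codes. Sizes illustrative.
def partition_rows(n_rows, n_ranks, rank):
    base, extra = divmod(n_rows, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    halo_lo = max(start - 1, 0)          # ghost row from the rank below
    halo_hi = min(stop + 1, n_rows)      # ghost row from the rank above
    return (start, stop), (halo_lo, halo_hi)

for rank in range(4):
    owned, with_halo = partition_rows(n_rows=96, n_ranks=4, rank=rank)
    print(f"rank {rank}: owns rows {owned}, computes with halo {with_halo}")
```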

  10. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

Modeling Workflow Management in a Distributed Computing System Using Petri Nets. ... who use it to share information more rapidly and increase their productivity. ... Petri nets are an established tool for modelling and analyzing processes.

  11. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  12. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
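
    In the spirit of the model-centered learning objects described here, a multiphase (tandem) queue can be simulated in a few lines of Python. The sketch below is an illustrative two-phase example with invented rates, not code from the paper.

```python
import random

# Event-based simulation of a two-phase tandem queue: each customer is
# served FIFO by a single server per phase. Rates are illustrative.
random.seed(1)
LAM, MU1, MU2, N = 0.8, 1.0, 1.2, 10_000   # arrival/service rates, customers

arrive = 0.0
free1 = free2 = 0.0                 # when each server next becomes free
total_sojourn = 0.0
for _ in range(N):
    arrive += random.expovariate(LAM)
    start1 = max(arrive, free1)     # phase 1: wait if the server is busy
    free1 = start1 + random.expovariate(MU1)
    start2 = max(free1, free2)      # phase 2 begins after phase 1 ends
    free2 = start2 + random.expovariate(MU2)
    total_sojourn += free2 - arrive

print(f"mean time in system: {total_sojourn / N:.2f}")
```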

  13. World Knowledge in Computational Models of Discourse Comprehension

    Science.gov (United States)

    Frank, Stefan L.; Koppen, Mathieu; Noordman, Leo G. M.; Vonk, Wietske

    2008-01-01

    Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, "ad hoc" selection, extraction from…

  14. Flow Through a Laboratory Sediment Sample by Computer Simulation Modeling

    Science.gov (United States)

    2006-09-07

Flow through a laboratory sediment sample by computer simulation modeling. R.B. Pandey, Allen H. Reed, Edward Braithwaite, Ray Seyfarth, J.F. ...

  15. Families of Plausible Solutions to the Puzzle of Boyajian's Star

    CERN Document Server

    Wright, Jason T

    2016-01-01

    Good explanations for the unusual light curve of Boyajian's Star have been hard to find. Recent results by Montet & Simon lend strength and plausibility to the conclusion of Schaefer that in addition to short-term dimmings, the star also experiences large, secular decreases in brightness on decadal timescales. This, combined with a lack of long-wavelength excess in the star's spectral energy distribution, strongly constrains scenarios involving circumstellar material, including hypotheses invoking a spherical cloud of artifacts. We show that the timings of the deepest dimmings appear consistent with being randomly distributed, and that the star's reddening and narrow sodium absorption is consistent with the total, long-term dimming observed. Following Montet & Simon's encouragement to generate alternative hypotheses, we attempt to circumscribe the space of possible explanations with a range of plausibilities, including: a cloud in the outer solar system, structure in the ISM, natural and artificial ma...

  16. Many-Task Computing Tools for Multiscale Modeling

    OpenAIRE

    Katz, Daniel S.; Ripeanu, Matei; Wilde, Michael

    2011-01-01

    This paper discusses the use of many-task computing tools for multiscale modeling. It defines multiscale modeling and places different examples of it on a coupling spectrum, discusses the Swift parallel scripting language, describes three multiscale modeling applications that could use Swift, and then talks about how the Swift model is being extended to cover more of the multiscale modeling coupling spectrum.

  17. Pervasive Computing Location-aware Model Based on Ontology

    Institute of Scientific and Technical Information of China (English)

    PU Fang; CAI Hai-bin; CAO Qi-ying; SUN Dao-qing; LI Tong

    2008-01-01

In order to integrate heterogeneous location-aware systems into a pervasive computing environment, a novel pervasive computing location-aware model based on ontology is presented. A location-aware model ontology (LMO) is constructed. The location-aware model has the capabilities of sharing knowledge, reasoning, and adjusting the usage policies of services dynamically through a unified semantic location manner. Finally, the working process of the proposed location-aware model is illustrated by an application scenario.

  18. Computational intelligence applications in modeling and control

    CERN Document Server

    Vaidyanathan, Sundarapandian

    2015-01-01

The development of computational intelligence (CI) systems was inspired by observable and imitable aspects of the intelligent activity of human beings and nature. The essence of systems based on computational intelligence is to process and interpret data of various natures, so CI is closely connected with the growth of available data as well as the capabilities for processing them, two mutually supportive factors. Developed theories of computational intelligence were quickly applied in many fields of engineering, data analysis, forecasting, biomedicine and others. They are used in image and sound processing and identification, signal processing, multidimensional data visualization, steering of objects, analysis of lexicographic data, query systems in banking, diagnostic systems, expert systems and many other practical implementations. This book consists of 16 contributed chapters by subject experts who are specialized in the various topics addressed in this book. The special chapters have been brought ...

  19. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

Within the last five to ten years we have experienced an incredible growth of ubiquitous technologies which has allowed for improvements in several areas, including energy distribution and management, health care services, border surveillance, secure monitoring and management of buildings......, localisation services and many others. These technologies can be classified under the name of ubiquitous systems. The term Ubiquitous System dates back to 1991 when Mark Weiser at Xerox PARC Lab first referred to it in writing. He envisioned a future where computing technologies would have been melted...... in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory...

  20. A DNA based model for addition computation

    Institute of Scientific and Technical Information of China (English)

    GAO Lin; YANG Xiao; LIU Wenbin; XU Jin

    2004-01-01

Much effort has been made to solve computing problems by using DNA, an organic simulating method, which in some cases is preferable to the current electronic computer. However, no one at present has proposed an effective and applicable method to solve the addition problem with a molecular algorithm, due to the difficulty of solving the carry problem, which is easily solved by the hardware of an electronic computer. In this article, we solved this problem by employing two kinds of DNA strings: one is called the result and operation string, while the other is named the carrier. The result and operation string contains some carry information on its own and denotes the ultimate result, while the carrier is just for carrying use. The significance of this algorithm lies in its original coding, its fairly easy steps to follow, and its feasibility under current molecular biological technology.
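
    A software analogue makes the carry logic concrete: the "result and operation string" can be modelled as a list of bits that accumulates carry information bit by bit, with the carry playing the carrier's role. The encoding below is purely illustrative, not the molecular one from the paper.

```python
# Ripple-carry addition over least-significant-bit-first bit lists, as a
# software stand-in for the paper's two-string scheme.
def dna_style_add(a_bits, b_bits):
    result, carry = [], 0               # carry plays the carrier's role
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        result.append(total % 2)
        carry = total // 2
    result.append(carry)
    return result

print(dna_style_add([1, 1, 0, 1], [1, 0, 1, 1]))  # 11 + 13 -> [0,0,0,1,1] = 24
```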

  1. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2010-01-01

    executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways and without arcane encodings of data and algorithm); it is also uniform: new “hardware” is not needed to solve new problems; and (last but not least) it is Turing complete......Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We introduce a model of computation that is evidently programmable......, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only...

  2. A computational model of the human hand 93-ERI-053

    Energy Technology Data Exchange (ETDEWEB)

    Hollerbach, K.; Axelrod, T.

    1996-03-01

The objectives of the Computational Hand Modeling project were to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopedic modeling.

  3. The one-way quantum computer - a non-network model of quantum computation

    CERN Document Server

    Raussendorf, R; Briegel, H J; Raussendorf, Robert; Browne, Daniel E.; Briegel, Hans J.

    2001-01-01

    A one-way quantum computer works by only performing a sequence of one-qubit measurements on a particular entangled multi-qubit state, the cluster state. No non-local operations are required in the process of computation. Any quantum logic network can be simulated on the one-way quantum computer. On the other hand, the network model of quantum computation cannot explain all ways of processing quantum information possible with the one-way quantum computer. In this paper, two examples of the non-network character of the one-way quantum computer are given. First, circuits in the Clifford group can be performed in a single time step. Second, the realisation of a particular circuit --the bit-reversal gate-- on the one-way quantum computer has no network interpretation. (Submitted to J. Mod. Opt, Gdansk ESF QIT conference issue.)

  4. Spelling in oral deaf and hearing dyslexic children: A comparison of phonologically plausible errors.

    Science.gov (United States)

    Roy, P; Shergold, Z; Kyle, F E; Herman, R

    2014-11-01

A written single-word spelling-to-dictation test and a single-word reading test were given to 68 severely-profoundly oral deaf 10-11-year-old children and 20 hearing children with a diagnosis of dyslexia. The literacy scores of the deaf children and the hearing children with dyslexia were lower than expected for children of their age and did not differ from each other. Three quarters of the spelling errors of the hearing children with dyslexia, compared with just over half the errors of the oral deaf group, were phonologically plausible. Expressive vocabulary and speech intelligibility predicted the percentage of phonologically plausible errors in the deaf group only. Implications of the findings for the phonological decoding self-teaching model and for supporting literacy development are discussed.

  5. Of paradox and plausibility: the dynamic of change in medical law.

    Science.gov (United States)

    Harrington, John

    2014-01-01

    This article develops a model of change in medical law. Drawing on systems theory, it argues that medical law participates in a dynamic of 'deparadoxification' and 'reparadoxification' whereby the underlying contingency of the law is variously concealed through plausible argumentation, or revealed by critical challenge. Medical law is, thus, thoroughly rhetorical. An examination of the development of the law on abortion and on the sterilization of incompetent adults shows that plausibility is achieved through the deployment of substantive common sense and formal stylistic devices. It is undermined where these elements are shown to be arbitrary and constructed. In conclusion, it is argued that the politics of medical law are constituted by this antagonistic process of establishing and challenging provisionally stable normative regimes.

  6. A Swarm Intelligence Based Model for Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ahmed S. Salama

    2015-01-01

Full Text Available Mobile Computing (MC) provides multiple services and many advantages for millions of users across the world over the internet. Millions of business customers have leveraged cloud computing services through mobile devices, giving rise to what is called Mobile Cloud Computing (MCC). MCC aims at using cloud computing techniques for the storage and processing of data on mobile devices, thereby reducing their limitations. This paper proposes an architecture for a Swarm Intelligence Based Mobile Cloud Computing Model (SIBMCCM), a model that uses a proposed Parallel Particle Swarm Optimization (PPSO) algorithm to enhance the access time for mobile cloud computing services, which support different e-commerce models, and to better secure communication through the mobile cloud and the mobile commerce transactions.
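
    For readers unfamiliar with the underlying machinery, the sketch below is a minimal, serial particle swarm optimizer minimizing a toy "access time" objective. It illustrates the standard PSO update rule only; the parallelization and mobile-cloud specifics of the authors' PPSO are omitted, and the objective and constants are invented.

```python
import random

# Minimal serial PSO: particles track personal and global best positions
# and update velocities from both. Objective and constants illustrative.
random.seed(0)
def objective(x):                   # toy "access time" to minimize
    return (x - 3.0) ** 2 + 1.0

W, C1, C2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients
pos = [random.uniform(-10, 10) for _ in range(20)]
vel = [0.0] * 20
best_p = pos[:]                     # per-particle best positions
best_g = min(pos, key=objective)    # global best position

for _ in range(100):
    for i in range(20):
        r1, r2 = random.random(), random.random()
        vel[i] = (W * vel[i] + C1 * r1 * (best_p[i] - pos[i])
                  + C2 * r2 * (best_g - pos[i]))
        pos[i] += vel[i]
        if objective(pos[i]) < objective(best_p[i]):
            best_p[i] = pos[i]
    best_g = min(best_p, key=objective)

print(f"best x: {best_g:.3f}, value: {objective(best_g):.3f}")
```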

  7. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, such as automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute their expertise in both the fundamentals and real-world applications, based upon their strong backgrounds in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  8. Families of Plausible Solutions to the Puzzle of Boyajian’s Star

    Science.gov (United States)

Wright, Jason T.; Sigurðsson, Steinn

    2016-09-01

    Good explanations for the unusual light curve of Boyajian's Star have been hard to find. Recent results by Montet & Simon lend strength and plausibility to the conclusion of Schaefer that in addition to short-term dimmings, the star also experiences large, secular decreases in brightness on decadal timescales. This, combined with a lack of long-wavelength excess in the star's spectral energy distribution, strongly constrains scenarios involving circumstellar material, including hypotheses invoking a spherical cloud of artifacts. We show that the timings of the deepest dimmings appear consistent with being randomly distributed, and that the star's reddening and narrow sodium absorption is consistent with the total, long-term dimming observed. Following Montet & Simon's encouragement to generate alternative hypotheses, we attempt to circumscribe the space of possible explanations with a range of plausibilities, including: a cloud in the outer solar system, structure in the interstellar medium (ISM), natural and artificial material orbiting Boyajian's Star, an intervening object with a large disk, and variations in Boyajian's Star itself. We find the ISM and intervening disk models more plausible than the other natural models.

  9. CHOREO: An Interactive Computer Model for Dance.

    Science.gov (United States)

    Savage, G. J.; Officer, J. M.

    1978-01-01

Establishes the need for literacy in dance and describes two dance notation systems: the Massine notation method and the Labanotation method. The use of interactive computer graphics as a tool for both learning and interpreting dance notation is introduced. (Author/VT)

  10. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics) codes, UG, CATIA, and so on.

  11. An integrated introduction to computer graphics and geometric modeling

    CERN Document Server

    Goldman, Ronald

    2009-01-01

    … this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to 'freedom' curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. …-Computer-Aided Design, 42, 2010… Many books concentrate on computer programming and soon beco

  12. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  13. Computer-aided design–computer-aided engineering associative feature-based heterogeneous object modeling

    Directory of Open Access Journals (Sweden)

    Jikai Liu

    2015-12-01

Full Text Available Conventionally, heterogeneous object modeling methods have paid limited attention to the concurrent modeling of geometry design and material composition distribution. A procedural method was normally employed to generate the geometry first and then determine the heterogeneous material distribution, which ignores their mutual influence. Additionally, limited capability has been established for modeling irregular material composition distributions with strong local discontinuities. This article overcomes these limitations by developing the computer-aided design–computer-aided engineering associative feature-based heterogeneous object modeling method. Level set functions are applied to model the geometry within the computer-aided design module, which enables complex geometry modeling. A finite element mesh is applied to store the local material compositions within the computer-aided engineering module, which allows any local discontinuities. Then, the associative feature concept builds the correspondence relationship between these modules. Additionally, a level set geometry and material optimization method is developed to concurrently generate the geometry and material information which fills the contents of the computer-aided design–computer-aided engineering associative feature model. Micro-geometry is investigated as well, instead of only the local material composition. A few cases are studied to prove the effectiveness of this new heterogeneous object modeling method.
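
    The level-set side of such a representation is easy to sketch: a scalar function phi implicitly defines the shape (inside where phi < 0), while a separate per-cell field stores the local material composition. The circle geometry and radial grading rule below are invented for illustration, not taken from the article.

```python
# Implicit geometry via a level set plus a per-cell material field.
# Shape (a circle) and the radial material grading are illustrative.
N, R = 21, 0.6
def phi(x, y):                      # signed distance to a circle of radius R
    return (x * x + y * y) ** 0.5 - R

grid = [[(2 * i / (N - 1) - 1, 2 * j / (N - 1) - 1) for j in range(N)]
        for i in range(N)]
# Fraction of material A, graded radially inside the boundary; None outside.
material = [[max(0.0, 1.0 - (x * x + y * y) ** 0.5 / R) if phi(x, y) < 0
             else None for (x, y) in row] for row in grid]

inside = sum(m is not None for row in material for m in row)
print(f"{inside} of {N * N} cells lie inside the level-set boundary")
```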

  14. Computer modeling of ORNL storage tank sludge mobilization and mixing

    Energy Technology Data Exchange (ETDEWEB)

    Terrones, G.; Eyler, L.L.

    1993-09-01

This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate the mixing times required to approach homogeneity of the contents of the tanks.

  15. Macro—Dataflow Computational Model and Its Simulation

    Institute of Scientific and Technical Information of China (English)

    孙昱东; 谢志良

    1990-01-01

This paper discusses the relationship between parallelism granularity and system overhead of dataflow computer systems, and indicates that a trade-off between them should be determined to obtain optimal efficiency of the overall system. On the basis of this discussion, a macro-dataflow computational model is established to exploit task-level parallelism. Working as a macro-dataflow computer, an Experimental Distributed Dataflow Simulation System (EDDSS) is developed to examine the effectiveness of the macro-dataflow computational model.

  16. Computational modeling in melanoma for novel drug discovery.

    Science.gov (United States)

    Pennisi, Marzio; Russo, Giulia; Di Salvatore, Valentina; Candido, Saverio; Libra, Massimo; Pappalardo, Francesco

    2016-06-01

    There is a growing body of evidence highlighting the applications of computational modeling in the field of biomedicine. It has recently been applied to the in silico analysis of cancer dynamics. In the era of precision medicine, this analysis may allow the discovery of new molecular targets useful for the design of novel therapies and for overcoming resistance to anticancer drugs. According to its molecular behavior, melanoma represents an interesting tumor model in which computational modeling can be applied. Melanoma is an aggressive tumor of the skin with a poor prognosis for patients with advanced disease as it is resistant to current therapeutic approaches. This review discusses the basics of computational modeling in melanoma drug discovery and development. Discussion includes the in silico discovery of novel molecular drug targets, the optimization of immunotherapies and personalized medicine trials. Mathematical and computational models are gradually being used to help understand biomedical data produced by high-throughput analysis. The use of advanced computer models allowing the simulation of complex biological processes provides hypotheses and supports experimental design. The research in fighting aggressive cancers, such as melanoma, is making great strides. Computational models represent the key component to complement these efforts. Due to the combinatorial complexity of new drug discovery, a systematic approach based only on experimentation is not possible. Computational and mathematical models are necessary for bringing cancer drug discovery into the era of omics, big data and personalized medicine.

  17. Integrating Numerical Computation into the Modeling Instruction Curriculum

    CERN Document Server

    Caballero, Marcos D; Aiken, John M; Douglas, Scott S; Scanlon, Erin M; Thoms, Brian; Schatz, Michael F

    2012-01-01

    We describe a way to introduce high school physics students with no background in programming to computational problem-solving experiences. Our approach builds on the great strides made by the Modeling Instruction reform curriculum. This approach emphasizes the practices of "Developing and using models" and "Computational thinking" highlighted by the NRC K-12 science standards framework. We taught 9th-grade students in a Modeling-Instruction-based physics course to construct computational models using the VPython programming environment. Numerical computation within the Modeling Instruction curriculum provides coherence among the curriculum's different force and motion models, links the various representations which the curriculum employs, and extends the curriculum to include real-world problems that are inaccessible to a purely analytic approach.

  18. Models of parallel computation: a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models that were developed during the past decades. According to their target architecture features, especially memory organization, we classify these parallel computational models into three generations, and discuss the models and their characteristics on the basis of this three-generation classification. We believe that with the ever increasing speed gap between the CPU and memory systems, incorporating non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated, so describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers would reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  19. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    Science.gov (United States)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

    Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, with particular emphasis placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by the findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful in exposing spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for
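
    As a rough sketch of the validation workflow described above, the following compares single-holdout AUROC with cross-validated AUROC using scikit-learn on invented data; the study's spatial cross-validation would replace the random folds below with spatially contiguous partitions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_score, train_test_split

        # Synthetic stand-ins for terrain predictors and landslide presence labels.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))
        y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0

        model = LogisticRegression(max_iter=1000)

        # Single holdout validation.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        auc_holdout = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

        # k-fold cross-validation; spatial CV would use spatially blocked folds instead.
        auc_cv = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"holdout AUROC: {auc_holdout:.3f}, 5-fold AUROC: {auc_cv:.3f}")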

  20. Computer Modeling for Optical Waveguide Sensors.

    Science.gov (United States)

    1987-12-15

    [Fragmentary OCR of the report documentation page.] Subject terms: optical waveguide sensors; computer modeling; total internal reflection. The resultant probe beam transmission may be plotted as a function of changes in the refractive index of the surrounding fluid medium. … all angles of incidence about the critical angle θcr. N in equation (3) of the report is a function of the angle of incidence θ.

  1. Operation of the computer model for microenvironment atomic oxygen exposure

    Science.gov (United States)

    Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.

  2. Introduction to computation and modeling for differential equations

    CERN Document Server

    Edsberg, Lennart

    2008-01-01

    An introduction to scientific computing for differential equations. Introduction to Computation and Modeling for Differential Equations provides a unified and integrated view of numerical analysis, mathematical modeling in applications, and programming to solve differential equations, which is essential in problem-solving across many disciplines, such as engineering, physics, and economics. This book successfully introduces readers to the subject through a unique "Five-M" approach: Modeling, Mathematics, Methods, MATLAB, and Multiphysics. This approach facilitates a thorough understanding of h

  3. Computational modeling of induced emotion using GEMS

    NARCIS (Netherlands)

    Aljanaki, Anna; Wiering, Frans; Veltkamp, Remco

    2014-01-01

    Most researchers in the automatic music emotion recognition field focus on the two-dimensional valence and arousal model. This model, though, does not account for the whole diversity of emotions expressible through music. Moreover, in many cases it might be important to model induced (felt) emotion, rather than perceived emotion.

  4. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-01-01

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  5. Markov Graph Model Computation and Its Application to Intrusion Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A Markov model is usually selected as the base model of user actions in an intrusion detection system (IDS). However, the performance of the IDS depends on the state space of the Markov model, and it degrades as the space dimension grows. Here, the Markov Graph Model (MGM) is proposed to handle this issue. The specification of the model is described, and several methods for probability computation with the MGM are also presented. Based on the MGM, algorithms for building the user model and predicting user actions are presented, and properties of these algorithms, such as computational complexity, prediction accuracy, and the storage requirement of the MGM, are analyzed.
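
    The MGM itself is not specified in the abstract; as a minimal baseline of the kind it refines, the sketch below estimates a first-order Markov model of user actions from a log and predicts the next action by the most probable transition (the log is invented).

        from collections import defaultdict

        def fit_transitions(actions):
            """Estimate P(next action | current action) from an action log."""
            counts = defaultdict(lambda: defaultdict(int))
            for a, b in zip(actions, actions[1:]):
                counts[a][b] += 1
            return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
                    for a, nxt in counts.items()}

        def predict_next(model, action):
            nxt = model.get(action)
            return max(nxt, key=nxt.get) if nxt else None

        log = ["login", "ls", "cd", "ls", "cd", "cp", "ls", "logout"]
        model = fit_transitions(log)
        print(predict_next(model, "ls"))   # -> "cd", the most likely next action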

  6. Computational technology of multiscale modeling the gas flows in microchannels

    Science.gov (United States)

    Podryga, V. O.

    2016-11-01

    The work is devoted to modeling gas mixture flows in engineering microchannels under conditions where the computational domain spans many scales. A computational technology using the multiscale approach combining macro- and microscopic models is presented. At the macro level, the nature of the flow and the external influence on it are considered; as the model, the system of quasi-gasdynamic equations is selected. At the micro level, the correction of gas-dynamic parameters and the determination of boundary conditions are made; as the numerical model, Newton's equations and the molecular dynamics method are selected. Different algorithm types used for the implementation of multiscale modeling are considered. The results of the model problems for separate stages are given.

  7. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    Directory of Open Access Journals (Sweden)

    Jiunn-Woei Lian PhD

    2017-01-01

    Full Text Available The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  8. Establishing a Cloud Computing Success Model for Hospitals in Taiwan.

    Science.gov (United States)

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  9. Mobile Cloud Computing: A Comparison of Application Models

    CERN Document Server

    Kovachev, Dejan; Klamma, Ralf

    2011-01-01

    Cloud computing is an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling service delivery from the underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and opens many new research questions. In order to better understand how to facilitate the building of mobile cloud-based applications, we have surveyed existing work in mobile computing through the prism of cloud computing principles. We give a definition of mobile cloud computing and provide an overview of the results from this review, in particular, models of mobile cloud applications. We also highlight research challenges in the area of mobile cloud computing. We conclude with recommendations for how this better understanding of mobile cloud computing can ...

  10. Numerical computations and mathematical modelling with infinite and infinitesimal numbers

    CERN Document Server

    Sergeyev, Yaroslav D

    2012-01-01

    Traditional computers work with finite numbers. Situations where the usage of infinite or infinitesimal quantities is required are studied mainly theoretically. In this paper, a recently introduced computational methodology (that is not related to non-standard analysis) is used to work with finite, infinite, and infinitesimal numbers numerically. This can be done on a new kind of computer - the Infinity Computer - able to work with all these types of numbers. The new computational tools both give possibilities to execute computations of a new type and open new horizons for creating new mathematical models where a computational usage of infinite and/or infinitesimal numbers can be useful. A number of numerical examples showing the potential of the new approach and dealing with divergent series, limits, probability theory, linear algebra, and calculation of volumes of objects consisting of parts of different dimensions are given.

  11. The Validation of Computer-based Models in Engineering: Some Lessons from Computing Science

    Directory of Open Access Journals (Sweden)

    D. J. Murray-Smith

    2001-01-01

    Full Text Available Questions of the quality of computer-based models and the formal processes of model testing, involving internal verification and external validation, are usually given only passing attention in engineering reports and in technical publications. However, such models frequently provide a basis for analysis methods, design calculations or real-time decision-making in complex engineering systems. This paper reviews techniques used for external validation of computer-based models and contrasts the somewhat casual approach which is usually adopted in this field with the more formal approaches to software testing and documentation recommended for large software projects. Both activities require intimate knowledge of the intended application, a systematic approach and considerable expertise and ingenuity in the design of tests. It is concluded that engineering degree courses dealing with modelling techniques and computer simulation should put more emphasis on model limitations, testing and validation.

  12. The Architectural Designs of a Nanoscale Computing Model

    Directory of Open Access Journals (Sweden)

    Mary M. Eshaghian-Wilner

    2004-08-01

    Full Text Available A generic nanoscale computing model is presented in this paper. The model consists of a collection of fully interconnected nanoscale computing modules, where each module is a cube of cells made out of quantum dots, spins, or molecules. The cells dynamically switch between two states by quantum interactions among their neighbors in all three dimensions. This paper includes a brief introduction to the field of nanotechnology from a computing point of view and presents a set of preliminary architectural designs for fabricating the nanoscale model studied.

  13. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters

    Directory of Open Access Journals (Sweden)

    Pietro Cipresso

    2017-08-01

    Full Text Available Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future.

  14. The Architectural Designs of a Nanoscale Computing Model

    Directory of Open Access Journals (Sweden)

    Mary M. Eshaghian-Wilner

    2004-08-01

    Full Text Available A generic nanoscale computing model is presented in this paper. The model consists of a collection of fully interconnected nanoscale computing modules, where each module is a cube of cells made out of quantum dots, spins, or molecules. The cells dynamically switch between two states by quantum interactions among their neighbors in all three dimensions. This paper includes a brief introduction to the field of nanotechnology from a computing point of view and presents a set of preliminary architectural designs for fabricating the nanoscale model studied.

  15. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and digital signal processing methods. For each stage of the hearing perception system, there is a corresponding computational model to simulate its function, and speech features are extracted at each stage at a different level of representation. A further processing step for the primary auditory spectrum, based on lateral inhibition, is proposed to extract much more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations of speech signals.
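
    A minimal sketch of the lateral inhibition step described above, assuming a simple center-surround kernel applied across the channels of a primary auditory spectrum (kernel shape and values are illustrative only).

        import numpy as np

        def lateral_inhibition(spectrum, kernel=(-0.5, -0.5, 2.0, -0.5, -0.5)):
            """Center-surround filtering across channels, half-wave rectified."""
            return np.maximum(np.convolve(spectrum, kernel, mode="same"), 0.0)

        spectrum = np.array([0.2, 0.2, 1.0, 1.0, 1.0, 0.2, 0.2])
        print(lateral_inhibition(spectrum))   # spectral edges are enhanced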

  16. Performance Predictable ServiceBSP Model for Grid Computing

    Institute of Scientific and Technical Information of China (English)

    TONG Weiqin; MIAO Weikai

    2007-01-01

    This paper proposes a performance prediction model for the grid computing model ServiceBSP to support developing high quality applications in a grid environment. In the ServiceBSP model, the agents carrying computing tasks are dispatched to the local domain of the selected computation services. Using an IP (integer programming) approach, the Service Selection Agent selects the computation services with globally optimized QoS (quality of service) consideration. The performance of a ServiceBSP application can then be predicted according to the performance prediction model based on the QoS of the selected services. The performance prediction model can help users analyze their applications and improve them by optimizing the factors which affect the performance. The experiment shows that the Service Selection Agent can provide ServiceBSP users with satisfactory application QoS.
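
    The abstract does not reproduce the prediction formula; BSP-family models conventionally charge each superstep its maximum local work plus g·h + l for communication and synchronization, which a ServiceBSP-style prediction would build on using the QoS of the selected services. A minimal sketch with invented numbers:

        # Classic BSP superstep cost: max local work + g*h + l, where h is the
        # largest number of words any process sends or receives in the superstep.
        def bsp_cost(supersteps, g, l):
            return sum(max(work) + g * h + l for work, h in supersteps)

        # Two supersteps on four processes; g and l are platform constants that a
        # ServiceBSP-style predictor would derive from the selected services' QoS.
        steps = [([100, 120, 90, 110], 20), ([80, 80, 85, 80], 5)]
        print(bsp_cost(steps, g=4.0, l=50.0))   # -> 405.0 work units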

  17. Transforming High School Physics with Modeling and Computation

    CERN Document Server

    Aiken, John M

    2013-01-01

    The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises were piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics were assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's potential for connecting scientific practice to the high school science classroom.
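
    A plain-Python stand-in for the kind of VPython exercise described (not the course's actual program): the ball's velocity and position are updated in small time steps from the net force.

        g = 9.8             # gravitational field strength, m/s^2
        m = 0.145           # mass of a baseball, kg
        pos = [0.0, 1.0]    # x, y position, m
        vel = [30.0, 10.0]  # initial velocity, m/s
        dt = 0.01           # time step, s

        while pos[1] > 0.0:
            Fy = -m * g                # the only force acting is gravity
            vel[1] += Fy / m * dt      # momentum principle: dv = (F/m) dt
            pos[0] += vel[0] * dt      # position update: dx = v dt
            pos[1] += vel[1] * dt

        print(f"horizontal range: {pos[0]:.1f} m")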

  18. Model identification in computational stochastic dynamics using experimental modal data

    Science.gov (United States)

    Batou, A.; Soize, C.; Audebert, S.

    2015-01-01

    This paper deals with the identification of a stochastic computational model using experimental eigenfrequencies and mode shapes. In the presence of randomness, it is difficult to construct a one-to-one correspondence between the results provided by the stochastic computational model and the experimental data because of the random mode crossing and veering phenomena that may occur from one realization to another. In this paper, this correspondence is constructed by introducing an adapted transformation for the computed modal quantities. The transformed computed modal quantities can then be compared with the experimental data in order to identify the parameters of the stochastic computational model. The methodology is applied to a booster pump of thermal units for which experimental modal data have been measured on several sites.

  19. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid

    2011-01-01

    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems. Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  20. Computational modeling in cognitive science: a manifesto for change.

    Science.gov (United States)

    Addyman, Caspar; French, Robert M

    2012-07-01

    Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces.  For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes it almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.

  1. Timing and expectation of reward: a neuro-computational model of the afferents to the ventral tegmental area

    Directory of Open Access Journals (Sweden)

    Julien eVitay

    2014-01-01

    Full Text Available Neural activity in dopaminergic areas such as the ventral tegmental area is influenced by timing processes, in particular by the temporal expectation of rewards during Pavlovian conditioning. Receipt of a reward at the expected time makes it possible to compute reward-prediction errors, which can drive learning in motor or cognitive structures. Reciprocally, dopamine plays an important role in the timing of external events. Several models of the dopaminergic system exist, but the substrate of temporal learning remains rather unclear. In this article, we propose a neuro-computational model of the afferent network to the ventral tegmental area, including the lateral hypothalamus, the pedunculopontine nucleus, the amygdala, the ventromedial prefrontal cortex, the ventral basal ganglia (including the nucleus accumbens and the ventral pallidum), as well as the lateral habenula and the rostromedial tegmental nucleus. Based on a plausible connectivity and realistic learning rules, this neuro-computational model reproduces several experimental observations, such as the progressive cancellation of dopaminergic bursts at reward delivery, the appearance of bursts at the onset of reward-predicting cues, and the influence of reward magnitude on activity in the amygdala and ventral tegmental area. While associative learning occurs primarily in the amygdala, learning of the temporal relationship between the cue and the associated reward is implemented as a dopamine-modulated coincidence detection mechanism in the nucleus accumbens.
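
    A minimal temporal-difference sketch, independent of the paper's far richer network model, reproducing the hallmark finding mentioned above: with training, the prediction error (a stand-in for the dopaminergic burst) cancels at reward delivery and reappears at the cue. The trial structure and parameters are invented.

        import numpy as np

        T, cue, reward_t = 20, 5, 15   # trial length, cue time, reward time
        gamma, alpha = 1.0, 0.1        # discount factor and learning rate
        V = np.zeros(T)                # learned value of each post-cue time step

        for trial in range(500):
            delta = np.zeros(T)
            for t in range(T):
                r = 1.0 if t == reward_t else 0.0
                v = V[t] if t >= cue else 0.0           # no prediction before the cue
                v_prev = V[t - 1] if t - 1 >= cue else 0.0
                delta[t] = r + gamma * v - v_prev       # TD error at time t
                if t - 1 >= cue:
                    V[t - 1] += alpha * delta[t]        # learn the reward prediction

        print("prediction-error peak at t =", int(np.argmax(delta)))  # 5 (cue), not 15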

  2. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  3. A Computational Trust Model for Collaborative Ventures

    Directory of Open Access Journals (Sweden)

    Weigang Wang

    2012-01-01

    Full Text Available Problem statement: The conceptual notion of trust and its underlying computational methods has been an important issue for researchers in electronic communities. While the independent trust evaluation is suitable in certain circumstances, such unilateral process falls short in supporting mutual evaluation between partners. Perceived reputation, the depth and breadth of trust, Trust Perception (TP, Repeat Collaborators at a Threshold (RCT and a collective trust index (c index have all been defined to specify the optimal trust criteria. Approach: By taking the evaluator’s own trust level as a threshold to identify compatible partners, a mutual balance between excess and deficiency in trust has been addressed. Since the number of repeated collaborations which signify retested confidence is more straightforward to capture than the manually provided feedback ratings, we have developed computational definitions for the above-mentioned concepts. Results and Conclusion: The results from the experiments based on the eBay dataset shows that the c index can be used to classify PowerSellers into normally distributed and comprehensible categories that can facilitate mutual evaluation.

  4. Computational Modeling of Turbulent Spray Combustion

    NARCIS (Netherlands)

    Ma, L.

    2016-01-01

    The objective of the research presented in this thesis is development and validation of predictive models or modeling approaches of liquid fuel combustion (spray combustion) in hot-diluted environments, known as flameless combustion or MILD combustion. The goal is to combine good physical insight,

  5. Computational Modeling of Turbulent Spray Combustion

    NARCIS (Netherlands)

    Ma, L.

    2016-01-01

    The objective of the research presented in this thesis is development and validation of predictive models or modeling approaches of liquid fuel combustion (spray combustion) in hot-diluted environments, known as flameless combustion or MILD combustion. The goal is to combine good physical insight, a

  6. A computational model of cardiovascular physiology and heart sound generation.

    Science.gov (United States)

    Watrous, Raymond L

    2009-01-01

    A computational model of the cardiovascular system is described which provides a framework for implementing and testing quantitative physiological models of heart sound generation. The lumped-parameter cardiovascular model can be solved for the hemodynamic variables on which the heart sound generation process is built. Parameters of the cardiovascular model can be adjusted to represent various normal and pathological conditions, and the acoustic consequences of those adjustments can be explored. The combined model of the physiology of cardiovascular circulation and heart sound generation has promise for application in teaching, training and algorithm development in computer-aided auscultation of the heart.
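
    The paper's lumped-parameter model is considerably more detailed; as a minimal assumed illustration of the approach, the sketch below integrates a two-element Windkessel (invented compliance, resistance and inflow) to produce the kind of hemodynamic waveform on which heart sound generation would be built.

        import numpy as np

        # Two-element Windkessel: C dP/dt = Q(t) - P/R, with pulsatile inflow Q.
        C, R = 1.0, 1.2            # compliance (mL/mmHg), resistance (mmHg*s/mL)
        dt, T = 0.001, 3.0         # time step and duration in seconds
        t = np.arange(0.0, T, dt)

        def inflow(ti, period=0.8, systole=0.3, peak=400.0):
            """Half-sine ejection during systole, zero flow in diastole (mL/s)."""
            phase = ti % period
            return peak * np.sin(np.pi * phase / systole) if phase < systole else 0.0

        P = np.empty_like(t)
        P[0] = 80.0                # initial arterial pressure, mmHg
        for i in range(1, len(t)):
            P[i] = P[i - 1] + dt * (inflow(t[i - 1]) - P[i - 1] / R) / C

        half = len(t) // 2         # discard the initial transient
        print(f"pressure range: {P[half:].min():.0f}-{P[half:].max():.0f} mmHg")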

  7. Computational model for Halorhodopsin photocurrent kinetics

    Science.gov (United States)

    Bravo, Jaime; Stefanescu, Roxana; Talathi, Sachin

    2013-03-01

    Optogenetics is a rapidly developing novel optical stimulation technique that employs light-activated ion channels to excite (using channelrhodopsin (ChR)) or suppress (using halorhodopsin (HR)) impulse activity in neurons with high temporal and spatial resolution. This technique holds enormous potential to externally control activity states in neuronal networks. The channel kinetics of ChR and HR are well understood and amenable to mathematical modeling. Significant progress has been made in recent years to develop models for ChR channel kinetics. To date, however, there is no model to mimic photocurrents produced by HR. Here, we report the first model developed for HR photocurrents, based on a four-state model of the HR photocurrent kinetics. The model provides an excellent fit (root-mean-square error of 3.1862×10⁻⁴) to an empirical profile of experimentally measured HR photocurrents. In combination, mathematical models for ChR and HR photocurrents can provide effective means to design and test light-based control systems to regulate neural activity, which in turn may have implications for the development of novel light-based stimulation paradigms for brain disease control. I would like to thank the University of Florida and the Physics Research Experience for Undergraduates (REU) program, funded through NSF DMR-1156737. This research was also supported through start-up funds provided to Dr. Sachin Talathi.
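
    The paper's fitted rate constants are not given in the abstract, so the sketch below integrates a generic four-state scheme (two closed states C1 and C2, two open states O1 and O2) with hypothetical light-gated rates, as a master equation.

        import numpy as np

        def rates(light_on):
            k = np.zeros((4, 4))          # k[i, j]: rate of transition j -> i (1/ms)
            if light_on:
                k[1, 0] = 0.5             # C1 -> O1, light-driven opening
                k[3, 2] = 0.1             # C2 -> O2, light-driven opening
            k[2, 1] = 0.05                # O1 -> C2
            k[0, 3] = 0.02                # O2 -> C1
            k[0, 1] = 0.1                 # O1 -> C1, thermal closure
            return k

        p = np.array([1.0, 0.0, 0.0, 0.0])          # occupancies, start in C1
        cond = np.array([0.0, 1.0, 0.0, 0.6])       # relative conductance per state
        dt = 0.01                                    # ms
        for step in range(60000):                    # 600 ms, light on for 300 ms
            k = rates(light_on=step * dt < 300.0)
            p += dt * (k @ p - k.sum(axis=0) * p)    # dp/dt from the master equation
        print(f"relative photocurrent at 600 ms: {cond @ p:.4f}")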

  8. Analysis of multi-domain hypothetical proteins containing iron-sulphur clusters and FAD ligands reveals Rieske dioxygenase activity, suggesting their plausible roles in bioremediation.

    Science.gov (United States)

    Sathyanarayanan, Nitish; Nagendra, Holenarasipur Gundurao

    2012-01-01

    'Conserved hypothetical' proteins pose a challenge not just for functional genomics, but also for biology in general. As long as there are hundreds of conserved proteins with unknown function in model organisms such as Escherichia coli, Bacillus subtilis or Saccharomyces cerevisiae, any discussion towards a 'complete' understanding of these biological systems will remain wishful thinking. In silico approaches show great promise for appreciating the plausible roles of these hypothetical proteins. The majority of genomic proteins, two-thirds in unicellular organisms and more than 80% in metazoa, are multi-domain proteins created as a result of gene duplication events. Aromatic ring-hydroxylating dioxygenases, also called Rieske dioxygenases (RDOs), are a class of multi-domain proteins that catalyze the initial step in microbial aerobic degradation of many aromatic compounds. The investigations here address the computational characterization of hypothetical proteins containing Ferredoxin and Flavodoxin signatures. A consensus sequence of each class of oxidoreductase was obtained by a phylogenetic analysis, involving clustering methods based on evolutionary relationship. A synthetic sequence was developed by combining the consensus, which was used as the basis to search for homologs via BLAST. The exercise yielded 129 multi-domain hypothetical proteins containing both 2Fe-2S (Ferredoxin) and FNR (Flavodoxin) domains. In the current study, 40 proteins with an N-terminal 2Fe-2S domain and a C-terminal FNR domain are characterized through homology modelling and docking exercises, which suggest dioxygenase activity and indicate plausible roles in the degradation of aromatic moieties.
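
    The study's consensus sequences come from a phylogeny-informed analysis; as a much simpler stand-in for that step, the sketch below takes a column-majority consensus over a toy alignment (all sequences invented).

        from collections import Counter

        def consensus(aligned):
            """Most frequent residue in each column of an equal-length alignment."""
            return "".join(Counter(col).most_common(1)[0][0] for col in zip(*aligned))

        ferredoxin_like = ["MCKEF", "MCREF", "MCKDF", "MCKEF"]   # hypothetical homologs
        print(consensus(ferredoxin_like))                        # -> "MCKEF"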

  9. COMPUTATION MODELING OF TCDD DISRUPTION OF B CELL TERMINAL DIFFERENTIATION

    Science.gov (United States)

    In this study, we established a computational model describing the molecular circuit underlying B cell terminal differentiation and how TCDD may affect this process by impinging upon various molecular targets.

  10. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  11. PETRI NET MODELING OF COMPUTER VIRUS LIFE CYCLE

    African Journals Online (AJOL)

    Dr Obe

    Dynamic system analysis is applied to model the virus life cycle, and the model is simulated. … Successful applications of Petri nets include distributed database systems and communication protocols. …

  12. An analysis of symbolic linguistic computing models in decision making

    Science.gov (United States)

    Rodríguez, Rosa M.; Martínez, Luis

    2013-01-01

    It is common that experts involved in complex real-world decision problems use natural language to express their knowledge in uncertain frameworks. Language is inherently vague, hence probabilistic decision models are not very suitable in such cases. Therefore, other tools such as fuzzy logic and fuzzy linguistic approaches have been successfully used to model and manage such vagueness. The use of linguistic information implies operating with this type of information, i.e., processes of computing with words (CWW). Different schemes have been proposed to deal with those processes, and diverse symbolic linguistic computing models have been introduced to accomplish the linguistic computations. In this paper, we overview the relationship between decision making and CWW, and focus on symbolic linguistic computing models that have been widely used in linguistic decision making, analysing whether all of them can be considered inside the CWW paradigm.
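
    One widely used symbolic model covered by such analyses is the 2-tuple linguistic representation model, which carries an aggregated value as a linguistic term plus a symbolic translation so that rounding loses no information. A minimal sketch with an invented term set:

        TERMS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

        def to_two_tuple(beta):
            i = round(beta)              # index of the closest linguistic term
            return TERMS[i], beta - i    # symbolic translation alpha in [-0.5, 0.5)

        def from_two_tuple(term, alpha):
            return TERMS.index(term) + alpha

        # Aggregate three invented expert opinions by averaging term indices.
        beta = sum(TERMS.index(t) for t in ["low", "medium", "high"]) / 3
        print(to_two_tuple(beta))                           # -> ('medium', 0.0)
        assert from_two_tuple(*to_two_tuple(beta)) == beta  # the round trip is lossless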

  13. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  14. Computational modelling in materials at the University of the North

    CSIR Research Space (South Africa)

    Ngoepe, PE

    2005-09-01

    Full Text Available The authors review computational modelling studies in materials resulting from the National Research Foundation-Royal Society collaboration. Initially, investigations were confined to transport and defect properties in fluorine and oxygen ion...

  15. Recent Applications of Hidden Markov Models in Computational Biology

    Institute of Scientific and Technical Information of China (English)

    Khar Heng Choo; Joo Chuan Tong; Louxin Zhang

    2004-01-01

    This paper examines recent developments and applications of Hidden Markov Models (HMMs) to various problems in computational biology, including multiple sequence alignment, homology detection, protein sequences classification, and genomic annotation.
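
    At the core of all these applications is the same machinery; as a minimal illustration, the sketch below runs the forward algorithm for the likelihood of an observed sequence under a toy two-state DNA model (all probabilities invented).

        import numpy as np

        start = np.array([0.5, 0.5])         # initial state distribution
        trans = np.array([[0.9, 0.1],        # trans[i, j] = P(next = j | now = i)
                          [0.2, 0.8]])
        emit = {"A": np.array([0.4, 0.1]), "C": np.array([0.1, 0.4]),
                "G": np.array([0.1, 0.4]), "T": np.array([0.4, 0.1])}

        def forward_likelihood(seq):
            alpha = start * emit[seq[0]]         # joint prob. of state and 1st symbol
            for symbol in seq[1:]:
                alpha = (alpha @ trans) * emit[symbol]
            return alpha.sum()                   # total likelihood of the sequence

        print(forward_likelihood("ACGCGT"))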

  16. Computational Modeling, Formal Analysis, and Tools for Systems Biology.

    Science.gov (United States)

    Bartocci, Ezio; Lió, Pietro

    2016-01-01

    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest in systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science.

  17. Towards diagnostic model calibration and evaluation: Approximate Bayesian computation

    NARCIS (Netherlands)

    Vrugt, J.A.; Sadegh, M.

    2013-01-01

    The ever increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff and root water uptake.
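
    A minimal sketch of the ABC idea, with a toy one-parameter simulator standing in for a hydrologic model: parameter draws are kept whenever the simulated summary statistic lands within a tolerance of the observed one, replacing an explicit likelihood.

        import numpy as np

        rng = np.random.default_rng(1)
        observed = 2.0                            # observed summary statistic

        def simulate(theta):
            """Toy stand-in for a hydrologic model plus measurement noise."""
            return theta + rng.normal(scale=0.2)

        tolerance, accepted = 0.1, []
        while len(accepted) < 1000:
            theta = rng.uniform(0.0, 5.0)         # draw from the prior
            if abs(simulate(theta) - observed) <= tolerance:
                accepted.append(theta)            # keep draws that match the data

        post = np.array(accepted)
        print(f"approximate posterior: {post.mean():.2f} +/- {post.std():.2f}")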

  18. Thole's interacting polarizability model in computational chemistry practice

    NARCIS (Netherlands)

    deVries, AH; vanDuijnen, PT; Zijlstra, RWJ; Swart, M

    1997-01-01

    Thole's interacting polarizability model to calculate molecular polarizabilities from interacting atomic polarizabilities is reviewed and its major applications in computational chemistry are illustrated. The applications include prediction of molecular polarizabilities, use in classical expressions

  19. ON GLOBAL STABILITY OF A NONRESIDENT COMPUTER VIRUS MODEL

    Institute of Scientific and Technical Information of China (English)

    Yoshiaki MUROYA; Huaixing LI; Toshikazu KUNIYA

    2014-01-01

    In this paper, we establish new sufficient conditions for the infected equilibrium of a nonresident computer virus model to be globally asymptotically stable. Our results extend two kind of known results in recent literature.

  20. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...

  1. Emotion in Music: representation and computational modeling

    NARCIS (Netherlands)

    Aljanaki, A.

    2016-01-01

    Music emotion recognition (MER) deals with music classification by emotion using signal processing and machine learning techniques. Emotion ontology for music is not well established yet. Musical emotion can be conceptualized through various emotional models: categorical, dimensional, or

  2. Computational Models for Analysis of Illicit Activities

    DEFF Research Database (Denmark)

    Nizamani, Sarwat

    Numerous illicit activities happen in our society which, from time to time, affect the population by harming individuals directly or indirectly. Researchers from different disciplines have contributed to developing strategies to analyze such activities in order to help law enforcement agents. … These models include a model for analyzing the evolution of terrorist networks; a text classification model for detecting suspicious text and identifying the suspected authors of anonymous emails; and a semantic analysis model for news reports, which may help analyze illicit activities in a certain area … with location and temporal information. For the network evolution, the hierarchical agglomerative clustering approach has been applied to terrorist networks as case studies. The networks' evolutions show how individual actors who are initially isolated from each other are converted into small groups, which …

  3. Computational model of cellular metabolic dynamics

    DEFF Research Database (Denmark)

    Li, Yanjun; Solomon, Thomas; Haus, Jacob M

    2010-01-01

    Identifying the mechanisms by which insulin regulates glucose metabolism in skeletal muscle is critical to understanding the etiology of insulin resistance and type 2 diabetes. Our knowledge of these mechanisms is limited by the difficulty of obtaining in vivo intracellular data. To quantitatively distinguish significant transport and metabolic mechanisms from limited experimental data, we developed a physiologically based, multiscale mathematical model of cellular metabolic dynamics in skeletal muscle. The model describes mass transport and metabolic processes, including distinctive processes … intracellular metabolite concentrations and patterns of glucose disposal. Model variations were simulated to investigate three alternative mechanisms to explain insulin enhancements: Model 1 (M.1), simple mass action; M.2, insulin-mediated activation of key metabolic enzymes (i.e., hexokinase, glycogen synthase) …

  4. Computer modelling of granular material microfracturing

    CSIR Research Space (South Africa)

    Malan, DF

    1995-08-15

    Full Text Available Microscopic observations indicate that intra- and transgranular fracturing are ubiquitous processes in the damage of rock fabrics. Extensive modelling of intergranular fracturing has been carried out previously using the distinct-element approach...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shift Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w

  6. Computational modelling of the impact of AIDS on business.

    Science.gov (United States)

    Matthews, Alan P

    2007-07-01

    An overview of computational modelling of the impact of AIDS on business in South Africa, with a detailed description of the AIDS Projection Model (APM) for companies, developed by the author, and suggestions for further work. Computational modelling of the impact of AIDS on business in South Africa requires modelling of the epidemic as a whole, and of its impact on a company. This paper gives an overview of epidemiological modelling, with an introduction to the Actuarial Society of South Africa (ASSA) model, the most widely used such model for South Africa. The APM produces projections of HIV prevalence, new infections, and AIDS mortality on a company, based on the anonymous HIV testing of company employees, and projections from the ASSA model. A smoothed statistical model of the prevalence test data is computed, and then the ASSA model projection for each category of employees is adjusted so that it matches the measured prevalence in the year of testing. FURTHER WORK: Further techniques that could be developed are microsimulation (representing individuals in the computer), scenario planning for testing strategies, and models for the business environment, such as models of entire sectors, and mapping of HIV prevalence in time and space, based on workplace and community data.

  7. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in human-computer interaction.

  8. Theoretic computing model of combustion process of asphalt smoke

    Institute of Scientific and Technical Information of China (English)

    HUANG Rui; CHAI Li-yuan; HE De-wen; PENG Bing; WANG Yun-yan

    2005-01-01

    Based on data and methods provided in the research literature, a discrete mathematical model of the combustion process of asphalt smoke is established through theoretical analysis. Through computer programming, the dynamic combustion process of asphalt smoke is calculated to simulate an experimental model. The computational results show that the temperature and the concentration of asphalt smoke influence its burning temperature in an approximately linear manner. The quantity of fuel consumed to ignite the asphalt smoke needs to be determined from these two factors.

  9. A Situative Space Model for Mobile Mixed-Reality Computing

    DEFF Research Database (Denmark)

    Pederson, Thomas; Janlert, Lars-Erik; Surie, Dipak

    2011-01-01

    This article proposes a situative space model that links the physical and virtual realms and sets the stage for complex human-computer interaction defined by what a human agent can see, hear, and touch at any given point in time.

  10. Language acquisition and implication for language change: A computational model.

    OpenAIRE

    Clark, Robert A.J.

    1997-01-01

    Computer modeling techniques, when applied to language acquisition problems, give an often unrealized insight into the diachronic change that occurs in language over successive generations. This paper shows that using assumptions about language acquisition to model successive generations of learners in a computer simulation can have a drastic effect on the long term changes that occur in a language. More importantly, it shows that slight changes in the acquisition ...

  11. Cascade recursion models of computing the temperatures of underground layers

    Institute of Scientific and Technical Information of China (English)

    HAN; Liqun; BI; Siwen; SONG; Shixin

    2006-01-01

    An RBF neural network was used to construct computational models of the underground temperatures of different layers, using ground-surface parameters and the temperatures of various underground layers. Because series recursion models also enable researchers to use above-ground surface parameters to compute the temperatures of different underground layers, this method provides a new way of using thermal infrared remote sensing to monitor the suture zones of large areas of blocks and to research thermal anomalies in geologic structures.
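
    A minimal sketch of one stage of such a cascade, assuming a Gaussian RBF network that maps surface parameters to the temperature of a single layer (synthetic data); in a cascade, each layer's prediction would feed the model of the next layer.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(50, 2))          # surface parameters
        y = 10.0 + 5.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=0.1, size=50)

        centers, gamma = X[:10], 2.0                     # RBF centres and width

        def design(X):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-gamma * d2)                   # Gaussian basis activations

        w, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # fit output weights
        rmse = np.sqrt(np.mean((design(X) @ w - y) ** 2))
        print(f"training RMSE: {rmse:.3f} degrees")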

  12. Mathematical and computational modeling in biology at multiple scales

    OpenAIRE

    Tuszynski, Jack A; Winter, Philip; White, Diana; Tseng, Chih-Yuan; Sahu, Kamlesh K.; Gentile, Francesco; Spasevska, Ivana; Omar, Sara Ibrahim; Nayebi, Niloofar; Churchill, Cassandra DM; Klobukowski, Mariusz; El-Magd, Rabab M Abou

    2014-01-01

    A variety of topics are reviewed in the area of mathematical and computational modeling in biology, covering the range of scales from populations of organisms to electrons in atoms. The use of maximum entropy as an inference tool in the fields of biology and drug discovery is discussed. Mathematical and computational methods and models in the areas of epidemiology, cell physiology and cancer are surveyed. The technique of molecular dynamics is covered, with special attention to force fields f...

  13. Models for the Discrete Berth Allocation Problem: A Computational Comparison

    DEFF Research Database (Denmark)

    Buhrkal, Katja; Zuglian, Sara; Røpke, Stefan

    In this paper we consider the problem of allocating arriving ships to discrete berth locations at container terminals. This problem is recognized as one of the most important processes for any container terminal. We review and describe the three main models of the discrete dynamic berth allocation problem, improve the performance of one model, and, through extensive numerical tests, compare all models from a computational perspective. The results indicate that a generalized set-partitioning model outperforms all other existing models.

  14. Models for the discrete berth allocation problem: A computational comparison

    DEFF Research Database (Denmark)

    Buhrkal, Katja Frederik; Zuglian, Sara; Røpke, Stefan;

    2011-01-01

    In this paper we consider the problem of allocating arriving ships to discrete berth locations at container terminals. This problem is recognized as one of the most important processes for any container terminal. We review and describe three main models of the discrete dynamic berth allocation problem, improve the performance of one model, and, through extensive numerical tests, compare all models from a computational perspective. The results indicate that a generalized set-partitioning model outperforms all other existing models.
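
    A minimal sketch of the set-partitioning idea behind the winning model: each column is one feasible (berth, set of ships served) assignment with a cost, and columns are chosen so that every ship is covered exactly once at minimum cost. Real instances use ILP solvers or column generation; the invented toy instance below is solved by brute force.

        from itertools import combinations

        ships = {"s1", "s2", "s3"}
        columns = [                          # (berth, ships served, cost) columns
            ("b1", {"s1"}, 4), ("b1", {"s1", "s2"}, 7), ("b1", {"s2"}, 5),
            ("b2", {"s3"}, 3), ("b2", {"s2", "s3"}, 6), ("b2", {"s1", "s3"}, 9),
        ]

        best = None
        for k in range(1, len(columns) + 1):
            for combo in combinations(columns, k):
                berths = [c[0] for c in combo]
                covered = [s for c in combo for s in c[1]]
                # feasibility: each berth used at most once, each ship exactly once
                if len(set(berths)) == len(berths) and sorted(covered) == sorted(ships):
                    cost = sum(c[2] for c in combo)
                    if best is None or cost < best[0]:
                        best = (cost, combo)

        print(best)   # -> cost 10, e.g. b1 serves {s1} and b2 serves {s2, s3}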

  15. Computer Models and Automata Theory in Biology and Medicine

    CERN Document Server

    Baianu, I C

    2004-01-01

    The applications of computers to biological and biomedical problem solving goes back to the very beginnings of computer science, automata theory [1], and mathematical biology [2]. With the advent of more versatile and powerful computers, biological and biomedical applications of computers have proliferated so rapidly that it would be virtually impossible to compile a comprehensive review of all developments in this field. Limitations of computer simulations in biology have also come under close scrutiny, and claims have been made that biological systems have limited information processing power [3]. Such general conjectures do not, however, deter biologists and biomedical researchers from developing new computer applications in biology and medicine. Microprocessors are being widely employed in biological laboratories both for automatic data acquisition/processing and modeling; one particular area, which is of great biomedical interest, involves fast digital image processing and is already established for rout...

  16. Plausible scenarios for the radiography profession in Sweden in 2025.

    Science.gov (United States)

    Björkman, B; Fridell, K; Tavakol Olofsson, P

    2017-11-01

    Radiography is a healthcare speciality with many technical challenges. Advances in engineering and information technology applications may continue to drive and be driven by radiographers. The world of diagnostic imaging is changing rapidly and radiographers must be proactive in order to survive. To ensure sustainable development, organisations have to identify future opportunities and threats in a timely manner and incorporate them into their strategic planning. Hence, the aim of this study was to analyse and describe plausible scenarios for the radiography profession in 2025. The study has a qualitative design with an inductive approach based on focus group interviews. The interviews were inspired by the Scenario-Planning method. Of the seven trends identified in a previous study, the radiographers considered two as the most uncertain scenarios that would have the greatest impact on the profession should they occur. These trends, labelled "Access to career advancement" and "A sufficient number of radiographers", were inserted into the scenario cross. The resulting four plausible future scenarios were: The happy radiographer, the specialist radiographer, the dying profession and the assembly line. It is suggested that "The dying profession" scenario could probably be turned in the opposite direction by facilitating career development opportunities for radiographers within the profession. Changing the direction would probably lead to a profession composed of "happy radiographers" who are specialists, proud of their profession and competent to carry out advanced tasks, in contrast to being solely occupied by "the assembly line". Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  17. Prebiotically plausible mechanisms increase compositional diversity of nucleic acid sequences.

    Science.gov (United States)

    Derr, Julien; Manapat, Michael L; Rajamani, Sudha; Leu, Kevin; Xulvi-Brunet, Ramon; Joseph, Isaac; Nowak, Martin A; Chen, Irene A

    2012-05-01

    During the origin of life, the biological information of nucleic acid polymers must have increased to encode functional molecules (the RNA world). Ribozymes tend to be compositionally unbiased, as is the vast majority of possible sequence space. However, ribonucleotides vary greatly in synthetic yield, reactivity and degradation rate, and their non-enzymatic polymerization results in compositionally biased sequences. While natural selection could lead to complex sequences, molecules with some activity are required to begin this process. Was the emergence of compositionally diverse sequences a matter of chance, or could prebiotically plausible reactions counter chemical biases to increase the probability of finding a ribozyme? Our in silico simulations using a two-letter alphabet show that template-directed ligation and high concatenation rates counter compositional bias and shift the pool toward longer sequences, permitting greater exploration of sequence space and stable folding. We verified experimentally that unbiased DNA sequences are more efficient templates for ligation, thus increasing the compositional diversity of the pool. Our work suggests that prebiotically plausible chemical mechanisms of nucleic acid polymerization and ligation could predispose toward a diverse pool of longer, potentially structured molecules. Such mechanisms could have set the stage for the appearance of functional activity very early in the emergence of life.
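
    The mechanism described above is easy to make concrete in code. The sketch below simulates a pool of two-letter sequences subject to biased monomer influx, biased extension, and ligation that favours compositionally balanced sequences (a crude stand-in for template-directed ligation); the alphabet, rates and weighting scheme are all illustrative assumptions, not the authors' simulation.

        import random

        BIAS = 0.8                       # assumed chance that extension adds an 'A'
        P_INFLUX, P_LIGATE = 0.35, 0.25  # illustrative event probabilities

        def balance(seq):
            # Compositional balance: 1.0 for a 50/50 sequence, 0.0 for a uniform one.
            return 1.0 - abs(seq.count("A") / len(seq) - 0.5) * 2.0

        def step(pool, rng):
            r = rng.random()
            if r < P_INFLUX or len(pool) < 2:
                # Influx of compositionally biased monomers.
                pool.append("A" if rng.random() < BIAS else "B")
            elif r < P_INFLUX + P_LIGATE:
                # Ligation: concatenate two sequences, preferring balanced ones.
                w = [balance(s) + 0.05 for s in pool]
                a, b = rng.choices(range(len(pool)), weights=w, k=2)
                if a != b:
                    sa, sb = pool[a], pool[b]
                    pool[:] = [s for i, s in enumerate(pool) if i not in (a, b)]
                    pool.append(sa + sb)
            else:
                # Biased non-enzymatic extension of a random sequence.
                i = rng.randrange(len(pool))
                pool[i] += "A" if rng.random() < BIAS else "B"

        rng = random.Random(1)
        pool = ["A", "B"] * 50
        for _ in range(10000):
            step(pool, rng)

        mean_len = sum(map(len, pool)) / len(pool)
        mean_bal = sum(balance(s) for s in pool) / len(pool)
        print(f"{len(pool)} sequences, mean length {mean_len:.1f}, mean balance {mean_bal:.2f}")

    Raising the ligation rate shifts the pool toward longer sequences, which is the qualitative effect the study reports.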

  18. Improved Computational Model of Grid Cells Based on Column Structure

    Institute of Scientific and Technical Information of China (English)

    Yang Zhou; Dewei Wu; Weilong Li; Jia Du

    2016-01-01

    To simulate the firing pattern of biological grid cells, this paper presents an improved computational model of grid cells based on column structure. In this model, the displacement along different directions is processed by a modulus operation, and the resulting remainder is associated with the firing rate of the grid cell. Compared with the original model, the improvements are that the base of the modulus operation is changed and that the firing rate within the firing field is encoded by a Gaussian-like function. Simulation validates that the firing pattern generated by the improved computational model is more consistent with biological characteristics than that of the original model. Moreover, although the firing pattern is badly degraded by cumulative positioning error, the computational model can still generate the regular hexagonal firing pattern when the real-time positioning results are corrected.
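
    A minimal sketch of the modulus-plus-Gaussian encoding may help. Combining remainders along three axes 60 degrees apart is one common way to obtain a hexagonal firing pattern; the spacing and tuning width below are illustrative choices, not parameters from the paper.

        import numpy as np

        SPACING = 0.5   # base of the modulus operation (grid spacing), illustrative
        SIGMA = 0.07    # width of the Gaussian-like tuning curve, illustrative

        def firing_rate(x, y):
            # Multiply Gaussian-like responses to the remainder of the displacement
            # along three directions 60 degrees apart; peaks form a hexagonal lattice.
            rate = 1.0
            for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
                d = x * np.cos(theta) + y * np.sin(theta)   # displacement on this axis
                r = np.mod(d, SPACING)                      # the modulus operation
                r = np.minimum(r, SPACING - r)              # distance to nearest field
                rate = rate * np.exp(-r**2 / (2 * SIGMA**2))
            return rate

        xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
        rates = firing_rate(xs, ys)                         # 2-D hexagonal rate map
        print(rates.shape, float(rates.max()))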

  19. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad

    2015-01-01

    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  20. Computational model of miniature pulsating heat pipes.

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Mario J.; Givler, Richard C.

    2013-01-01

    The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced, thermal ground-plane (TGP), which is a device, of planar configuration, that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two phase (liquid and its vapor) working fluid confined in a closed loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g. dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  1. Computational model of miniature pulsating heat pipes

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Mario J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Givler, Richard C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-01-01

    The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced, thermal ground-plane (TGP), which is a device, of planar configuration, that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two phase (liquid and its vapor) working fluid confined in a closed loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g. dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  2. Computational quantum chemistry and adaptive ligand modeling in mechanistic QSAR.

    Science.gov (United States)

    De Benedetti, Pier G; Fanelli, Francesca

    2010-10-01

    Drugs are adaptive molecules. They realize this peculiarity by generating different ensembles of prototropic forms and conformers that depend on the environment. Among the impressive amount of available computational drug discovery technologies, quantitative structure-activity relationship approaches that rely on computational quantum chemistry descriptors are the most appropriate to model adaptive drugs. Indeed, computational quantum chemistry descriptors are able to account for the variation of the intramolecular interactions of the training compounds, which reflect their adaptive intermolecular interaction propensities. This enables the development of causative, interpretive and reasonably predictive quantitative structure-activity relationship models, and, hence, sound chemical information finalized to drug design and discovery.

  3. Global Stability of an Epidemic Model of Computer Virus

    Directory of Open Access Journals (Sweden)

    Xiaofan Yang

    2014-01-01

    Full Text Available With the rapid popularization of the Internet, computers can enter or leave the Internet increasingly frequently. In fact, no antivirus software can detect and remove all sorts of computer viruses. This implies that viruses will persist on the Internet. To better understand the spread of computer viruses in these situations, a new propagation model is established and analyzed. The unique equilibrium of the model is globally asymptotically stable, in accordance with reality. A parameter analysis of the equilibrium is also conducted.
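
    The paper's propagation system is not reproduced in this record, but a minimal SIS-type model with node turnover (computers entering and leaving the network) shows how such a unique endemic equilibrium can be checked numerically; all rates here are illustrative assumptions.

        # Euler integration of a toy SIS model with node turnover:
        #   ds/dt = mu - beta*s*i + gamma*i - mu*s
        #   di/dt = beta*s*i - (gamma + mu)*i      (s, i are population fractions)
        beta, gamma, mu = 0.5, 0.1, 0.05           # infection, cure, turnover rates

        def simulate(i0=0.01, t_end=200.0, dt=0.01):
            s, i = 1.0 - i0, i0
            for _ in range(int(t_end / dt)):
                ds = mu - beta * s * i + gamma * i - mu * s
                di = beta * s * i - (gamma + mu) * i
                s, i = s + dt * ds, i + dt * di
            return i

        # For this toy system the endemic equilibrium is i* = 1 - (gamma + mu)/beta.
        print(f"simulated i = {simulate():.3f}, "
              f"predicted i* = {1 - (gamma + mu) / beta:.3f}")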

  4. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    -friendly system, which will make the model development process easier and faster and provide the way for unified and consistent model documentation. The modeller can use the template for their specific problem or to extend and/or adopt a model. This is based on the idea of model reuse, which emphasizes the use...... and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool, that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer...... aided methods and tools, that include procedures to perform model translation, model analysis, model verification/validation, model solution and model documentation; 4) model transfer – export/import to/from other application for further extension and application – several types of formats, such as XML...

  5. Computer models to study uterine activation at labour.

    Science.gov (United States)

    Sharp, G C; Saunders, P T K; Norman, J E

    2013-11-01

    Improving our understanding of the initiation of labour is a major aim of modern obstetric research, in order to better diagnose and treat pregnant women in which the process occurs abnormally. In particular, increased knowledge will help us identify the mechanisms responsible for preterm labour, the single biggest cause of neonatal morbidity and mortality. Attempts to improve our understanding of the initiation of labour have been restricted by the inaccessibility of gestational tissues to study during pregnancy and at labour, and by the lack of fully informative animal models. However, computer modelling provides an exciting new approach to overcome these restrictions and offers new insights into uterine activation during term and preterm labour. Such models could be used to test hypotheses about drugs to treat or prevent preterm labour. With further development, an effective computer model could be used by healthcare practitioners to develop personalized medicine for patients on a pregnancy-by-pregnancy basis. Very promising work is already underway to build computer models of the physiology of uterine activation and contraction. These models aim to predict changes and patterns in uterine electrical excitation during term labour. There have been far fewer attempts to build computer models of the molecular pathways driving uterine activation and there is certainly scope for further work in this area. The integration of computer models of the physiological and molecular mechanisms that initiate labour will be particularly useful.

  6. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    Science.gov (United States)

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  7. Computer Models Simulate Fine Particle Dispersion

    Science.gov (United States)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  8. Computational social network modeling of terrorist recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-10-01

    The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science made a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.

  9. Computational modelling of buckling of woven fabrics

    CSIR Research Space (South Africa)

    Anandjiwala, RD

    2006-02-01

    Full Text Available generalized model of a plain woven fabric and subsequently for modifying Huang's extension analysis. Although Kang et al. have utilized Huang's bilinearity in their model, the obvious inconsistency of applying the classical beam theory to the textile problem... couple which influences the behaviour of textile materials, such as yarns and fabrics. This implies that M_a = 0 and B = B*. Substituting these values into Equations (4) to (16) yields equations similar to those for the buckling of a strut...

  10. Scratch as a computational modelling tool for teaching physics

    Science.gov (United States)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-05-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling programs. In this article, we briefly discuss why Scratch could be a useful tool for computational modelling in the primary or secondary physics classroom, and we present practical examples of how it can be used to build a model.

  11. Assessment of weld thickness loss in offshore pipelines using computed radiography and computational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste - UEZO, Avenida Manuel Caldeira de Alvarenga, 1203, 23070-200, Rio de Janeiro, RJ (Brazil)], E-mail: scorrea@con.ufrj.br; Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Departamento de Geologia/IGEO, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Oliveira, D.F. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X. [PEN/COPPE-DNC/Poli-CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear, COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Marinho, C.; Camerini, C.S. [CENPES/PDEP/TMEC/PETROBRAS, Ilha do Fundao, Cidade Universitaria, 21949-900, Rio de Janeiro, RJ (Brazil)

    2009-10-15

    In order to guarantee the structural integrity of oil plants it is crucial to monitor the amount of weld thickness loss in offshore pipelines. However, in spite of its relevance, this parameter is very difficult to determine, due to both the large diameter of most pipes and the complexity of the multi-variable system involved. In this study, a computational modeling based on Monte Carlo MCNPX code is combined with computed radiography to estimate the weld thickness loss in large-diameter offshore pipelines. Results show that computational modeling is a powerful tool to estimate intensity variations in radiographic images generated by weld thickness variations, and it can be combined with computed radiography to assess weld thickness loss in offshore and subsea pipelines.
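
    The record above couples Monte Carlo (MCNPX) simulation with computed radiography; the first-order physics being exploited is the Beer-Lambert attenuation law, which relates transmitted intensity to the thickness of material in the beam. A small sketch with an assumed effective attenuation coefficient:

        import math

        MU = 0.06   # assumed effective linear attenuation coefficient for steel, 1/mm

        def thickness_loss(i_ref, i_meas, mu=MU):
            # Beer-Lambert: I = I0 * exp(-mu * t), so the intensity ratio between a
            # corroded region and a reference region gives the thickness difference.
            return math.log(i_meas / i_ref) / mu   # positive value => material loss

        # A weld region 20% brighter than the reference implies roughly:
        print(f"thickness loss ~ {thickness_loss(1.0, 1.2):.2f} mm")

    In the study itself, the intensity-to-thickness mapping is calibrated with full Monte Carlo transport rather than this single-coefficient approximation.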

  12. Computational Modeling of Fluorescence Loss in Photobleaching

    DEFF Research Database (Denmark)

    Hansen, Christian Valdemar; Schroll, Achim; Wüstner, Daniel

    2015-01-01

    Fluorescence loss in photobleaching (FLIP) is a modern microscopy method for visualization of transport processes in living cells. Although FLIP is widespread, an automated reliable analysis of image data is still lacking. This paper presents a framework for modeling and simulation of FLIP...

  13. Electricity load modelling using computational intelligence

    NARCIS (Netherlands)

    Ter Borg, R.W.

    2005-01-01

    As a consequence of the liberalisation of the electricity markets in Europe, market players have to continuously adapt their future supply to match their customers' demands. This poses the challenge of obtaining a predictive model that accurately describes electricity loads, which is the subject of this thesis.

  14. A Computational Model of Spatial Development

    Science.gov (United States)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

    Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when it is outside the field of view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands, and a forward model of motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-centre based. When given the ability of self-locomotion, the robot responds allocentrically.
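
    The forward-model component can be sketched with an idealized, noise-free kinematic predictor in place of the learned network: given a motor command, it updates the represented egocentric position of an object even while the object is out of view. The command format and values are invented for illustration.

        import numpy as np

        def forward_model(obj_xy, motor_cmd):
            # Predict the object's position in the robot's frame after executing
            # a motor command (dx, dy, dtheta): translate, then rotate the frame.
            dx, dy, dth = motor_cmd
            shifted = obj_xy - np.array([dx, dy])
            c, s = np.cos(-dth), np.sin(-dth)
            return np.array([c * shifted[0] - s * shifted[1],
                             s * shifted[0] + c * shifted[1]])

        # Mental tracking: the hidden object's represented position is updated
        # from motor commands alone, with no visual input.
        obj = np.array([1.0, 0.0])                    # 1 m straight ahead
        for cmd in [(0.2, 0.0, 0.0), (0.0, 0.0, np.pi / 2)]:
            obj = forward_model(obj, cmd)
        print(obj)   # where the robot now "believes" the object to be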

  15. Emotion in Music: representation and computational modeling

    NARCIS (Netherlands)

    Aljanaki, A.

    2016-01-01

    Music emotion recognition (MER) deals with music classification by emotion using signal processing and machine learning techniques. Emotion ontology for music is not well established yet. Musical emotion can be conceptualized through various emotional models: categorical, dimensional, or domain-spec

  16. Computational Failure Modeling of Lower Extremities

    Science.gov (United States)

    2012-01-01

    [Only fragments of the report's material-property table survive in this record: σc = 132 MPa, c = 0.1, ρ = 1810 kg/m3 [15]; trabecular bone: elastic with a maximum principal stress-based fracture model, E = 300 MPa, ν = 0.45.]

  17. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  18. Enabling Grid Computing resources within the KM3NeT computing model

    Science.gov (United States)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  19. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu/~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  20. PanDA's Role in ATLAS Computing Model Evolution

    CERN Document Server

    Maeno, T; The ATLAS collaboration

    2014-01-01

    During Run 1 at the Large Hadron Collider from 2009-2013, the ATLAS experiment successfully met the computing challenge of accumulating, managing and analyzing a volume of data now exceeding 140 PB, processed at over 100 sites around the world, and accessed by thousands of physicists. This accomplishment required nimbleness and flexibility in the distributed computing infrastructure, both hardware and software, as the operational computing model evolved during the run based on experience. A critical enabler for this evolution was PanDA, the ATLAS workload management system used for production and distributed analysis. PanDA's capabilities were utilized and extended to dynamically and intelligently distribute data and processing workloads across ATLAS resources based on data popularity and resource availability, thereby 'flattening' an originally hierarchical computing model, in order to use resources more efficiently. A new round of PanDA development now taking place will continue to evolve the model for bett...

  1. Computational challenges in modeling and simulating living matter

    Science.gov (United States)

    Sena, Alexandre C.; Silva, Dilson; Marzulo, Leandro A. J.; de Castro, Maria Clicia Stelling

    2016-12-01

    Computational modeling has been successfully used to help scientists understand physical and biological phenomena. Recent technological advances allow the simulation of larger systems, with greater accuracy. However, devising those systems requires new approaches and novel architectures, such as the use of parallel programming, so that the application can run in the new high performance environments, which are often computer clusters composed of different computation devices, such as traditional CPUs, GPGPUs, Xeon Phis and even FPGAs. It is expected that scientists take advantage of the increasing computational power to model and simulate more complex structures and even merge different models into larger and more extensive ones. This paper aims at discussing the challenges of using those devices to simulate such complex systems.

  2. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    Science.gov (United States)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
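
    The time-marching use of Faraday's law in a lumped model can be illustrated with a toy single-species cell; the capacity, current and initial inventory below are invented, not taken from the model described above.

        F = 96485.0   # Faraday constant, C/mol

        def march(i_amps, dt_s, steps, n_na=10.0, capacity_ah=100.0):
            # Faraday's law converts the cell current into moles of sodium reacted
            # per step; the state of charge follows the integrated charge.
            soc = 1.0
            for _ in range(steps):
                n_na -= i_amps * dt_s / F                      # mol Na consumed
                soc -= i_amps * dt_s / (capacity_ah * 3600.0)  # fraction of capacity
            return n_na, soc

        print(march(i_amps=20.0, dt_s=1.0, steps=3600))   # one hour at 20 A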

  3. Images as drivers of progress in cardiac computational modelling.

    Science.gov (United States)

    Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A; Bishop, Martin J; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2014-08-01

    Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved.

  4. Computer modeling of road bridge for simulation moving load

    Directory of Open Access Journals (Sweden)

    Miličić Ilija M.

    2016-01-01

    Full Text Available This paper presents the computational modelling of a single-span road truss bridge with the roadway on the upper chord. The calculation models were treated as planar and spatial girders made up of 1D finite elements, using the CAA applications Tower and Bridge Designer 2016 (2nd Edition). The computer simulations compare the effects of the moving load according to the recommendations of two standards, SRPS and AASHTO. A variant of the bridge structure modelled in Bridge Designer 2016 (2nd Edition) was modelled identically in the Tower environment. An important point for the selection of a computer application is that Bridge Designer 2016 (2nd Edition) is unable to treat the moving-load model prescribed by the national standard V600.

  5. A novel computer simulation for modeling grain growth

    Energy Technology Data Exchange (ETDEWEB)

    Chen, L.Q. (Pennsylvania State Univ., University Park, PA (United States). Dept. of Materials Science and Engineering)

    1995-01-01

    In this paper, the author proposes a new computer simulation model for investigating grain growth kinetics, born from recent work on the domain growth kinetics of a quenched system with many non-conserved order parameters. A key new feature of this model for studying grain growth is that the grain boundaries are diffuse, as opposed to previous mean-field and statistical theories and Monte Carlo simulations, which assumed that grain boundaries were sharp. Unlike the Monte Carlo simulations, in which grain boundaries are made up of kinks, grain boundaries in the continuum model are smooth. Below, he describes this model in detail, gives prescriptions for computer simulation, and then presents computer simulation results on a two-dimensional model system.
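
    The diffuse-boundary idea can be sketched with the now-standard multi-order-parameter Allen-Cahn (continuum field) update, in which each grain orientation carries its own non-conserved field; the free-energy form, coefficients and grid below are generic textbook choices rather than the paper's.

        import numpy as np

        rng = np.random.default_rng(0)
        N, Q = 64, 4                    # grid size and number of order parameters
        dt, L, kappa = 0.05, 1.0, 2.0   # time step, mobility, gradient coefficient
        eta = 0.01 * rng.standard_normal((Q, N, N))

        def laplacian(f):
            # Periodic five-point Laplacian, grid spacing 1.
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

        for _ in range(2000):
            sq_sum = np.sum(eta**2, axis=0)
            for q in range(Q):
                # Local part of dF/d(eta_q) for a common multi-grain free energy:
                # f' = -eta + eta^3 + 2 * eta * sum_{p != q} eta_p^2
                fprime = -eta[q] + eta[q]**3 + 2.0 * eta[q] * (sq_sum - eta[q]**2)
                eta[q] += -dt * L * (fprime - kappa * laplacian(eta[q]))

        grains = np.argmax(np.abs(eta), axis=0)   # label cells by dominant field
        print("grain areas:", np.bincount(grains.ravel(), minlength=Q))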

  6. Integrated Multiscale Modeling of Molecular Computing Devices

    Energy Technology Data Exchange (ETDEWEB)

    Gregory Beylkin

    2012-03-23

    Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

  7. Instability phenomena in plasticity: Modelling and computation

    Science.gov (United States)

    Stein, E.; Steinmann, P.; Miehe, C.

    1995-12-01

    We presented aspects and results related to the broad field of strain localization, with special focus on large strain elastoplastic response. We first re-examined issues related to the classification of discontinuities and the classical description of localization, with a particular emphasis on an Eulerian geometric representation. We touched on the problem of mesh objectivity and discussed results of a particular regularization method, namely the micropolar approach. Generally, regularization has to preserve ellipticity and to reflect the underlying physics. For example, ductile materials have to be modelled including viscous effects, whereas geomaterials are adequately described by the micropolar approach. Then we considered localization phenomena within solids undergoing large strain elastoplastic deformations. Here, we documented the influence of isotropic damage on the failure analysis. Next, the interesting influence of an orthotropic yield condition on the spatial orientation of localized zones has been studied. Finally, we investigated the localization condition for an algorithmic model of finite strain single crystal plasticity.

  8. Computational Intelligence in a Human Brain Model

    Directory of Open Access Journals (Sweden)

    Viorel Gaftea

    2016-06-01

    Full Text Available This paper focuses on current trends in the brain research domain and the current stage of development of research into software and hardware solutions, communication capabilities between human beings and machines, new technologies, nano-science and Internet of Things (IoT) devices. The proposed model of the human brain assumes a strong similarity between human intelligence and the thinking process in the game of chess. Tactical and strategic reasoning and the need to follow the rules of the chess game are all very similar to the activities of the human brain. The main objectives of a living being and of a chess player are the same: securing a position, surviving and eliminating adversaries. The brain resolves these goals; moreover, a being's movement, actions and speech are sustained by the five vital senses and equilibrium. The chess game strategy helps us understand the human brain better and replicate it more easily in the proposed 'Software and Hardware' (SAH) Model.

  10. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
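
    As a toy, in-process stand-in for such an information-sharing protocol: a central publisher pushes telemetry to subscribers (the client-server leg), and a subscriber can synthesize a derived value and share it back for its peers (the peer-to-peer leg). All parameter names and values are invented for illustration.

        from collections import defaultdict

        class InfoSharingBus:
            """Toy stand-in for an information-sharing protocol (in-process only;
            the real system spans workstations)."""
            def __init__(self):
                self.subs = defaultdict(list)     # parameter name -> callbacks
            def subscribe(self, name, callback):
                self.subs[name].append(callback)
            def publish(self, name, value):       # the server-push leg
                for cb in list(self.subs[name]):
                    cb(name, value)

        bus = InfoSharingBus()

        def flight_controller(name, value):
            # Peer behaviour: derive a ground-based synthesized value and share it.
            bus.publish("cabin_pressure_ok", value > 95.0)

        bus.subscribe("cabin_pressure_kpa", flight_controller)
        bus.subscribe("cabin_pressure_ok", lambda n, v: print(n, "=", v))
        bus.publish("cabin_pressure_kpa", 98.2)   # telemetry arrives from the server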

  11. Hearing dummies: individualized computer models of hearing impairment.

    Science.gov (United States)

    Panda, Manasa R; Lecluyse, Wendy; Tan, Christine M; Jürgens, Tim; Meddis, Ray

    2014-10-01

    Objective: Our aim was to explore the usage of individualized computer models to simulate hearing loss based on detailed psychophysical assessment and to offer hypothetical diagnoses of the underlying pathology. Individualized computer models of normal and impaired hearing were constructed and evaluated using the psychophysical data obtained from human listeners. Computer models of impaired hearing were generated to reflect the hypothesized underlying pathology (e.g. dead regions, outer hair cell dysfunction, or reductions in endocochlear potential). These models were evaluated in terms of their ability to replicate the original patient data. Auditory profiles were measured for two normal and five hearing-impaired listeners using a battery of three psychophysical tests (absolute thresholds, frequency selectivity, and compression). The individualized computer models were found to match the data. Useful fits to the impaired profiles could be obtained by changing only a single parameter in the model of normal hearing. Sometimes, however, it was necessary to include an additional dead region. The creation of individualized computer models of hearing loss can be used to simulate auditory profiles of impaired listeners and suggest hypotheses concerning the underlying peripheral pathology.

  12. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  13. Understanding Emergency Care Delivery through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2017-08-10

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This manuscript, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This manuscript discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo Simulation, System Dynamics modeling, Discrete-Event Simulation, and Agent Based Simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this manuscript, our goal is to enhance adoption of computer simulation, a set of methods which hold great promise in addressing emergency care organization and design challenges. This article is protected by copyright. All rights reserved.
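
    Of the four approaches named above, discrete-event simulation is the most compact to sketch. The toy below simulates a single-provider emergency department queue (an M/M/1 system) via the standard Lindley recursion and checks it against queueing theory; the arrival and service rates are illustrative, and none of this is from the conference itself.

        import random

        rng = random.Random(42)
        LAM, MU = 4.0, 5.0   # arrivals and services per hour, illustrative

        def mean_wait(n_patients=100000):
            # Discrete-event view of a FIFO single-server queue: each patient's
            # service start is the later of their arrival and the provider freeing up.
            clock, server_free_at, total_wait = 0.0, 0.0, 0.0
            for _ in range(n_patients):
                clock += rng.expovariate(LAM)          # next arrival time
                start = max(clock, server_free_at)
                total_wait += start - clock
                server_free_at = start + rng.expovariate(MU)
            return total_wait / n_patients

        # M/M/1 theory: mean wait in queue Wq = rho / (mu - lam) with rho = lam/mu.
        print(f"simulated {mean_wait():.3f} h, theory {(LAM/MU)/(MU-LAM):.3f} h")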

  14. Computational modeling of nuclear thermal rockets

    Science.gov (United States)

    Peery, Steven D.

    1993-01-01

    The topics are presented in viewgraph form and include the following: rocket engine transient simulation (ROCETS) system; ROCETS performance simulations composed of integrated component models; ROCETS system architecture significant features; ROCETS engineering nuclear thermal rocket (NTR) modules; ROCETS system easily adapts Fortran engineering modules; ROCETS NTR reactor module; ROCETS NTR turbomachinery module; detailed reactor analysis; predicted reactor power profiles; turbine bypass impact on system; and ROCETS NTR engine simulation summary.

  15. Computational Modeling of Lipid Metabolism in Yeast

    Directory of Open Access Journals (Sweden)

    Vera Schützhold

    2016-09-01

    Full Text Available Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism, which is characterized by many different species with only slight differences and by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. While all lipid species are treated as objects in the model, they can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach makes it possible to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation and different headgroups over time, and to analyze the effect of parameter changes, potential mutations in the catalyzing enzymes or provision of different precursors. Applied to yeast metabolism during one cell cycle period, we could analyze the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach makes it possible to treat the complexity of cellular lipid metabolism efficiently and to derive conclusions on the time- and location-dependent distributions of lipid species and their properties, such as saturation. It is widely applicable, easily extendable and will provide further insights into healthy and diseased states of cell metabolism.
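
    A toy rendering of the object-per-lipid, rule-based idea: each lipid is an explicit object, and stochastic steps apply whichever converting rules are applicable to a randomly chosen lipid. The attributes, rules and weights below are invented for illustration and are far simpler than the yeast model.

        import random

        rng = random.Random(7)
        pool = [{"carbons": 16, "double_bonds": 0, "head": None}
                for _ in range(1000)]              # one object per lipid molecule

        RULES = [   # (name, weight, applicability test, effect), all illustrative
            ("elongate",   1.0, lambda l: l["carbons"] < 18,
                                lambda l: l.update(carbons=l["carbons"] + 2)),
            ("desaturate", 0.5, lambda l: l["double_bonds"] < 1,
                                lambda l: l.update(double_bonds=1)),
            ("add_head",   0.8, lambda l: l["head"] is None,
                                lambda l: l.update(head="PC")),
        ]

        for _ in range(5000):
            lipid = rng.choice(pool)
            usable = [(w, fx) for _, w, test, fx in RULES if test(lipid)]
            if usable:
                _, effect = rng.choices(usable, weights=[w for w, _ in usable], k=1)[0]
                effect(lipid)

        saturated = sum(l["double_bonds"] == 0 for l in pool) / len(pool)
        print(f"fraction still saturated: {saturated:.2f}")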

  16. Bedrock Channel and Cave Evolution Models Based on Computational Fluid Dynamics

    Science.gov (United States)

    Perne, M.; Covington, M. D.; Cooper, M.

    2014-12-01

    Models of bedrock channel cross-section evolution typically rely on simple approximations of boundary shear stress to calculate erosion rates across the channel. While such models provide a useful tool for gaining general insight into channel dynamics, they also exhibit a narrower range of behaviors than seen in nature and scale experiments. Recent computational advances enable use of computational fluid dynamics (CFD) to relax many of the assumptions used in these simple models by simulating the full 3D flow field and resulting erosion. We have developed a model of bedrock channel evolution at the reach scale, using CFD, that alternates flow simulation steps with channel evolution steps and evolves the channel in time according to shear stresses calculated from the CFD runs. Caves provide an ideal field setting for studying bedrock channel dynamics, because long records of incision are often preserved in the form of channel widths, meander patterns, and sculpted forms, such as scallops, that indicate flow velocity and direction. However, most existing numerical models of cave formation investigate processes on larger scales, treat conduits as simple shapes, such as cylinders, and deal with the early stages of speleogenesis when sediment transport and erosion mechanisms other than dissolution do not have to be taken into account. Therefore, initial applications of the CFD model focus on the dynamics of cave channels, and particularly on the controls of channel width. While discharge, base level, sediment supply, and the ratio of dissolution to mechanical erosion, are likely to play important roles in determining channel width, we lack a quantitative understanding for the importance of these various factors. Notches in passage walls are thought to result from lateral erosion during periods of increased sediment load when the bed is armored. Modeling is used to check the plausibility of this explanation, and examine whether other mechanisms may also produce notches

  17. A risk computation model for environmental restoration activities

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.B. Jr.; Strenge, D.L.; Buck, J.W.

    1991-01-01

    A risk computation model useful in environmental restoration activities was developed for the US Department of Energy (DOE). This model, the Multimedia Environmental Pollutant Assessment System (MEPAS), can be used to evaluate effects of potential exposures over a broad range of regulatory issues including radioactive carcinogenic, nonradioactive carcinogenic, and noncarcinogenic effects. MEPAS integrates risk computation components. Release, transport, dispersion, deposition, exposure, and uptake computations are linked in a single system for evaluation of air, surface water, ground water, and overland flow transport. MEPAS uses standard computation approaches. Whenever available and appropriate, US Environmental Protection Agency guidance and models were used to facilitate compatibility and acceptance. MEPAS is a computational tool that can be used at several phases of an environmental restoration effort. At a preliminary stage in problem characterization, potential problems can be prioritized. As more data become available, MEPAS can provide an estimate of baseline risks or evaluate environmental monitoring data. In the feasibility stage, MEPAS can compute risk from alternative remedies. However, MEPAS is not designed to replace a detailed risk assessment of the selected remedy. For major problems, it will be appropriate to use a more detailed, risk computation tool for a detailed, site-specific evaluation of the selected remedy. 15 refs., 2 figs.
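
    MEPAS's own equations are not reproduced in this record, but the kind of linked exposure-to-risk arithmetic that such a system chains together can be sketched with the generic chronic-daily-intake form; every parameter value below is an illustrative default, not a MEPAS input.

        def carcinogenic_risk(c_mg_per_l, intake_l_day=2.0, ef_days=350,
                              ed_years=30, bw_kg=70.0, at_days=70 * 365,
                              slope_per_mg_kg_day=0.5):
            # Chronic daily intake (mg/kg-day) from drinking water, then risk:
            #   CDI = C * IR * EF * ED / (BW * AT);  risk = CDI * slope factor
            cdi = (c_mg_per_l * intake_l_day * ef_days * ed_years) / (bw_kg * at_days)
            return cdi * slope_per_mg_kg_day

        print(f"incremental lifetime cancer risk = {carcinogenic_risk(0.01):.2e}")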

  18. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over networks and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for efficiently solving production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key research models, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating the input, intermediate and output databases are no less important. The main tasks of such a balancing system are load and condition monitoring of compute nodes and the selection of a node to which a user's request is passed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
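
    The node-selection step can be as simple as a least-loaded rule with random tie-breaking; the sketch below is one generic example of such a 'predetermined algorithm', not the algorithm studied in the paper.

        import random

        def pick_node(nodes, rng=random):
            # nodes: list of (name, load) pairs reported by the monitoring task.
            lowest = min(load for _, load in nodes)
            return rng.choice([name for name, load in nodes if load == lowest])

        cluster = [("node-a", 0.72), ("node-b", 0.35), ("node-c", 0.35)]
        print("dispatch request to:", pick_node(cluster))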

  19. Advances in parallel computer technology for desktop atmospheric dispersion models

    Energy Technology Data Exchange (ETDEWEB)

    Bian, X.; Ionescu-Niscov, S.; Fast, J.D. [Pacific Northwest National Lab., Richland, WA (United States); Allwine, K.J. [Allwine Enviornmental Serv., Richland, WA (United States)

    1996-12-31

    Desktop models are those models used by analysts with varied backgrounds, for performing, for example, air quality assessment and emergency response activities. These models must be robust, well documented, have minimal and well controlled user inputs, and have clear outputs. Existing coarse-grained parallel computers can provide significant increases in computation speed in desktop atmospheric dispersion modeling without considerable increases in hardware cost. This increased speed will allow for significant improvements to be made in the scientific foundations of these applied models, in the form of more advanced diffusion schemes and better representation of the wind and turbulence fields. This is especially attractive for emergency response applications where speed and accuracy are of utmost importance. This paper describes one particular application of coarse-grained parallel computer technology to a desktop complex terrain atmospheric dispersion modeling system. By comparing performance characteristics of the coarse-grained parallel version of the model with the single-processor version, we will demonstrate that applying coarse-grained parallel computer technology to desktop atmospheric dispersion modeling systems will allow us to address critical issues facing future requirements of this class of dispersion models.

  20. Computer generation of structural models of amorphous Si and Ge

    Science.gov (United States)

    Wooten, F.; Winer, K.; Weaire, D.

    1985-04-01

    We have developed and applied a computer algorithm that generates realistic random-network models of a-Si with periodic boundary conditions. These are the first models to have correlation functions that show no serious discrepancy with experiment. The algorithm provides a much-needed systematic approach to model construction that can be used to generate models of a large class of amorphous materials.

  1. Vectorial Preisach-type model designed for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Stancu, Alexandru [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania)]. E-mail: alstancu@uaic.ro; Stoleriu, Laurentiu [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania); Andrei, Petru [Electrical and Computer Engineering, Florida State University, Tallahassee, FL (United States); Electrical and Computer Engineering, Florida A and M University, Tallahassee, FL (United States)

    2007-09-15

    Most phenomenological hysteresis models are scalar, while all magnetization processes are vectorial. Vector models, whether phenomenological or micromagnetic (physical), are time-consuming and sometimes difficult to implement. In this paper, we introduce a new vector Preisach-type model that uses micromagnetic results to simulate the magnetic response of a system of several tens of thousands of pseudo-particles. The model has a modular structure that allows easy implementation for parallel computing.
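
    A scalar sketch shows why Preisach-type models lend themselves to parallel computing: every rectangular hysteresis operator (hysteron) updates independently of all the others, so the ensemble can be split freely across processors. Thresholds and the field history below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10000
        alpha = rng.uniform(0.0, 1.0, n)           # up-switching fields
        beta = alpha - rng.uniform(0.0, 1.0, n)    # down-switching fields (<= alpha)
        state = -np.ones(n)                        # all hysterons start 'down'

        def apply_field(h):
            # Each hysteron flips up once h >= alpha and down once h <= beta;
            # otherwise it remembers its state. All updates are independent.
            state[h >= alpha] = 1.0
            state[h <= beta] = -1.0
            return state.mean()                    # normalized magnetization

        for h in (0.3, 0.8, -0.2, 0.5):
            print(f"H = {h:+.1f}  ->  M = {apply_field(h):+.3f}")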

  2. Paradox of integration -- a computational model

    CERN Document Server

    Krawczyk, Malgorzata J

    2016-01-01

    The paradoxical aspect of integration of a social group has been highlighted by Peter Blau (Exchange and Power in Social Life, Wiley and Sons, 1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire for approval, a sharp transition breaks all friendly relations. However, as was described by Blau, people with high status are inclined to bother more with acceptance of others; this is achieved by praising others and revealing their own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  3. Paradox of integration-A computational model

    Science.gov (United States)

    Krawczyk, Małgorzata J.; Kułakowski, Krzysztof

    2017-02-01

    The paradoxical aspect of integration of a social group has been highlighted by Blau (1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire for approval, a sharp transition breaks all friendly relations. However, as was described by Blau, people with high status are inclined to bother more with acceptance of others; this is achieved by praising others and revealing their own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  4. A Basic Understanding of Computable General Equilibrium (CGE) Model Analysis

    Directory of Open Access Journals (Sweden)

    Mardiyah Hayati

    2013-11-01

    Full Text Available This short paper aims to give a basic understanding of computable general equilibrium (CGE) modelling. It covers the history of CGE, the assumptions of CGE models, their strengths and weaknesses, and the construction of a simple CGE model for a closed economy. CGE models are well suited to assessing the impact of new policies because they build on general equilibrium theory, which describes the interrelations among markets in an economic system. The CGE model was introduced in the 1960s as the Johansen model and has since been extended into various models such as the ORANI model, the Global Trade Analysis Project (GTAP) model, and applied general equilibrium (AGE) models. In Indonesia, there are the CGE ORANI, Wayang, Indonesia-E3, and IRCGE models. CGE models assume perfect competition: consumers maximize utility, producers maximize profit, and firms operate under a zero-profit condition.

  5. Computer simulation modeling of abnormal behavior: a program approach.

    Science.gov (United States)

    Reilly, K D; Freese, M R; Rowe, P B

    1984-07-01

    A need for modeling abnormal behavior on a comprehensive, systematic basis exists. Computer modeling and simulation tools offer especially good opportunities to establish such a program of studies. Issues concern deciding which modeling tools to use, how to relate models to behavioral data, what level of modeling to employ, and how to articulate theory to facilitate such modeling. Four levels or types of modeling, two qualitative and two quantitative, are identified. Their properties are examined and interrelated to include illustrative applications to the study of abnormal behavior, with an emphasis on schizophrenia.

  6. Emerging Trends and Statistical Analysis in Computational Modeling in Agriculture

    Directory of Open Access Journals (Sweden)

    Sunil Kumar

    2015-03-01

    Full Text Available In this paper the authors describe emerging trends in computational modelling used in agriculture. Agricultural computational modelling that uses intelligence techniques to compute agricultural output from minimal input data is gaining momentum, since it saves time by cutting down multi-locational field trials as well as labour and other inputs. Developing locally suitable integrated farming systems (IFS) is a pressing need, particularly in India, where about 95% of farms are small or marginal holdings. Optimizing the size and number of the various enterprises in an IFS model for a particular agro-climate is an essential component of research to sustain agricultural productivity, not only to feed the country's burgeoning population but also to enhance nutritional security, farm returns, and quality of life. The literature on emerging trends in computational modelling applied to agriculture is reviewed and described below to clarify its mechanisms, behaviour, and applications. Computational modelling is increasingly effective for the design and analysis of systems, and it is an important tool for analysing the effects of different climate and management scenarios on farming systems and their interactions. The authors also highlight applications of computational modelling to integrated farming systems, crops, weather, soil, climate, and horticulture, and the statistical methods used in agriculture, which can guide agricultural researchers and the rural farming community in replacing some traditional techniques.

  7. Revisions to the hydrogen gas generation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Jerrell, J.W.

    1992-08-31

    Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith in the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool for determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on parameters such as permeability values, packaging arrangements, filter designs, and waste contents. This information is used to improve the accuracy of the model's predictions. Several modifications have also been made to the model to enlarge the scope of problems which can be analyzed; for instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.

  8. TsuPy: Computational robustness in Tsunami hazard modelling

    Science.gov (United States)

    Schäfer, Andreas M.; Wenzel, Friedemann

    2017-05-01

    Modelling wave propagation is the most essential part in assessing the risk and hazard of tsunami and storm surge events. For the computational assessment of the variability of such events, many simulations are necessary. Even today, most of these simulations are run on supercomputers due to the large amount of computation required. In this study, a simulation framework named TsuPy is introduced to quickly compute tsunami events on a personal computer. It uses the parallel processing power of GPUs to accelerate computation. The system is tailored to the application of robust tsunami hazard and risk modelling, and it links up to geophysical models to simulate event sources. The system is tested and validated using various benchmarks and real-world case studies. In addition, the robustness criterion is assessed based on a sensitivity study comparing the error impact of various model elements, e.g., topo-bathymetric resolution, knowledge of Manning friction parameters, and knowledge of the tsunami source itself. This sensitivity study is applied to inundation modelling of the 2011 Tohoku tsunami, showing that the major contributor to model uncertainty is in fact the representation of earthquake slip as part of the tsunami source profile. TsuPy provides a fast and reliable tool to quickly assess ocean hazards from tsunamis and thus builds the foundation for a globally uniform hazard and risk assessment for tsunamis.

  9. On the biological plausibility of Wind Turbine Syndrome.

    Science.gov (United States)

    Harrison, Robert V

    2015-01-01

    An emerging environmental health issue relates to potential ill-effects of wind turbine noise. There have been numerous suggestions that the low-frequency acoustic components in wind turbine signals can cause symptoms associated with vestibular system disorders, namely vertigo, nausea, and nystagmus. This constellation of symptoms has been labeled as Wind Turbine Syndrome, and has been identified in case studies of individuals living close to wind farms. This review discusses whether it is biologically plausible for the turbine noise to stimulate the vestibular parts of the inner ear and, by extension, cause Wind Turbine Syndrome. We consider the sound levels that can activate the semicircular canals or otolith end organs in normal subjects, as well as in those with preexisting conditions known to lower vestibular threshold to sound stimulation.

  10. Hamiltonian formulation of time-dependent plausible inference

    CERN Document Server

    Davis, Sergio

    2014-01-01

    Maximization of the path information entropy is a clear prescription for performing time-dependent plausible inference. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the Second Law of Thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the Second Law is a fundamental property of ...

  11. Alkaloids from Pandanus amaryllifolius: Isolation and Their Plausible Biosynthetic Formation.

    Science.gov (United States)

    Tsai, Yu-Chi; Yu, Meng-Lun; El-Shazly, Mohamed; Beerhues, Ludger; Cheng, Yuan-Bin; Chen, Lei-Chin; Hwang, Tsong-Long; Chen, Hui-Fen; Chung, Yu-Ming; Hou, Ming-Feng; Wu, Yang-Chang; Chang, Fang-Rong

    2015-10-23

    Pandanus amaryllifolius Roxb. (Pandanaceae) is used as a flavor and in folk medicine in Southeast Asia. The ethanolic crude extract of the aerial parts of P. amaryllifolius exhibited antioxidant, antibiofilm, and anti-inflammatory activities in previous studies. In the current investigation, the purification of the ethanolic extract yielded nine new compounds, including N-acetylnorpandamarilactonines A (1) and B (2); pandalizines A (3) and B (4); pandanmenyamine (5); pandamarilactones 2 (6) and 3 (7), and 5(E)-pandamarilactonine-32 (8); and pandalactonine (9). The isolated alkaloids, with either a γ-alkylidene-α,β-unsaturated-γ-lactone or γ-alkylidene-α,β-unsaturated-γ-lactam system, can be classified into five skeletons including norpandamarilactonine, indolizinone, pandanamine, pandamarilactone, and pandamarilactonine. A plausible biosynthetic route toward 1-5, 7, and 9 is proposed.

  12. Computational Models for Analysis of Illicit Activities

    DEFF Research Database (Denmark)

    Nizamani, Sarwat

    Numerous illicit activities happen in our society, which, from time to time, affect the population by harming individuals directly or indirectly. Researchers from different disciplines have contributed to developing strategies to analyze such activities, in order to help law enforcement agents...... traditional models for both of the tasks. Apart from these globally organized crimes and cybercrimes, specific world issues arise that affect geographic locations and take the form of bursts of public violence. These kinds of issues have received little attention from academics. These issues have...... to describe the phenomenon of contagious public outrage, which eventually leads to the spread of violence following a disclosure of some unpopular political decisions and/or activity. The results shed a new light on terror activity and provide some hints on how to curb the spreading of violence within...

  13. Computational Modeling of Cell Survival Using VHDL

    Directory of Open Access Journals (Sweden)

    Shruti Jain

    2010-01-01

    Full Text Available The model for cell survival has been implemented using the Very High Speed Integrated Circuit Hardware Description Language (VHDL) (Xilinx tool), taking three input signals: tumor necrosis factor-α (TNF), epidermal growth factor (EGF), and insulin. Cell survival is regulated by the interaction of five proteins, viz. PI3K, TNFR1, EGFR, IRS, and IKK, in a network; the absence of any one of them in the protein network leads to cell death. For the EGF input signal, the proteins MEK, ERK, Akt, Rac, and JNK are important for the regulation of cell survival. Similarly, for the TNF and insulin input signals, the proteins NFκB, Akt, XIAP, JNK, MAP3K, and MK2, and MEK, ERK, Akt, Rac, mTOR, and JNK, respectively, are important for the regulation of cell survival.

  14. Computer Forensics Field Triage Process Model

    Directory of Open Access Journals (Sweden)

    Marcus K. Rogers

    2006-06-01

    Full Text Available With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time - measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability that once the initial field triage is concluded, the system(s)/storage media be transported back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real world cases, and its investigative importance and pragmatic approach has been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model's forensic soundness, investigative support capabilities and practical considerations.

  15. Predicting room acoustical behavior with the ODEON computer model

    DEFF Research Database (Denmark)

    Naylor, Graham; Rindel, Jens Holger

    1992-01-01

    The computational bases of the ODEON model for room acoustics are described in a companion paper. The model is implemented for general use on a PC. In this paper, various technical features of the program relevant to the acoustical design process are presented. These include interactive...... for discrepancies are discussed. These discrepancies indicate areas in which the computational model has to be improved, and highlight some shortcomings of current room acoustical survey methods. The effects of various calculation parameters (e.g., number of rays, early reflection order) are also briefly considered.

  16. Generating Turing Machines by Use of Other Computation Models

    Directory of Open Access Journals (Sweden)

    Leszek Dubiel

    2003-01-01

    Full Text Available For each problem that can be solved there exists an algorithm, which can be described by a program for a Turing machine. Because this is a very simple model, such programs tend to be very complicated and hard for humans to analyse. The best practice for solving a given type of problem is to define a new model of computation that allows quick and easy programming, and then to emulate its operation with a Turing machine. This article shows how to define a model best suited to computation on natural numbers and defines a Turing machine that emulates its operation.
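
    The emulation idea can be illustrated in a few lines of Python. The sketch below is our own toy example, not the article's construction: a three-rule Turing machine that performs a natural-number operation (binary increment) which would be a single primitive in a higher-level model of computation.

        def run(tape, transitions, state="carry", blank="_"):
            """Simulate a one-tape Turing machine until it halts."""
            tape = list(tape)
            pos = len(tape) - 1                    # start at least significant bit
            while state != "halt":
                sym = tape[pos] if 0 <= pos < len(tape) else blank
                write, move, state = transitions[(state, sym)]
                if pos < 0:                        # grow the tape on demand
                    tape.insert(0, blank)
                    pos = 0
                elif pos >= len(tape):
                    tape.append(blank)
                tape[pos] = write
                pos += 1 if move == "R" else -1
            return "".join(tape).strip(blank)

        increment = {  # three-rule binary increment, head sweeping left
            ("carry", "1"): ("0", "L", "carry"),
            ("carry", "0"): ("1", "L", "halt"),
            ("carry", "_"): ("1", "L", "halt"),
        }
        print(run("1011", increment))              # prints 1100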

  17. A propagation model of computer virus with nonlinear vaccination probability

    Science.gov (United States)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi

    2014-01-01

    This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
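
    The flavor of such models can be conveyed with a toy Python integration. The equations and parameters below are a generic SIS-with-vaccination placeholder, not the paper's actual system; the vaccination term saturates in the infected fraction to make it nonlinear.

        def step(S, I, V, dt=0.01, beta=0.5, gamma=0.1, k=0.3):
            """One Euler step; S, I, V are population fractions."""
            vacc = k * I / (1.0 + I)          # nonlinear vaccination probability
            dS = -beta * S * I + gamma * I - vacc * S
            dI = beta * S * I - gamma * I
            dV = vacc * S
            return S + dS * dt, I + dI * dt, V + dV * dt

        S, I, V = 0.99, 0.01, 0.0
        for _ in range(5000):                 # integrate to t = 50
            S, I, V = step(S, I, V)
        print(round(S, 3), round(I, 3), round(V, 3))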

  18. A Separated Domain-Based Kernel Model for Trusted Computing

    Institute of Scientific and Technical Information of China (English)

    FANG Yanxiang; SHEN Changxiang; XU Jingdong; WU Gongyi

    2006-01-01

    This paper first gives an investigation of trusted computing on mainstream operating systems (OS). Based on the observations, it is pointed out that trusted computing cannot be achieved due to the lack of a separation mechanism for the components in mainstream OSs. In order to provide such a separation mechanism, this paper proposes a separated domain-based kernel model (SDBKM), and this model is verified by non-interference theory. By monitoring and simplifying the trust dependence between domains, this model can solve problems in trust measurement such as denial-of-service (DoS) attacks and host security, and reduce the overhead of measurement.

  19. Computational modeling of neural activities for statistical inference

    CERN Document Server

    Kolossa, Antonio

    2016-01-01

    This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERP) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are useful to compute probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neurosciences, but the book may also be beneficial for graduate students who want to specialize in this field.

  20. Dynamic Distribution Model with Prime Granularity for Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The dynamic distribution model is one of the best schemes for parallel volume rendering. However, in a homogeneous cluster system where the granularity is traditionally identical, all processors communicate almost simultaneously and the computation load may lose balance. To address these problems, a dynamic distribution model with prime granularity for parallel computing is presented. The granularities of the processors are pairwise relatively prime, and the related theory is introduced. High parallel performance can be achieved by minimizing network competition and using a load-balancing strategy that ensures all processors finish almost simultaneously. Based on the Master-Slave-Gleaner (MSG) scheme, the parallel splatting algorithm for volume rendering is used to test the model on an IBM Cluster 1350 system. The experimental results show that the model brings a considerable improvement in performance, including computation efficiency, total execution time, speed, and load balancing.
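
    The core idea, chunk sizes that are pairwise relatively prime so that workers do not all request new work at the same moment, can be sketched as follows. The task pool, the worker count, and the use of distinct primes as granularities are illustrative assumptions, not the record's exact scheme.

        from itertools import count

        def first_primes(n):
            """Return the first n primes by trial division."""
            primes = []
            for c in count(2):
                if all(c % p for p in primes):
                    primes.append(c)
                    if len(primes) == n:
                        return primes

        tasks = list(range(1000))        # e.g. scanlines in a splatting renderer
        granularity = first_primes(4)    # [2, 3, 5, 7] for four workers

        def next_chunk(worker, cursor):
            """Hand a worker its next chunk; cursor is the global task index.
            Distinct prime chunk sizes stagger the workers' requests."""
            size = granularity[worker]
            return tasks[cursor:cursor + size], cursor + size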

  1. Computer-Aided Template for Model Reuse, Development and Maintenance

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2014-01-01

    A template-based approach for model development is presented in this work. Based on a model decomposition technique, the computer-aided template concept has been developed. This concept is implemented as a software tool, which provides a user-friendly interface for following the workflow steps, ...

  2. Operation of the computer model for microenvironment solar exposure

    Science.gov (United States)

    Gillis, J. R.; Bourassa, R. J.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironmental solar exposure was developed to predict solar exposure to satellite surfaces which may shadow or reflect on one another. This document describes the technical features of the model as well as instructions for the installation and use of the program.

  3. A Multi-Agent Immunology Model for Security Computer

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a computer immunology model for computer security, whose main components are defined using the idea of Multi-Agent systems. It introduces the principles of the natural immune system and discusses the idea and characteristics of Multi-Agent systems. It gives a system model and describes the structure and function of each agent. The communication method between agents is also described.

  4. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    DEFF Research Database (Denmark)

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas

    2016-01-01

    Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns

  5. Development of computer simulation models for pedestrian subsystem impact tests

    NARCIS (Netherlands)

    Kant, R.; Konosu, A.; Ishikawa, H.

    2000-01-01

    The European Enhanced Vehicle-safety Committee (EEVC/WG10 and WG17) proposed three component subsystem tests for cars to assess pedestrian protection. The objective of this study is to develop computer simulation models of the EEVC pedestrian subsystem tests. These models are available to develop a

  6. Interrogative Model of Inquiry and Computer-Supported Collaborative Learning.

    Science.gov (United States)

    Hakkarainen, Kai; Sintonen, Matti

    2002-01-01

    Examines how the Interrogative Model of Inquiry (I-Model), developed for the purposes of epistemology and philosophy of science, could be applied to analyze elementary school students' process of inquiry in computer-supported learning. Suggests that the interrogative approach to inquiry can be productively applied for conceptualizing inquiry in…

  7. Modeling and Computer Simulation of AN Insurance Policy:

    Science.gov (United States)

    Acharyya, Muktish; Acharyya, Ajanta Bhowal

    We have developed a model for a life-insurance policy. In this model, the net gain is calculated by computer simulation for a particular type of lifetime distribution function. We observed that the net gain becomes maximal for a particular value of the upper age for the last premium.
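
    A toy Monte Carlo version of such a calculation is sketched below. The lifetime distribution, premium, benefit, and entry age are all invented for illustration; the paper's specific lifetime distribution and premium schedule are what produce the reported maximum at a particular upper age.

        import random

        def average_net_gain(upper_age, premium=500.0, benefit=20000.0,
                             entry_age=20, trials=100_000):
            """Estimate the insurer's mean net gain per policy by simulation."""
            total = 0.0
            for _ in range(trials):
                lifetime = random.gauss(70, 10)   # illustrative lifetime draw
                years_paid = max(0.0, min(lifetime, upper_age) - entry_age)
                total += years_paid * premium - benefit
            return total / trials

        for age in (50, 60, 70, 80):
            print(age, round(average_net_gain(age), 1))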

  8. Modelling Emission from Building Materials with Computational Fluid Dynamics

    DEFF Research Database (Denmark)

    Topp, Claus; Nielsen, Peter V.; Heiselberg, Per

    This paper presents a numerical model that by means of computational fluid dynamics (CFD) is capable of dealing with both pollutant transport across the boundary layer and internal diffusion in the source without prior knowledge of which is the limiting process. The model provides the concentration...

  9. Revisions to the hydrogen gas generation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Jerrell, J.W.

    1992-08-31

    Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith in the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool for determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on parameters such as permeability values, packaging arrangements, filter designs, and waste contents. This information is used to improve the accuracy of the model's predictions. Several modifications have also been made to the model to enlarge the scope of problems which can be analyzed; for instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.

  10. Computational modeling of aerosol deposition in respiratory tract: a review.

    Science.gov (United States)

    Rostami, Ali A

    2009-02-01

    This review article is intended to serve as an overview of the current status of the computational tools and approaches available for predicting respiratory-tract dosimetry of inhaled particulate matter. There are two groups of computational models available, depending on the intended use. The whole-lung models are designed to provide deposition predictions for the whole lung, from the oronasal cavities to the pulmonary region. The whole-lung models are generally semi-empirical and hence provide more reliable results, but only within the range of parameters used for the empirical correlations. The local deposition or computational fluid dynamics (CFD)-based models, on the other hand, utilize comprehensive theoretical and computational approaches but are often limited to the upper respiratory tract. They are based on theoretical principles and are applicable to a wider range of parameters, but are less accurate. One of the difficulties with modeling aerosol deposition in the human lung relates to the complexity of the airway geometry and the limited morphometric data available. Another difficulty concerns simulating the realistic physiological conditions of the lung environment. Furthermore, complex physical and chemical phenomena associated with dense and multicomponent aerosols complicate the modeling tasks. All of these issues are addressed in this review. The progress made in each area in the last three decades and the challenges ahead are discussed along with some suggestions for future direction. The following subjects are covered in this review: introduction, aerosol deposition mechanisms, elements of a computational model, respiratory-tract geometry models, whole-lung models, CFD based models, cigarette smoke deposition models, and conclusion.

  11. Basic definitions for discrete modeling of computer worms epidemics

    Directory of Open Access Journals (Sweden)

    P. Guevara

    2015-04-01

    Full Text Available Information technologies have evolved in such a way that communication between computers or hosts has become common, so much so that worldwide organizations (governments and corporations) depend on it; a prolonged failure of these computers would be catastrophic. Unfortunately, networks are attacked by malware such as viruses and worms that could collapse the system. This has motivated the formal study of computer worms and epidemics in order to develop strategies for prevention and protection; this is why in this paper, before analyzing epidemiological models, a set of formal definitions based on set theory and functions is proposed to describe 21 concepts used in the study of worms. These definitions provide a basis for future qualitative research on the behavior of computer worms and quantitative research on their epidemiological models.
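
    In the set-based spirit of these definitions, a discrete worm epidemic can be written directly over host sets. The following toy Python sketch (all parameters invented, and the uniform-scanning SIS dynamics our own simplification) partitions the host set into susceptible and infected subsets at each step.

        import random

        H = set(range(200))              # the host set
        infected = {0}                   # initial worm carrier
        p_scan, p_clean = 0.02, 0.05     # per-step scan and disinfection rates

        for t in range(30):
            # A susceptible host is hit if at least one infected host scans it.
            hit = {h for h in H - infected
                   if random.random() < 1 - (1 - p_scan) ** len(infected)}
            cured = {h for h in infected if random.random() < p_clean}
            infected = (infected | hit) - cured

        print(len(infected), "of", len(H), "hosts infected after 30 steps")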

  12. A Novel Computer Virus Propagation Model under Security Classification

    Directory of Open Access Journals (Sweden)

    Qingyi Zhu

    2017-01-01

    Full Text Available In reality, some computers have a specific security classification. For the sake of safety and cost, the security level of computers will be upgraded as threats in the network increase. Here we assume that there exists a threshold value which determines when countermeasures should be taken to raise the security level of a fraction of computers with a low security level. In some specific realistic environments, the propagation network can be regarded as fully interconnected. Inspired by these facts, this paper presents a novel computer virus dynamics model that considers the impact of security classification in a fully interconnected network. Using the theory of dynamical stability, the existence of equilibria and their stability conditions are analysed and proved, and the above optimal threshold value is given analytically. Then, some numerical experiments are made to justify the model. Besides, some discussions and antivirus measures are given.

  13. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady state actuator disk approximation. By transforming the unsteady rotor problem into a steady state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated, but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with the efficiencies and limitations of each method.

  14. A computational model to investigate assumptions in the headturn preference procedure

    Directory of Open Access Journals (Sweden)

    Christina eBergmann

    2013-10-01

    Full Text Available In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioural differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects of the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarisation and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.

  15. Linking Experimental Characterization and Computational Modeling in Microstructural Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, Melik Cumhar [Univ. of Pittsburgh, PA (United States)

    2002-06-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface dominated material properties, and finally, to verify experimental results with computer simulations. In order to accomplish this objective, we studied the grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an Aluminum film (120 μm thick) with a columnar grain structure from the Electron Backscattered Diffraction (EBSD) measurements. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar Aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity between grain growth experiments and anisotropic three-dimensional simulations.
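
    Interface-energy-driven grain growth of this kind is commonly simulated with Potts-type Monte Carlo dynamics. Below is a minimal 2-D, isotropic Python sketch of that technique; it is illustrative only, whereas the study itself uses 3-D simulations with experimentally measured, anisotropic boundary properties.

        import random

        N, Q = 64, 20                      # lattice size, number of orientations
        grid = [[random.randrange(Q) for _ in range(N)] for _ in range(N)]

        def boundary_energy(x, y, spin):
            """Each unlike nearest neighbor contributes one unit of energy."""
            nbrs = [((x + dx) % N, (y + dy) % N)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            return sum(grid[i][j] != spin for i, j in nbrs)

        def mc_step():
            x, y = random.randrange(N), random.randrange(N)
            new = random.randrange(Q)
            dE = boundary_energy(x, y, new) - boundary_energy(x, y, grid[x][y])
            if dE <= 0:                    # zero-temperature dynamics: grains coarsen
                grid[x][y] = new

        for _ in range(200_000):
            mc_step()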

  16. Linking Experimental Characterization and Computational Modeling in Microstructural Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, Melik Cumhur [Univ. of Pittsburgh, PA (United States)

    2002-06-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface dominated material properties, and finally, to verify experimental results with computer simulations. In order to accomplish this objective, we studied the grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an Aluminum film (120 μm thick) with a columnar grain structure from the Electron Backscattered Diffraction (EBSD) measurements. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar Aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity between grain growth experiments and anisotropic three-dimensional simulations.

  17. Linking Experimental Characterization and Computational Modeling in Microstructural Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, Melik Cumhur [Univ. of California, Berkeley, CA (United States)

    2002-06-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface dominated material properties, and finally, to verify experimental results with computer simulations. In order to accomplish this objective, we studied the grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an Aluminum film (120 μm thick) with a columnar grain structure from the Electron Backscattered Diffraction (EBSD) measurements. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar Aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity between grain growth experiments and anisotropic three-dimensional simulations.

  18. Phase Computations and Phase Models for Discrete Molecular Oscillators.

    OpenAIRE

    Demir, Alper; Şuvak, Önder

    2012-01-01

    Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for ...

  19. Special Issue: Big data and predictive computational modeling

    Science.gov (United States)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

    The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  20. Integrated Multiscale Modeling of Molecular Computing Devices

    Energy Technology Data Exchange (ETDEWEB)

    Weinan E

    2012-03-29

    The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, the time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.
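
    The Fermi operator expansion mentioned in item (3) approximates the density matrix rho = f(H) directly, without diagonalization. The dense-matrix Chebyshev sketch below in Python/NumPy illustrates the idea only: linear scaling requires exploiting sparsity and localization, the toy Hamiltonian and parameters are invented, and the report's pole expansion converges faster than a plain Chebyshev series.

        import numpy as np

        def fermi(x, mu=0.0, beta=8.0):
            """Fermi-Dirac occupation function."""
            return 1.0 / (1.0 + np.exp(beta * (x - mu)))

        def chebyshev_density_matrix(H, order=80):
            # Map the spectrum of H into [-1, 1] for the Chebyshev basis.
            lo, hi = np.linalg.eigvalsh(H)[[0, -1]]
            a, b = (hi - lo) / 2, (hi + lo) / 2
            Hs = (H - b * np.eye(len(H))) / a
            # Chebyshev coefficients of fermi(a*x + b), sampled at the nodes.
            k = np.arange(order)
            x = np.cos(np.pi * (k + 0.5) / order)
            f = fermi(a * x + b)
            c = 2.0 / order * np.cos(np.pi * np.outer(k, k + 0.5) / order) @ f
            c[0] /= 2
            # Accumulate rho via the Chebyshev three-term recurrence on Hs.
            Tm2, Tm1 = np.eye(len(H)), Hs
            rho = c[0] * Tm2 + c[1] * Tm1
            for n in range(2, order):
                Tm2, Tm1 = Tm1, 2 * Hs @ Tm1 - Tm2
                rho += c[n] * Tm1
            return rho

        H = np.diag(np.linspace(-2, 2, 8))     # toy Hamiltonian (eV-scale levels)
        rho = chebyshev_density_matrix(H)
        print(np.round(np.diag(rho), 3))       # occupations drop across mu = 0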