WorldWideScience

Sample records for theoretical computer model

  1. Category-theoretic models of algebraic computer systems

    Science.gov (United States)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems is proposed that provides a theoretical basis for mapping tasks onto these systems' architecture. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the residue ring whose operations are implemented in conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.
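
    The abstract does not spell out the enrichments it identifies, but the connection it studies can be illustrated with a hedged sketch: the Łukasiewicz many-valued connectives on the finite chain {0, 1, ..., n} can be expressed through ordinary saturated integer arithmetic of the kind an overflow-controlled arithmetic unit provides. Everything below is an illustration of that general idea, not code from the paper.

```python
# Illustrative only: Lukasiewicz connectives on the (n+1)-valued chain
# {0, 1, ..., n}, written with saturated (overflow-controlled) integer ops.

def t_norm(x: int, y: int, n: int) -> int:
    """Lukasiewicz strong conjunction: saturated subtraction max(0, x + y - n)."""
    return max(0, x + y - n)

def implication(x: int, y: int, n: int) -> int:
    """Lukasiewicz implication: saturated addition min(n, n - x + y)."""
    return min(n, n - x + y)

def negation(x: int, n: int) -> int:
    """Lukasiewicz negation: n - x."""
    return n - x

if __name__ == "__main__":
    n = 4  # a 5-valued logic
    for x in range(n + 1):
        for y in range(n + 1):
            # implication is definable from conjunction and negation
            assert implication(x, y, n) == negation(t_norm(x, negation(y, n), n), n)
    print("De Morgan-style definability checked on the 5-valued chain")
```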

  2. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    This thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic...

  3. Computability, complexity, and languages fundamentals of theoretical computer science

    CERN Document Server

    Davis, Martin D; Rheinboldt, Werner

    1983-01-01

    Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science provides an introduction to the various aspects of theoretical computer science. Theoretical computer science is the mathematical study of models of computation. This text is composed of five parts encompassing 17 chapters, and begins with an introduction to the use of proofs in mathematics and the development of computability theory in the context of an extremely simple abstract programming language. The succeeding parts demonstrate the performance of the abstract programming language using a macro expansion...

  4. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.

    Science.gov (United States)

    Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael

    2018-03-02

    The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training are rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). PPE combines elements of earlier models of learning and memory, including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We also implemented two other previously published computational models of the spacing effect and compared them to PPE, using the theoretic and applied criteria as guides. © 2018 Cognitive Science Society, Inc.

  5. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

    CERN Document Server

    Trovalusci, Patrizia

    2014-01-01

    The papers in this volume deal with materials science, theoretical mechanics, and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which have hitherto been treated in a phenomenological sense. The basic principles of multiscale modeling strategies are formulated for modern complex multiphase materials subjected to various types of mechanical and thermal loading and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and on the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

  6. A Representation-Theoretic Approach to Reversible Computation with Applications

    DEFF Research Database (Denmark)

    Maniotis, Andreas Milton

    Reversible computing is a sub-discipline of computer science that helps to understand the foundations of the interplay between physics, algebra, and logic in the context of computation. Its subjects of study are computational devices and abstract models of computation that satisfy the constraint of information conservation. Such machine models, which are known as reversible models of computation, have been examined both from a theoretical perspective and from an engineering perspective. While a bundle of many isolated successful findings and applications concerning reversible computing exists, there is still no uniform and consistent theory that is general in the sense of giving a model-independent account of the field...

  7. Theoretical Computer Science

    DEFF Research Database (Denmark)

    2002-01-01

    The proceedings contain 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...

  8. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for understanding the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulators on their computational and functional roles rather than on anatomical or chemical criteria. We review the main frameworks in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single-cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  9. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  10. Hybrid rocket engine, theoretical model and experiment

    Science.gov (United States)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid fuel regression rate. At first, we focus on theoretical models for the hybrid rocket motor and compare the results with experimental data already available from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next the paper focuses on the tribrid rocket motor concept, in which supplementary liquid fuel injection can improve thrust controllability. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. The stability coefficients obtained depend on the burning parameters, and the stability and command matrices are identified. The paper presents the input data of the model thoroughly, which ensures the reproducibility of the numerical results by independent researchers.

  11. Modeling Multibody Systems with Uncertainties. Part I: Theoretical and Computational Aspects

    International Nuclear Information System (INIS)

    Sandu, Adrian; Sandu, Corina; Ahmadian, Mehdi

    2006-01-01

    This study explores the use of generalized polynomial chaos theory for modeling complex nonlinear multibody dynamic systems in the presence of parametric and external uncertainty. The polynomial chaos framework has been chosen because it offers an efficient computational approach for the large, nonlinear multibody models of engineering systems of interest, where the number of uncertain parameters is relatively small, while the magnitude of uncertainties can be very large (e.g., vehicle-soil interaction). The proposed methodology allows the quantification of uncertainty distributions in both the time and frequency domains, and enables simulations of multibody systems to produce results with 'error bars'. The first part of this study presents the theoretical and computational aspects of the polynomial chaos methodology. Both unconstrained and constrained formulations of multibody dynamics are considered. Direct stochastic collocation is proposed as a less expensive alternative to the traditional Galerkin approach. It is established that stochastic collocation is equivalent to a stochastic response surface approach. We show that multi-dimensional basis functions are constructed as tensor products of one-dimensional basis functions and discuss the treatment of polynomial and trigonometric nonlinearities. Parametric uncertainties are modeled by finite-support probability densities. Stochastic forcings are discretized using truncated Karhunen-Loeve expansions. The companion paper, 'Modeling Multibody Dynamic Systems With Uncertainties. Part II: Numerical Applications', illustrates the use of the proposed methodology on a selected set of test problems. The overall conclusion is that despite its limitations, polynomial chaos is a powerful approach for the simulation of multibody systems with uncertainties.
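
    As a hedged illustration of direct stochastic collocation (a minimal sketch, not the authors' multibody code): a scalar response with one Gaussian uncertain parameter can be integrated against the probability measure using Gauss-Hermite nodes and cross-checked with Monte Carlo. The model problem and all parameter values below are invented for illustration.

```python
# Minimal stochastic collocation sketch: propagate an uncertain stiffness
# k ~ N(mu, sigma^2) through the response x(T) = cos(sqrt(k) * T) and
# estimate the mean response without any Galerkin projection.
import numpy as np

mu, sigma, T = 4.0, 0.2, 1.0

def response(k):
    return np.cos(np.sqrt(k) * T)

# Gauss-Hermite (probabilists') nodes/weights for the standard normal.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to a probability measure

k_nodes = mu + sigma * nodes               # map standard-normal nodes to k
mean_colloc = np.sum(weights * response(k_nodes))

# Monte Carlo reference
rng = np.random.default_rng(0)
mean_mc = response(rng.normal(mu, sigma, 200_000)).mean()
print(f"collocation mean: {mean_colloc:.6f}, Monte Carlo mean: {mean_mc:.6f}")
```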

  12. International Conference on Theoretical and Computational Physics

    CERN Document Server

    2016-01-01

    The International Conference on Theoretical and Computational Physics (TCP 2016) will be held from August 24 to 26, 2016 in Xi'an, China. The Conference will cover issues in theoretical and computational physics. It is dedicated to creating a forum for exchanging the latest research results and sharing advanced research methods. TCP 2016 will be an important platform for inspiring international and interdisciplinary exchange at the forefront of theoretical and computational physics. The Conference will bring together researchers, engineers, technicians and academicians from all over the world, and we cordially invite you to take this opportunity to join us for academic exchange and to visit the ancient city of Xi'an.

  13. Accelerator simulation and theoretical modelling of radiation effects (SMoRE)

    CERN Document Server

    2018-01-01

    This publication summarizes the findings and conclusions of the IAEA coordinated research project (CRP) on accelerator simulation and theoretical modelling of radiation effects, aimed at supporting Member States in the development of advanced radiation-resistant structural materials for implementation in innovative nuclear systems. This aim can be achieved through enhancement of both experimental neutron-emulation capabilities of ion accelerators and improvement of the predictive efficiency of theoretical models and computer codes. This dual approach is challenging but necessary, because outputs of accelerator simulation experiments need adequate theoretical interpretation, and theoretical models and codes need high dose experimental data for their verification. Both ion irradiation investigations and computer modelling have been the specific subjects of the CRP, and the results of these studies are presented in this publication which also includes state-of-the-art reviews of four major aspects of the project...

  14. Computational Biomechanics Theoretical Background and Biological/Biomedical Problems

    CERN Document Server

    Tanaka, Masao; Nakamura, Masanori

    2012-01-01

    Rapid developments have taken place in biological/biomedical measurement and imaging technologies as well as in computer analysis and information technologies. The increase in data obtained with such technologies invites the reader into a virtual world that represents realistic biological tissue or organ structures in digital form and allows for simulation and what is called “in silico medicine.” This volume is the third in a textbook series and covers both the basics of continuum mechanics of biosolids and biofluids and the theoretical core of computational methods for continuum mechanics analyses. Several biomechanics problems are provided for better understanding of computational modeling and analysis. Topics include the mechanics of solid and fluid bodies, fundamental characteristics of biosolids and biofluids, computational methods in biomechanics analysis/simulation, practical problems in orthopedic biomechanics, dental biomechanics, ophthalmic biomechanics, cardiovascular biomechanics, hemodynamics...

  15. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered, and the computers on the network are assumed to be equipped with antivirus software. A computer virus model is established. Through an analysis of the model, the disease-free and endemic equilibrium points are calculated, and the stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
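
    The paper's exact equations are not reproduced in this record; the sketch below is a generic SIR-style compartment model with the same ingredients (inflow of new machines, removal of old ones, and an antivirus cure rate), integrated to its endemic equilibrium. All rate constants are invented for illustration.

```python
# Generic susceptible/infected computer-virus sketch (not the paper's model):
# inflow b of new machines, removal rate mu, infection rate beta, cure rate gamma.
b, mu, beta, gamma = 1.0, 0.01, 0.002, 0.05

def rhs(S, I):
    dS = b - beta * S * I - mu * S + gamma * I
    dI = beta * S * I - (mu + gamma) * I
    return dS, dI

# Basic reproduction number for this sketch: R0 = beta * (b / mu) / (mu + gamma).
R0 = beta * (b / mu) / (mu + gamma)

S, I, dt = b / mu, 1.0, 0.01
for _ in range(200_000):               # forward Euler to the steady state
    dS, dI = rhs(S, I)
    S, I = S + dt * dS, I + dt * dI

print(f"R0 = {R0:.2f}, endemic state: S = {S:.1f}, I = {I:.1f}")
```

    With these illustrative constants R0 > 1, so the integration settles on the endemic equilibrium (here S = 30, I = 70 out of b/mu = 100 machines); driving R0 below 1, e.g. by raising gamma, makes the infected compartment die out instead.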

  16. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters

    International Nuclear Information System (INIS)

    Kolev, N.A.

    1981-07-01

    A mathematical model, based on three-group theory, for the theoretical computation of the calibration curves of neutron soil moisture probes with highly effective counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with high- or low-efficiency counters and central or end geometry, with or without linearization of the calibration curve. The use of two calculation variants and the printing of output data make the programme suitable not only for calibration but also for other research. The separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)

  17. Information-theoretic temporal Bell inequality and quantum computation

    International Nuclear Information System (INIS)

    Morikoshi, Fumiaki

    2006-01-01

    An information-theoretic temporal Bell inequality is formulated to contrast classical and quantum computations. Any classical algorithm satisfies the inequality, while quantum ones can violate it. Therefore, the violation of the inequality is an immediate consequence of the quantumness in the computation. Furthermore, this approach suggests a notion of temporal nonlocality in quantum computation

  18. Exploring Theoretical Computer Science Using Paper Toys (for kids)

    DEFF Research Database (Denmark)

    Valente, Andrea

    2004-01-01

    In this paper we propose the structure of an exploratory course in theoretical computer science intended for a broad range of students (and especially kids). The course is built on computational cards, a simple paper toy, in which playing cards are computational elements; computing machines can...

  19. Mathematical challenges from theoretical/computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-31

    The committee believes that this report has relevance and potentially valuable suggestions for a wide range of readers. Target audiences include: graduate departments in the mathematical and chemical sciences; federal and private agencies that fund research in the mathematical and chemical sciences; selected industrial and government research and development laboratories; developers of software and hardware for computational chemistry; and selected individual researchers. Chapter 2 of this report covers some history of computational chemistry for the nonspecialist, while Chapter 3 illustrates the fruits of some past successful cross-fertilization between mathematical scientists and computational/theoretical chemists. In Chapter 4 the committee has assembled a representative, but not exhaustive, survey of research opportunities. Most of these are descriptions of important open problems in computational/theoretical chemistry that could gain much from the efforts of innovative mathematical scientists, written so as to be accessible introductions to the nonspecialist. Chapter 5 is an assessment, necessarily subjective, of cultural differences that must be overcome if collaborative work is to be encouraged between the mathematical and the chemical communities. Finally, the report ends with a brief list of conclusions and recommendations that, if followed, could promote accelerated progress at this interface. Recognizing that bothersome language issues can inhibit prospects for collaborative research at the interface between distinctive disciplines, the committee has attempted throughout to maintain an accessible style, in part by using illustrative boxes, and has included at the end of the report a glossary of technical terms that may be familiar to only a subset of the target audiences listed above.

  20. Preface to special issue of selected papers from Theoretical, Experimental, and Computational Mechanics (TECM)

    DEFF Research Database (Denmark)

    Jabbari, Masoud; Sarlak Chivaee, Hamid; Hattel, Jesper Henri

    2017-01-01

    We are pleased to introduce this special issue of the Applied Mathematical Modelling journal with highlights from the Theoretical, Experimental, and Computational Mechanics Symposium (TECM-2015). This special issue consists of four rigorously selected papers originally presented at TECM-2015 as a part of the 13th International Conference of Numerical Analysis and Applied Mathematics 2015 (ICNAAM 2015), which was held on 23-29 September 2015 in Rhodes, Greece. The symposium attracted a broad range of international and local leaders in theoretical, experimental, and computational mechanics across various fields and applications. The symposium did an excellent job of outlining the current landscape of computational mechanics and its capabilities in solving complex industrial problems in the process industries, and we agree with the editor-in-chief of the journal that it is certainly worthwhile...

  1. GSTARS computer models and their applications, part I: theoretical development

    Science.gov (United States)

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  2. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    The purpose of the present paper is to present a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  3. Theoretical calculation possibilities of the computer code HAMMER

    International Nuclear Information System (INIS)

    Onusic Junior, J.

    1978-06-01

    With the aim of assessing the theoretical calculation possibilities of the computer code HAMMER, developed at the Savannah River Laboratory, an analysis of the critical cell assemblies of the kind utilized in PWR reactors is made. (L.F.S.) [pt

  4. Quantum Wells, Wires and Dots Theoretical and Computational Physics of Semiconductor Nanostructures

    CERN Document Server

    Harrison, Paul

    2011-01-01

    Quantum Wells, Wires and Dots, 3rd Edition is aimed at providing all the essential information, both theoretical and computational, in order that the reader can, starting from essentially nothing, understand how the electronic, optical and transport properties of semiconductor heterostructures are calculated. Completely revised and updated, this text is designed to lead the reader through a series of simple theoretical and computational implementations, and slowly build from solid foundations, to a level where the reader can begin to initiate theoretical investigations or explanations of their

  5. Theoretical models to predict the mechanical behavior of thick composite tubes

    Directory of Open Access Journals (Sweden)

    Volnei Tita

    2012-02-01

    This paper presents theoretical models (analytical formulations) to predict the mechanical behavior of thick composite tubes and shows how some parameters can influence this behavior. First, analytical formulations were developed for a pressurized tube made of composite material with a single thick ply and only one lamination angle. For this case, the stress distribution and the displacement fields are investigated as functions of different lamination angles and reinforcement volume fractions. The results obtained by the theoretical model are physically consistent and coherent with the literature. The formulations are then extended to predict the mechanical behavior of a thick laminated tube. Both analytical formulations are implemented as a computational tool via Matlab code. The results obtained by the computational tool are compared to finite element analyses, and the stress distribution is considered coherent. Moreover, the engineering computational tool is used to perform failure analysis, using different types of failure criteria, which identifies the damaged ply and the mode of failure.
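
    For orientation only, the classical Lamé solution for an internally pressurized isotropic thick-walled tube is sketched below; the paper's own formulations handle anisotropic laminated plies, which this simplified sketch does not attempt. The radii and pressure are placeholder values.

```python
# Classical Lame solution for an internally pressurized thick-walled tube
# (isotropic special case; sigma_r = A - B/r^2, sigma_theta = A + B/r^2).
import numpy as np

a, b, p_i = 0.05, 0.08, 20e6      # inner/outer radius [m], internal pressure [Pa]
A = p_i * a**2 / (b**2 - a**2)
B = p_i * a**2 * b**2 / (b**2 - a**2)

r = np.linspace(a, b, 5)
sigma_r = A - B / r**2            # radial stress: -p_i at r=a, 0 at r=b
sigma_t = A + B / r**2            # hoop stress: maximal at the inner wall
for ri, sr, st in zip(r, sigma_r, sigma_t):
    print(f"r = {ri:.3f} m: sigma_r = {sr/1e6:7.2f} MPa, sigma_theta = {st/1e6:7.2f} MPa")
```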

  6. Theoretical computer science and the natural sciences

    Science.gov (United States)

    Marchal, Bruno

    2005-12-01

    last section, I come back to self-reference and I give an exposition of its modal logics. This is used to show that theoretical computer science makes those “philosophical hypotheses” in theoretical cognitive science experimentally and mathematically testable.

  7. Computational models of airway branching morphogenesis.

    Science.gov (United States)

    Varner, Victor D; Nelson, Celeste M

    2017-07-01

    The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.
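
    Of the three approaches named above, reaction-diffusion modeling lends itself to a compact sketch. The block below is a generic 1D Gray-Scott system, a standard example of the reaction-diffusion class rather than a lung-specific model from the review; its instability grows a spatial pattern from a locally perturbed uniform state.

```python
# Generic 1D Gray-Scott reaction-diffusion sketch with periodic boundaries.
import numpy as np

n, Du, Dv, F, k, dt = 256, 0.16, 0.08, 0.035, 0.060, 1.0
u, v = np.ones(n), np.zeros(n)
v[n//2 - 10 : n//2 + 10] = 0.5          # local perturbation seeds the pattern
u -= 0.5 * v

def lap(a):
    """Discrete periodic Laplacian with unit grid spacing."""
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(10_000):
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1 - u))
    v += dt * (Dv * lap(v) + uvv - (F + k) * v)

print("cells in pattern peaks (v > 0.2):", int(np.sum(v > 0.2)))
```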

  8. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed when the model fails. I argue that in order to disentangle verification and validation, a clear distinction must be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  9. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows researchers to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book aims to address four main issues related to the building and validation of computational models of biomedical processes: modeling establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  10. Complexity vs energy: theory of computation and theoretical physics

    International Nuclear Information System (INIS)

    Manin, Y I

    2014-01-01

    This paper is a survey based upon the talk at the satellite QQQ conference to ECM6, 3Quantum: Algebra Geometry Information, Tallinn, July 2012. It is dedicated to the analogy between the notions of complexity in theoretical computer science and energy in physics. This analogy is not metaphorical: I describe three precise mathematical contexts, suggested recently, in which mathematics related to (un)computability is inspired by and to a degree reproduces formalisms of statistical physics and quantum field theory.

  11. Theoretical model for the mechanical behavior of prestressed beams under torsion

    Directory of Open Access Journals (Sweden)

    Sérgio M.R. Lopes

    2014-12-01

    In this article, a global theoretical model previously developed and validated by the authors for reinforced concrete beams under torsion is reviewed and corrected in order to predict the global behavior of beams under torsion with uniform longitudinal prestress. These corrections are based on the introduction of prestress factors and on the modification of the equilibrium equations in order to incorporate the contribution of the prestressing reinforcement. The theoretical results obtained with the new model are compared with some available results of prestressed concrete (PC) beams under torsion found in the literature. The results obtained in this study validate the proposed computing procedure to predict the overall behavior of PC beams under torsion.

  12. A game theoretic model of the Northwestern European electricity market-market power and the environment

    NARCIS (Netherlands)

    Lise, W.; Linderhof, V.G.M.; Kuik, O.; Kemfert, C.; Ostling, R.; Heinzow, T.

    2006-01-01

    This paper develops a static computational game theoretic model. Illustrative results for the liberalising European electricity market are given to demonstrate the type of economic and environmental results that can be generated with the model. The model is empirically calibrated to eight
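
    The record ends mid-sentence and the model itself is not reproduced here. As a minimal, hedged illustration of the static game-theoretic approach, the sketch below computes a two-firm Cournot equilibrium by best-response iteration; the demand and cost numbers are invented, and the paper's market model is far more detailed.

```python
# Two-firm Cournot sketch: generators choose output, price follows
# linear inverse demand P = a - b * (q1 + q2). Illustrative numbers only.
a, b = 100.0, 1.0
c = [10.0, 20.0]                      # marginal costs of firms 1 and 2

def best_response(q_other, ci):
    """Maximize (a - b*(q + q_other) - ci) * q  =>  q = (a - ci - b*q_other) / (2b)."""
    return max(0.0, (a - ci - b * q_other) / (2 * b))

q = [0.0, 0.0]
for _ in range(100):                  # iterate best responses to the Nash equilibrium
    q = [best_response(q[1], c[0]), best_response(q[0], c[1])]

price = a - b * sum(q)
print(f"equilibrium outputs: {q[0]:.1f}, {q[1]:.1f}; market price: {price:.1f}")
```

    The iteration converges to the textbook closed form q_i = (a - 2*c_i + c_j) / (3b); market power shows up as a price well above both marginal costs.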

  13. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  14. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
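
    The two records above describe, but do not specify, aspect-based similarity measures. The toy sketch below combines one similarity score per model aspect (here Jaccard overlap on entity and reaction sets) into a weighted total; the aspects, weights, and data are invented, not taken from the survey.

```python
# Toy aspect-based model similarity: per-aspect Jaccard overlap combined
# with problem-specific weights (all names and numbers are illustrative).
def jaccard(x: set, y: set) -> float:
    return len(x & y) / len(x | y) if x | y else 1.0

model_a = {"entities": {"glucose", "insulin", "glucagon"},
           "reactions": {"uptake", "secretion"}}
model_b = {"entities": {"glucose", "insulin"},
           "reactions": {"uptake", "clearance"}}

weights = {"entities": 0.6, "reactions": 0.4}   # a problem-specific choice
similarity = sum(w * jaccard(model_a[k], model_b[k]) for k, w in weights.items())
print(f"combined similarity: {similarity:.2f}")
```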

  15. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding of, and ability to compute, scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The School is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  16. Theoretical background and user's manual for the computer code on groundwater flow and radionuclide transport calculation in porous rock

    International Nuclear Information System (INIS)

    Shirakawa, Toshihiko; Hatanaka, Koichiro

    2001-11-01

    In order to produce a basic manual covering the input data, output data, and execution of a computer code for groundwater flow and radionuclide transport calculation in heterogeneous porous rock, we investigated the theoretical background of the geostatistical computer codes and the user's manual for the computer code on groundwater flow and radionuclide transport, which calculates three-dimensional water flow, the paths of moving radionuclides, and one-dimensional radionuclide migration. Based on the above investigation, this report describes the geostatistical background for simulating a heterogeneous permeability field, and presents the structure of the files, the input and output data, and an example calculation for the programs that simulate the heterogeneous permeability field and calculate groundwater flow and radionuclide transport. The information in this report thus makes it possible to model heterogeneous porous rock and to analyze groundwater flow and radionuclide transport. (author)
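
    The report's codes are not reproduced here; as a hedged sketch of the geostatistical step it describes, the block below generates a correlated log-normal permeability field. The smoothing kernel, correlation length, and log-permeability statistics are all illustrative assumptions.

```python
# Sketch: heterogeneous permeability as a correlated log-normal random field.
import numpy as np

rng = np.random.default_rng(42)
n, corr_len = 100, 5
white = rng.standard_normal((n, n))

# crude spatial correlation: box-kernel smoothing along each axis
kernel = np.ones(corr_len) / corr_len
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, white)
smooth = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, smooth)
smooth = (smooth - smooth.mean()) / smooth.std()

log_mean, log_std = -28.0, 1.0        # ln(k), k in m^2; illustrative values
k_field = np.exp(log_mean + log_std * smooth)
print(f"permeability range: {k_field.min():.2e} .. {k_field.max():.2e} m^2")
```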

  17. Computer Modeling of Platinum Reforming Reactors | Momoh ...

    African Journals Online (AJOL)

    This paper, instead of using a theoretical approach, considers a computer model as a means of assessing the reformate composition for three-stage fixed-bed reactors in a platforming unit. This is done by identifying many possible hydrocarbon transformation reactions that are peculiar to the process unit, identifying the ...

  18. PREFACE: 1st International Workshop on Theoretical and Computational Physics: Condensed Matter, Soft Matter and Materials Physics & 38th National Conference on Theoretical Physics

    Science.gov (United States)

    2014-09-01

    This volume contains selected papers presented at the 38th National Conference on Theoretical Physics (NCTP-38) and the 1st International Workshop on Theoretical and Computational Physics: Condensed Matter, Soft Matter and Materials Physics (IWTCP-1). Both the conference and the workshop were held from 29 July to 1 August 2013 in the Pullman hotel, Da Nang, Vietnam. The IWTCP-1 was a new activity of the Vietnamese Theoretical Physics Society (VTPS), organized in association with NCTP-38, the most well-known annual scientific forum dedicated to the dissemination of the latest developments in the field of theoretical physics within the country. The IWTCP-1 was also an External Activity of the Asia Pacific Center for Theoretical Physics (APCTP). The overriding goal of the IWTCP is to provide an international forum for scientists and engineers from academia to share ideas, problems and solutions relating to recent advances in theoretical physics as well as in computational physics. The main IWTCP motivation is to foster scientific exchange between the Vietnamese theoretical and computational physics community and worldwide scientists, and to promote a high standard of research and education activities for young physicists in the country. About 110 participants from 10 countries took part in the conference and the workshop. 4 invited talks, 18 oral contributions and 46 posters were presented at the conference. In the workshop we had one keynote lecture and 9 invited talks presented by international experts in the fields of theoretical and computational physics, together with 14 oral and 33 poster contributions. The proceedings were edited by Nguyen Tri Lan, Trinh Xuan Hoang, and Nguyen Ai Viet. We would like to thank all invited speakers, participants and sponsors for making the conference and the workshop successful. Nguyen Ai Viet, Chair of NCTP-38 and IWTCP-1

  19. Comparison between theoretical and experimental results of the 1/6 scale concrete model under internal pressure

    International Nuclear Information System (INIS)

    Riviere, J.; Barbe, B.; Millard, A.; Koundy, V.

    1988-01-01

    The behavior of the 1/6 scale concrete model under internal pressure was predicted by means of two computations, the first with an infinite soil rigidity, the second with a soil rigidity equal to 61.26 MPa/m. These two computations, which assumed a perfectly axisymmetric structure, gave theoretical and experimental results in good agreement, except for the raft, whose theoretical uplift was three times higher than the experimental one. The main conclusions of this study are as follows: the soil stiffness has no influence on the ultimate behavior of the model; the dead concrete rigidity significantly decreases the raft uplift; the model fails when the hoop stress reaches the ultimate strength.

  20. Franchise Business Model: Theoretical Insights

    OpenAIRE

    Levickaitė, Rasa; Reimeris, Ramojus

    2010-01-01

    The article is based on a literature review and theoretical insights, and deals with the topic of the franchise business model. The objective of the paper is to analyse the peculiarities of the franchise business model and the conditions for its development in Lithuania. The aim of the paper is to give an overview of the franchise business model and its environment in the Lithuanian business context. The overview is based on international and local theoretical insights. In terms of practical meaning, this article should be re...

  1. Report of the Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel

    International Nuclear Information System (INIS)

    1984-09-01

    The Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel (HEPAP) was formed in July 1984 to make recommendations concerning the need for state-of-the-art computing for theoretical studies. The specific Charge to the Subpanel is attached as Appendix A, and the full membership is listed in Appendix B. For the purposes of this study, theoretical computing was interpreted as encompassing both investigations in the theory of elementary particles and computation-intensive aspects of accelerator theory and design. Many problems in both areas are suited to realize the advantages of vectorized processing. The body of the Subpanel Report is organized as follows. The Introduction, Section I, explains some of the goals of computational physics as it applies to elementary particle theory and accelerator design. Section II reviews the availability of mainframe supercomputers to researchers in the United States, in Western Europe, and in Japan. Other promising approaches to large-scale computing are summarized in Section III. Section IV details the current computing needs for problems in high energy theory, and for beam dynamics studies. The Subpanel Recommendations appear in Section V. The Appendices attached to this Report give the Charge to the Subpanel, the Subpanel membership, and some background information on the financial implications of establishing a supercomputer center

  2. Improving the theoretical foundations of the multi-mode transport model

    International Nuclear Information System (INIS)

    Bateman, G.; Kritz, A.H.; Redd, A.J.; Erba, M.; Rewoldt, G.; Weiland, J.; Strand, P.; Kinsey, J.E.; Scott, B.

    1999-01-01

    A new version of the Multi-Mode transport model, designated MMM98, is being developed with improved theoretical foundations, in an ongoing effort to predict the temperature and density profiles in tokamaks. For transport near the edge of the plasma, MMM98 uses a new model based on 3-D nonlinear simulations of drift Alfven mode turbulence. Flow shear stabilization effects have been added to the Weiland model for Ion Temperature Gradient and Trapped Electron Modes, which usually dominates in most of the plasma core. For transport near the magnetic axis at high beta, a new kinetic ballooning mode model has been constructed based on FULL stability code computations. (author)

  3. Improving the theoretical foundations of the multi-mode transport model

    International Nuclear Information System (INIS)

    Bateman, G.; Kritz, A.H.; Redd, A.J.; Erba, M.; Rewoldt, G.; Weiland, J.; Strand, P.; Kinsey, J.E.; Scott, B.

    2001-01-01

    A new version of the Multi-Mode transport model, designated MMM98, is being developed with improved theoretical foundations, in an ongoing effort to predict the temperature and density profiles in tokamaks. For transport near the edge of the plasma, MMM98 uses a new model based on 3-D nonlinear simulations of drift Alfven mode turbulence. Flow shear stabilization effects have been added to the Weiland model for Ion Temperature Gradient and Trapped Electron Modes, which usually dominates in most of the plasma core. For transport near the magnetic axis at high beta, a new kinetic ballooning mode model has been constructed based on FULL stability code computations. (author)

  4. Mathematical and theoretical neuroscience cell, network and data analysis

    CERN Document Server

    Nieus, Thierry

    2017-01-01

    This volume gathers contributions from theoretical, experimental and computational researchers who are working on various topics in theoretical/computational/mathematical neuroscience. The focus is on mathematical modeling, analytical and numerical topics, and statistical analysis in neuroscience with applications. The following subjects are considered: mathematical modelling in neuroscience; analytical and numerical topics; statistical analysis in neuroscience; neural networks; and theoretical neuroscience. The book is addressed to researchers involved in mathematical models applied to neuroscience.

  5. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development: experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The common assumption of all three is that a tree can be regarded as a fractal object, that is, a collection of semi-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. The different stages of the above-mentioned approaches are described in the paper. The experimental data for spruce, the description of the computer system for modeling, and a variant of the computer model are presented. (author). 9 refs, 4 figs
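
    A toy version of the fractal view of a crown is easy to sketch: generate a self-similar branching structure recursively, then box-count the branch tips to estimate a fractal dimension. The branching angles, length ratio, and depth below are arbitrary choices for illustration, not the paper's calibrated models.

```python
# Toy self-similar crown: recursive 2D branching plus a crude box count
# of the branch tips (illustrative only).
import numpy as np

def branch(x, y, angle, length, depth, pts):
    if depth == 0:
        pts.append((x, y))
        return
    x2, y2 = x + length * np.cos(angle), y + length * np.sin(angle)
    for da in (-0.5, 0.5):                      # two semi-similar sub-branches
        branch(x2, y2, angle + da, 0.7 * length, depth - 1, pts)

pts = []
branch(0.0, 0.0, np.pi / 2, 1.0, 12, pts)       # 2^12 branch tips
pts = np.array(pts)
pts = (pts - pts.min(0)) / (pts.max(0) - pts.min(0))  # normalize to unit square

for eps in (1/8, 1/16, 1/32):                   # occupied boxes at each scale
    boxes = len({(int(px / eps), int(py / eps)) for px, py in pts})
    print(f"eps = {eps:.4f}: {boxes} occupied boxes")
```

    Fitting log(boxes) against log(1/eps) over these scales gives the box-counting dimension, the kind of fractal measure the abstract proposes as a link between crown growth and light propagation models.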

  6. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically, the thesis develops methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  7. A Computational Analysis Model for Open-ended Cognitions

    Science.gov (United States)

    Morita, Junya; Miwa, Kazuhisa

    In this paper, we propose a novel usage for computational cognitive models. In cognitive science, computational models have played the critical role of theories of human cognition. Many computational models have successfully simulated the results of controlled psychological experiments. However, there have been only a few attempts to apply the models to complex realistic phenomena; we call such a situation an ``open-ended situation''. In this study, MAC/FAC (``many are called, but few are chosen''), proposed by [Forbus 95], which models two stages of analogical reasoning, was applied to our open-ended psychological experiment. In our experiment, subjects were presented with a cue story and retrieved cases that they had learned in their everyday lives. Following this, they rated the inferential soundness (goodness as analogy) of each retrieved case. For each retrieved case, we computed two kinds of similarity scores (content vectors/structural evaluation scores) using the algorithms of MAC/FAC. As a result, the computed content vectors explained the overall retrieval of cases well, whereas the structural evaluation scores had a strong relation to the rated scores. These results support MAC/FAC's theoretical assumption that different similarities are involved at the two stages of analogical reasoning. Our study is an attempt to use a computational model as an analysis device for open-ended human cognition.
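
    The cheap first stage of MAC/FAC can be sketched compactly: the MAC ("many are called") stage scores memory items by a dot product of content vectors, here simple predicate-count vectors. The domains and predicates below are invented for illustration, and the structural (FAC) stage is not reproduced.

```python
# Sketch of the MAC stage: content vectors as predicate counts, retrieval
# scored by dot product (the expensive structural FAC stage is omitted).
from collections import Counter

def content_vector(predicates):
    return Counter(predicates)

def dot(u: Counter, v: Counter) -> int:
    return sum(u[k] * v[k] for k in u)

probe = content_vector(["cause", "flow", "pressure", "flow"])
memory = {
    "water-flow": content_vector(["cause", "flow", "pressure"]),
    "heat-flow": content_vector(["cause", "flow", "temperature"]),
    "solar-system": content_vector(["attracts", "revolves", "mass"]),
}
scores = {name: dot(probe, vec) for name, vec in memory.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))   # few are chosen
```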

  8. Theory of computation

    CERN Document Server

    Tourlakis, George

    2012-01-01

    Learn the skills and acquire the intuition to assess the theoretical limitations of computer programming. Offering an accessible approach to the topic, Theory of Computation focuses on the metatheory of computing and the theoretical boundaries between what various computational models can and cannot do—from the most general model, the URM (Unbounded Register Machine), to the finite automaton. A wealth of programming-like examples and easy-to-follow explanations build the general theory gradually, which guides readers through the modeling and mathematical analysis of computational phenomena...
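
    As a hedged illustration of the URM mentioned above, the interpreter below implements the classical four-instruction register machine (zero, successor, transfer, jump-on-equal). Instruction sets differ slightly between textbooks, so treat this as one common variant rather than the book's exact definition.

```python
# Minimal Unbounded Register Machine interpreter (one classical variant).
def run_urm(program, registers):
    regs = dict(registers)           # register index -> natural number
    pc = 0
    while 0 <= pc < len(program):    # halt by jumping/stepping off the program
        op, *args = program[pc]
        if op == "Z":                # Z(n): R[n] := 0
            regs[args[0]] = 0
        elif op == "S":              # S(n): R[n] := R[n] + 1
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == "T":              # T(m, n): R[n] := R[m]
            regs[args[1]] = regs.get(args[0], 0)
        elif op == "J":              # J(m, n, q): if R[m] == R[n], jump to q
            if regs.get(args[0], 0) == regs.get(args[1], 0):
                pc = args[2]
                continue
        pc += 1
    return regs

# addition R1 := R1 + R2, counting up in R3 until it equals R2
add = [("J", 2, 3, 5), ("S", 1), ("S", 3), ("J", 1, 1, 0)]
print(run_urm(add, {1: 3, 2: 4})[1])   # -> 7
```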

  9. Quantum computing with photons: introduction to the circuit model, the one-way quantum computer, and the fundamental principles of photonic experiments

    International Nuclear Information System (INIS)

    Barz, Stefanie

    2015-01-01

    Quantum physics has revolutionized our understanding of information processing and enables computational speed-ups that are unattainable using classical computers. This tutorial reviews the fundamental tools of photonic quantum information processing. The basics of theoretical quantum computing are presented and the quantum circuit model as well as measurement-based models of quantum computing are introduced. Furthermore, it is shown how these concepts can be implemented experimentally using photonic qubits, where information is encoded in the photons’ polarization. (tutorial)
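
    A minimal statevector sketch of the polarization encoding described above (an illustration, not taken from the tutorial): |H> and |V> serve as computational basis states, a Hadamard acts as an idealized wave-plate operation, and measurement probabilities are read off from the amplitudes.

```python
# Polarization qubit sketch: |H> = (1, 0), |V> = (0, 1) in the statevector
# picture; a Hadamard models an idealized wave plate.
import numpy as np

H_pol = np.array([1.0, 0.0])                 # horizontal polarization |H>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ H_pol                     # (|H> + |V>) / sqrt(2)
probs = np.abs(state) ** 2                   # Born rule
print(f"P(H) = {probs[0]:.2f}, P(V) = {probs[1]:.2f}")   # 0.50 each
```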

  10. Explorations In Theoretical Computer Science For Kids (using paper toys)

    DEFF Research Database (Denmark)

    Valente, Andrea

    2004-01-01

    The computational cards (c-cards for short) project is a study and realization of an educational tool based on playing cards. C-cards are an educational tool to introduce children aged 8 to 10 (or older) to the concept of computation, seen as the manipulation of symbols. The game provides teachers and learners with a physical, tangible metaphor for exploring core concepts of computer science, such as deterministic and probabilistic state machines, frequencies and probability distributions, and the central elements of Shannon's information theory, like information, communication, errors and error detection. Our idea is implemented both with paper cards and by an editor/simulator software (a prototype realized in JavaScript). We also designed the structure of a course in (theoretical) computer science based on c-cards, and we will test it this summer.

  11. Theoretical and computational analyses of LNG evaporator

    Science.gov (United States)

    Chidambaram, Palani Kumar; Jo, Yang Myung; Kim, Heuy Dong

    2017-04-01

    Theoretical and numerical analysis of the fluid flow and heat transfer inside an LNG evaporator is conducted in this work. Methane is used instead of LNG as the operating fluid because methane constitutes over 80% of natural gas. The analytical calculations are performed using simple mass and energy balance equations and are used to assess the pressure and temperature variations in the steam tube. Multiphase numerical simulations are performed by solving the governing equations (the basic flow equations of continuity, momentum and energy) in a portion of the evaporator domain consisting of a single steam pipe. The flow equations are solved along with equations of species transport. Multiphase modeling is incorporated using the VOF method. Liquid methane is the primary phase; it vaporizes into the secondary phase, gaseous methane. Steam is another secondary phase, which flows through the heating coils. Turbulence is modeled by a two-equation turbulence model. The theoretical and numerical predictions are seen to match each other well. Further parametric studies are planned based on the current research.

  12. THEORETICAL MODELING OF THE FEEDBACK STABILIZATION OF EXTERNAL MHD MODES IN TOROIDAL GEOMETRY

    International Nuclear Information System (INIS)

    CHANCE, M.S.; CHU, M.S.; OKABAYASHI, M.; TURNBULL, A.D.

    2001-02-01

    A theoretical framework for understanding the feedback mechanism against external MHD modes has been formulated. Efficient computational tools--the GATO stability code coupled with a substantially modified VACUUM code--have been developed to effectively design viable feedback systems against these modes. The analysis assumed a thin resistive shell and a feedback coil structure accurately modeled in θ, with only a single harmonic variation in φ. Time constants and induced currents in the enclosing resistive shell are calculated. An optimized configuration based on an idealized model has been computed for the DIII-D device. Up to 90% of the effectiveness of an ideal wall can be achieved.

  13. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration...

  14. On turbulence models for rod bundle flow computations

    International Nuclear Information System (INIS)

    Hazi, Gabor

    2005-01-01

    Commercial computational fluid dynamics codes have more than one turbulence model built in, and it is the user's responsibility to choose a model suitable for the problem studied. In the last decade, several computations were presented using computational fluid dynamics for the simulation of various problems of the nuclear industry. A common feature of a number of those simulations is that they were performed using the standard k-ε turbulence model without justifying the choice of the model, and the simulation results were rarely satisfactory. In this paper, we consider the flow in a fuel rod bundle as a case study and discuss why the application of the standard k-ε model fails to give reasonable results in this situation. We also show that a turbulence model based on the Reynolds stress transport equations can provide qualitatively correct results. Our aim is generally pedagogical: we would like to call the reader's attention to the fact that turbulence models have to be selected based on theoretical considerations and/or adequate information obtained from measurements.

  15. Theoretical model estimation of guest diffusion in Metal-Organic Frameworks (MOFs)

    KAUST Repository

    Zheng, Bin

    2015-08-11

    Characterizing molecular diffusion in nanoporous matrices is critical to understanding the novel chemical and physical properties of metal-organic frameworks (MOFs). In this paper, we developed a theoretical model to compute the diffusion rate of guest molecules in a zeolitic imidazolate framework-8 (ZIF-8) quickly and accurately. The ideal gas or equilibrium solution diffusion model was modified to account for the periodic medium by introducing the probability of guests passing through the framework gate. The only input to our model is the energy barrier for guests passing through the MOF's gate. Molecular dynamics (MD) methods were employed to gather the guest density profile, which was then used to deduce the energy barrier values. This produced reliable results within a simulation time of 5 picoseconds, much shorter than with pure MD methods (on the millisecond scale). We also used density functional theory (DFT) methods to obtain the energy profile of guests passing through gates, as this does not require specification of a force field for the MOF degrees of freedom. In the DFT calculations, we considered only one gate of the MOF at a time, which greatly reduced the computational cost. Based on the obtained energy barrier values, we computed the diffusion rates of alkanes and alcohols in ZIF-8 using our model, in good agreement with experimental results and with values calculated from the standard MD model. Our model obtains accurate diffusion rates for guests in MOFs at a lower computational cost and in a shorter calculation time, and is thus especially attractive for high-throughput computational screening of the dynamic performance of guests in a framework.
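
    The record gives the model's structure (a gate-crossing energy barrier as sole input) but not its equations. The sketch below is a generic transition-state-style estimate in that spirit: an Arrhenius hopping rate through the gate converted to a diffusivity on a cubic cage lattice. The attempt frequency, barrier, and hop distance are assumed values, not the paper's parameters.

```python
# Generic barrier-based diffusion estimate: hop rate k = nu * exp(-Ea / kB T),
# diffusivity D = k * a^2 / 6 for hops between cages on a cubic lattice.
import math

kB = 8.617333262e-5      # Boltzmann constant [eV/K]
nu = 1.0e12              # attempt frequency [1/s], assumed
Ea = 0.35                # gate-crossing energy barrier [eV], assumed
a = 1.2e-9               # cage-to-cage hop distance [m], roughly ZIF-8 scale
T = 300.0                # temperature [K]

k_hop = nu * math.exp(-Ea / (kB * T))
D = k_hop * a**2 / 6.0
print(f"hop rate: {k_hop:.3e} 1/s, diffusivity: {D:.3e} m^2/s")
```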

  16. Computational Modeling of Oxygen Transport in the Microcirculation: From an Experiment-Based Model to Theoretical Analyses

    OpenAIRE

    Lücker, Adrien

    2017-01-01

    Oxygen supply to cells by the cardiovascular system involves multiple physical and chemical processes that aim to satisfy fluctuating metabolic demand. Regulation mechanisms range from increased heart rate to minute adaptations in the microvasculature. The challenges and limitations of experimental studies in vivo make computational models an invaluable complement. In this thesis, oxygen transport from capillaries to tissue is investigated using a new numerical model that is tailored for vali...

  17. Quantum wells, wires and dots theoretical and computational physics of semiconductor nanostructures

    CERN Document Server

    Harrison, Paul

    2016-01-01

    Quantum Wells, Wires and Dots provides all the essential information, both theoretical and computational, to develop an understanding of the electronic, optical and transport properties of these semiconductor nanostructures. The book will lead the reader through comprehensive explanations and mathematical derivations to the point where they can design semiconductor nanostructures with the required electronic and optical properties for exploitation in these technologies. This fully revised and updated 4th edition features new sections that incorporate modern techniques and extensive new material including: - Properties of non-parabolic energy bands - Matrix solutions of the Poisson and Schrodinger equations - Critical thickness of strained materials - Carrier scattering by interface roughness, alloy disorder and impurities - Density matrix transport modelling - Thermal modelling. Written by well-known authors in the field of semiconductor nanostructures and quantum optoelectronics, this user-friendly guide is pr...

  18. Computational Modeling and Theoretical Calculations on the Interactions between Spermidine and Functional Monomer (Methacrylic Acid) in a Molecularly Imprinted Polymer

    Directory of Open Access Journals (Sweden)

    Yujie Huang

    2015-01-01

    Full Text Available This paper theoretically investigates interactions between a template and the functional monomer required for synthesizing an efficient molecularly imprinted polymer (MIP). We employed density functional theory (DFT) to compute the geometry, single-point energy, and binding energy (ΔE) of an MIP system, where spermidine (SPD) and methacrylic acid (MAA) were selected as template and functional monomer, respectively. The geometry was calculated using the B3LYP method with the 6-31+G(d) basis set. Furthermore, the 6-311++G(d,p) basis set was used to compute the single-point energy of the above geometry. The optimized geometries at different template-to-functional-monomer molar ratios, the mode of bonding between template and functional monomer, changes in charge on natural bond orbitals (NBO), and binding energies were analyzed. The simulation results show that SPD and MAA form a stable complex via hydrogen bonding. At a 1 : 5 SPD-to-MAA ratio, the binding energy is at a minimum, while the amount of charge transferred between the molecules is at a maximum; SPD and MAA form a stable complex at the 1 : 5 molar ratio through six hydrogen bonds. Optimizing the structure of the template-functional monomer complex through computational modeling prior to synthesis contributes significantly towards choosing a suitable template-functional monomer pair that yields an efficient MIP with high specificity and selectivity.
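
    As a hedged sketch of the bookkeeping behind such a screen, the binding energy of a 1 : n template-monomer complex is ΔE = E(complex) − [E(template) + n·E(monomer)]; the energies below are placeholders, not values from the paper.

    ```python
    # Binding energy of a template-monomer complex from single-point energies.
    # All inputs in hartree; the numbers below are illustrative placeholders.
    HARTREE_TO_KJ_PER_MOL = 2625.5

    def binding_energy(e_complex, e_template, e_monomer, n):
        """Delta E = E(complex) - [E(template) + n * E(monomer)], in kJ/mol."""
        return (e_complex - (e_template + n * e_monomer)) * HARTREE_TO_KJ_PER_MOL

    # e.g. a hypothetical 1:5 SPD:MAA complex
    print(f"{binding_energy(-941.250, -520.101, -84.210, 5):.1f} kJ/mol")
    ```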

  19. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed, but the applied filtering process alone, can be a seed of this numerical instability. An investigation of the relationship between turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.

  20. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
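
    As a hedged sketch of the kind of expressions such a procedure yields, assuming independent Gaussian scatter of theory about experiment (the paper's own formulas may differ in detail):

    ```latex
    % Least-squares fit of the model parameters p, followed by an estimate of
    % the intrinsic model error; placeholder forms, not the paper's equations.
    \hat{p} = \arg\min_{p}\sum_{i=1}^{N}\bigl[x_i^{\mathrm{exp}} - x_i^{\mathrm{th}}(p)\bigr]^{2},
    \qquad
    \hat{\sigma}_{\mathrm{th}}^{2} = \frac{1}{N}\sum_{i=1}^{N}\bigl[x_i^{\mathrm{exp}} - x_i^{\mathrm{th}}(\hat{p})\bigr]^{2}.
    ```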

  1. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structures of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on known aptamers, the web server Riboswitch Calculator and other theoretical methods provide new tools for designing synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  2. Theoretical Assessment of the Impact of Climatic Factors in a Vibrio Cholerae Model.

    Science.gov (United States)

    Kolaye, G; Damakoa, I; Bowong, S; Houe, R; Békollè, D

    2018-05-04

    A mathematical model for Vibrio cholerae (V. cholerae) in a closed environment is considered, with the aim of investigating the impact of climatic factors, which exert a direct influence on the bacterial metabolism and on the bacterial reservoir capacity. We first propose a V. cholerae mathematical model in a closed environment. A sensitivity analysis using the eFAST method was performed to identify the most important parameters of the model. We then extend this V. cholerae model by taking into account the climatic factors that influence the bacterial reservoir capacity. We present the theoretical analysis of the model. More precisely, we compute equilibria and study their stability. The stability of equilibria was investigated using the theory of periodic cooperative systems with a concave nonlinearity. Theoretical results are supported by numerical simulations, which further suggest the necessity of implementing sanitation campaigns in aquatic environments, using suitable products against the bacteria during the periods of growth of aquatic reservoirs.

  3. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. The Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational Disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  4. Computational models of music perception and cognition II: Domain-specific music processing

    Science.gov (United States)

    Purwins, Hendrik; Grachten, Maarten; Herrera, Perfecto; Hazan, Amaury; Marxer, Ricard; Serra, Xavier

    2008-09-01

    In Part I [Purwins H, Herrera P, Grachten M, Hazan A, Marxer R, Serra X. Computational models of music perception and cognition I: The perceptual and cognitive processing chain. Physics of Life Reviews 2008, in press, doi:10.1016/j.plrev.2008.03.004], we addressed the study of cognitive processes that underlie auditory perception of music, and their neural correlates. The aim of the present paper is to summarize empirical findings from music cognition research that are relevant to three prominent music theoretic domains: rhythm, melody, and tonality. Attention is paid to how cognitive processes like category formation, stimulus grouping, and expectation can account for the music theoretic key concepts in these domains, such as beat, meter, voice, consonance. We give an overview of computational models that have been proposed in the literature for a variety of music processing tasks related to rhythm, melody, and tonality. Although the present state-of-the-art in computational modeling of music cognition definitely provides valuable resources for testing specific hypotheses and theories, we observe the need for models that integrate the various aspects of music perception and cognition into a single framework. Such models should be able to account for aspects that until now have only rarely been addressed in computational models of music cognition, like the active nature of perception and the development of cognitive capacities from infancy to adulthood.

  5. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models in order to achieve accuracy at high current densities. (orig.) 5 refs.

  6. Theoretical Atomic Physics code development IV: LINES, A code for computing atomic line spectra

    International Nuclear Information System (INIS)

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-12-01

    A new computer program, LINES, has been developed for simulating atomic line emission and absorption spectra using the accurate fine structure energy levels and transition strengths calculated by the (CATS) Cowan Atomic Structure code. Population distributions for the ion stages are obtained in LINES by using the Local Thermodynamic Equilibrium (LTE) model. LINES is also useful for displaying the pertinent atomic data generated by CATS. This report describes the use of LINES. Both CATS and LINES are part of the Theoretical Atomic PhysicS (TAPS) code development effort at Los Alamos. 11 refs., 9 figs., 1 tab

  7. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.
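
    For readers who want to reproduce this kind of comparison, the hedged sketch below builds a Watts-Strogatz graph with illustrative parameters (not those used in the paper) and compares its clustering coefficient and characteristic path length against an edge-matched random graph:

    ```python
    import networkx as nx

    # Illustrative parameters only; the paper's degree-distribution
    # extension of the model is not reproduced here.
    ws = nx.watts_strogatz_graph(n=1000, k=10, p=0.1)
    rnd = nx.gnm_random_graph(n=1000, m=ws.number_of_edges(), seed=42)

    for name, g in [("watts-strogatz", ws), ("random", rnd)]:
        if not nx.is_connected(g):            # guard: use the giant component
            g = g.subgraph(max(nx.connected_components(g), key=len))
        print(name,
              "C =", round(nx.average_clustering(g), 3),
              "L =", round(nx.average_shortest_path_length(g), 2))
    ```

    The small-world signature is high clustering C with a path length L comparable to that of the random graph.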

  8. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property

  9. Theory, Modeling, Software and Hardware Development for Analytical and Computational Materials Science

    Science.gov (United States)

    Young, Gerald W.; Clemons, Curtis B.

    2004-01-01

    The focus of this Cooperative Agreement between the Computational Materials Laboratory (CML) of the Processing Science and Technology Branch of the NASA Glenn Research Center (GRC) and the Department of Theoretical and Applied Mathematics at The University of Akron was in the areas of system development of the CML workstation environment, modeling of microgravity and earth-based material processing systems, and joint activities in laboratory projects. These efforts complement each other as the majority of the modeling work involves numerical computations to support laboratory investigations. Coordination and interaction between the modelers, system analysts, and laboratory personnel are essential toward providing the most effective simulations and communication of the simulation results. To these ends, The University of Akron personnel involved in the agreement worked at the Applied Mathematics Research Laboratory (AMRL) in the Department of Theoretical and Applied Mathematics while maintaining a close relationship with the personnel of the Computational Materials Laboratory at GRC. Network communication between both sites has been established. A summary of the projects we undertook during the time period 9/1/03 - 6/30/04 is included.

  10. Assessing a Theoretical Model on EFL College Students

    Science.gov (United States)

    Chang, Yu-Ping

    2011-01-01

    This study aimed to (1) integrate relevant language learning models and theories, (2) construct a theoretical model of college students' English learning performance, and (3) assess the model fit between empirically observed data and the theoretical model proposed by the researchers of this study. Subjects of this study were 1,129 Taiwanese EFL…

  11. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H [Institute for Molecular Science, Okazaki, Aichi (Japan)

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  12. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  13. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
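
    As a hedged sketch of the functional form described above, the snippet below evaluates a second-order maximum-noise-entropy model: a logistic function of first- and second-order stimulus terms. The weights are random placeholders, not fitted retinal or thalamic parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 2                                  # two relevant stimulus dimensions
    a = -1.0                               # bias, set by the mean-rate constraint
    b = rng.normal(size=d)                 # first-order (linear) weights
    C = rng.normal(size=(d, d))
    C = (C + C.T) / 2.0                    # symmetric second-order weights

    def p_spike(x):
        """P(spike | x): logistic in first- and second-order stimulus terms."""
        return 1.0 / (1.0 + np.exp(-(a + b @ x + x @ C @ x)))

    print(p_spike(np.array([0.5, -1.2])))  # placeholder stimulus
    ```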

  14. Graph theoretical model of a sensorimotor connectome in zebrafish.

    Science.gov (United States)

    Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.

  15. A Theoretical Model for Meaning Construction through Constructivist Concept Learning

    DEFF Research Database (Denmark)

    Badie, Farshad

    The central focus of this Ph.D. research is on ‘Logic and Cognition’ and, more specifically, this research covers the quintuple (Logic and Logical Philosophy, Philosophy of Education, Educational Psychology, Cognitive Science, Computer Science). The most significant contributions of this Ph.D. di...... of ‘learning’, ‘mentoring’, and ‘knowledge’ within learning and knowledge acquisition systems. Constructivism as an epistemology and as a model of knowing and, respectively as a theoretical model of learning builds up the central framework of this research........D. dissertation are conceptual, logical, terminological, and semantic analysis of Constructivist Concept Learning (specifically, in the context of humans’ interactions with their environment and with other agents). This dissertation is concerned with the specification of the conceptualisation of the phenomena...

  16. AI/OR computational model for integrating qualitative and quantitative design methods

    Science.gov (United States)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  17. The theoretical and computational models of the GASFLOW-II code

    International Nuclear Information System (INIS)

    Travis, J.R.

    1999-01-01

    GASFLOW-II is a finite-volume computer code that solves the time-dependent compressible Navier-Stokes equations for multiple gas species in a dispersed liquid water two-phase medium. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting gases to simulate diffusion or propagating flames in complex geometries of nuclear containments. GASFLOW-II is therefore able to predict gaseous distributions and thermal and pressure loads on containment structures and safety related equipment in the event combustion occurs. Current developments of GASFLOW-II are focused on hydrogen distribution, mitigation measures including carbon dioxide inerting, and possible combustion events in nuclear reactor containments. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. Condensation, vaporization, and heat transfer to walls, floors, ceilings, internal structures, and within the fluid are calculated to model the appropriate mass and energy sinks. (author)

  18. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    Science.gov (United States)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
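
    As a toy illustration of the connection (not the paper's Ulam index itself), a graph-isomorphism tool can distinguish two hydrocarbon skeletons with identical atom counts:

    ```python
    import networkx as nx

    # Butane and isobutane have the same number of carbons but
    # non-isomorphic carbon skeletons.
    butane = nx.path_graph(4)                    # C-C-C-C chain
    isobutane = nx.star_graph(3)                 # central C bonded to three C

    print(nx.is_isomorphic(butane, isobutane))   # False
    print(nx.weisfeiler_lehman_graph_hash(butane),
          nx.weisfeiler_lehman_graph_hash(isobutane))  # distinct hashes
    ```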

  19. Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models.

    Science.gov (United States)

    Rao, Nageswara S V; Poole, Stephen W; Ma, Chris Y T; He, Fei; Zhuang, Jun; Yau, David K Y

    2016-04-01

    The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical subinfrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for the infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures. © 2015 Society for Risk Analysis.
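
    A hedged toy instance of the Boolean attack-defense setting, with illustrative payoffs (not from the paper), solved for a pure-strategy saddle point by enumeration:

    ```python
    import numpy as np

    # Zero-sum toy: each side picks which subinfrastructure to target/defend,
    # cyber (0) or physical (1). Entries are attacker utilities; the numbers
    # are placeholders chosen so that a pure saddle point exists.
    A = np.array([[3.0, 1.0],
                  [2.0, 0.0]])
    labels = ("cyber", "physical")

    for i in range(2):              # attacker's pure strategies
        for j in range(2):          # defender's pure strategies
            # Saddle point: best attacker response in column j and
            # best defender response in row i.
            if A[i, j] == A[:, j].max() and A[i, j] == A[i, :].min():
                print("pure-strategy equilibrium:",
                      labels[i], "attacked,", labels[j], "defended")
    ```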

  20. Recent progress and modern challenges in applied mathematics, modeling and computational science

    CERN Document Server

    Makarov, Roman; Belair, Jacques

    2017-01-01

    This volume is an excellent resource for professionals in various areas of applications of mathematics, modeling, and computational science. It focuses on recent progress and modern challenges in these areas. The volume provides a balance between fundamental theoretical and applied developments, emphasizing the interdisciplinary nature of modern trends and detailing state-of-the-art achievements in Applied Mathematics, Modeling, and Computational Science.  The chapters have been authored by international experts in their respective fields, making this book ideal for researchers in academia, practitioners, and graduate students. It can also serve as a reference in the diverse selected areas of applied mathematics, modelling, and computational sciences, and is ideal for interdisciplinary collaborations.

  1. Theoretical chemistry advances and perspectives

    CERN Document Server

    Eyring, Henry

    1980-01-01

    Theoretical Chemistry: Advances and Perspectives, Volume 5 covers articles concerning all aspects of theoretical chemistry. The book discusses the mean spherical approximation for simple electrolyte solutions; the representation of lattice sums as Mellin-transformed products of theta functions; and the evaluation of two-dimensional lattice sums by number theoretic means. The text also describes an application of contour integration; a lattice model of quantum fluid; as well as the computational aspects of chemical equilibrium in complex systems. Chemists and physicists will find the book usef

  2. Utilizing of computational tools on the modelling of a simplified problem of neutron shielding

    Energy Technology Data Exchange (ETDEWEB)

    Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico]. E-mails: fsrlessa@gmail.com; gmplatt@iprj.uerj.br; halves@iprj.uerj.br

    2007-07-01

    At the current level of technology, many problems are investigated through computational simulation, whose results are in general satisfactory and much less expensive than conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are carried out using computational tools, such as computers of superior capacity and their application systems, to perform complex calculations, algorithmic iterations, and so on. Besides the considerable economy in time and space that computational modelling provides, there is a financial economy for the scientists. Computational modelling is a modern methodology of investigation that calls for the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of those phenomena, the generation of a numerical algorithmic system comprehensible to the computer, and finally the analysis of the acquired solution, possibly making use of pre-existing systems that facilitate the visualization of the results (editors of Cartesian graphs, for instance). In this work, we used several computational tools, implemented numerical methods, and applied a deterministic model to study and analyse a well-known, simplified problem of nuclear engineering (neutron transport), simulating a theoretical problem of neutron shielding with hypothetical physical-material parameters and computing the neutron flux at each spatial node, programmed in Scilab version 4.0. (author)
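
    To make the final step concrete, here is a minimal hedged sketch of a deterministic shielding estimate of the kind described, uncollided flux attenuation through a slab on a uniform grid; the original work used Scilab, Python is used here for illustration, and all parameter values are hypothetical:

    ```python
    import numpy as np

    phi0 = 1.0e8      # incident flux, n/cm^2/s (placeholder)
    sigma_t = 0.25    # total macroscopic cross-section, 1/cm (placeholder)
    x = np.linspace(0.0, 20.0, 11)        # depth into the slab, cm

    phi = phi0 * np.exp(-sigma_t * x)     # uncollided flux at each node
    for xi, pi in zip(x, phi):
        print(f"x = {xi:5.1f} cm   phi = {pi:.3e} n/cm^2/s")
    ```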

  3. Utilizing of computational tools on the modelling of a simplified problem of neutron shielding

    International Nuclear Information System (INIS)

    Lessa, Fabio da Silva Rangel; Platt, Gustavo Mendes; Alves Filho, Hermes

    2007-01-01

    At the current level of technology, many problems are investigated through computational simulation, whose results are in general satisfactory and much less expensive than conventional forms of investigation (e.g., destructive tests, laboratory measurements, etc.). Almost all modern scientific studies are carried out using computational tools, such as computers of superior capacity and their application systems, to perform complex calculations, algorithmic iterations, and so on. Besides the considerable economy in time and space that computational modelling provides, there is a financial economy for the scientists. Computational modelling is a modern methodology of investigation that calls for the theoretical study of the phenomena identified in the problem, a coherent mathematical representation of those phenomena, the generation of a numerical algorithmic system comprehensible to the computer, and finally the analysis of the acquired solution, possibly making use of pre-existing systems that facilitate the visualization of the results (editors of Cartesian graphs, for instance). In this work, we used several computational tools, implemented numerical methods, and applied a deterministic model to study and analyse a well-known, simplified problem of nuclear engineering (neutron transport), simulating a theoretical problem of neutron shielding with hypothetical physical-material parameters and computing the neutron flux at each spatial node, programmed in Scilab version 4.0. (author)

  4. A computational model of self-efficacy's various effects on performance: Moving the debate forward.

    Science.gov (United States)

    Vancouver, Jeffrey B; Purl, Justin D

    2017-04-01

    Self-efficacy, which is one's belief in one's capacity, has been found to both positively and negatively influence effort and performance. The reasons for these different effects have been a major topic of debate among social-cognitive and perceptual control theorists. In particular, the finding of various self-efficacy effects has been motivated by a perceptual control theory view of self-regulation that social-cognitive theorists question. To bring more clarity to the theoretical arguments, a computational model of the multiple processes presumed to create the positive, negative, and null effects of self-efficacy is presented. Building on an existing computational model of goal choice that produces a positive effect for self-efficacy, the current article adds a symbolic processing structure used during goal striving that explains the negative self-efficacy effect observed in recent studies. Moreover, the multiple processes, operating together, allow the model to recreate the various effects found in a published study of feedback ambiguity's moderating role in the self-efficacy to performance relationship (Schmidt & DeShon, 2010). Discussion focuses on the implications of the model for the self-efficacy debate, alternative computational models, the overlap between control theory and social-cognitive theory explanations, the value of using computational models for resolving theoretical disputes, and future research directions the model inspires. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Computational fluid dynamics and population balance modelling of nucleate boiling of cryogenic liquids: Theoretical developments

    Directory of Open Access Journals (Sweden)

    Guan Heng Yeoh

    2016-12-01

    Full Text Available The main focus in the analysis of pool or flow boiling in saturated or subcooled conditions is the basic understanding of the phase change process through the heat transfer and wall heat flux partitioning at the heated wall and the two-phase bubble behaviours in the bulk liquid as they migrate away from the heated wall. This paper reviews the work in this rapid developing area with special reference to modelling nucleate boiling of cryogenic liquids in the context of computational fluid dynamics and associated theoretical developments. The partitioning of the wall heat flux at the heated wall into three components – single-phase convection, transient conduction and evaporation – remains the most popular mechanistic approach in predicting the heat transfer process during boiling. Nevertheless, the respective wall heat flux components generally require the determination of the active nucleation site density, bubble departure diameter and nucleation frequency, which are crucial to the proper prediction of the heat transfer process. Numerous empirical correlations presented in this paper have been developed to ascertain these three important parameters with some degree of success. Albeit the simplicity of empirical correlations, they remain applicable to only a narrow range of flow conditions. In order to extend the wall heat flux partitioning approach to a wider range of flow conditions, the fractal model proposed for the active nucleation site density, force balance model for bubble departing from the cavity and bubble lifting off from the heated wall and evaluation of nucleation frequency based on fundamental theory depict the many enhancements that can improve the mechanistic model predictions. The macroscopic consideration of the two-phase boiling in the bulk liquid via the two-fluid model represents the most effective continuum approach in predicting the volume fraction and velocity distributions of each phase. Nevertheless, the
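
    As a hedged illustration of the partitioning idea, the evaporative component is often written as q_e = f (π/6) d_d³ ρ_v h_fg N_a; the sketch below evaluates it with placeholder cryogen values, not numbers from this work:

    ```python
    import numpy as np

    # Placeholder closure values for a cryogenic liquid; none are from this work.
    f = 50.0        # bubble departure frequency, 1/s
    d_d = 0.5e-3    # bubble departure diameter, m
    rho_v = 5.0     # vapour density, kg/m^3
    h_fg = 2.0e5    # latent heat of vaporization, J/kg
    n_a = 5.0e5     # active nucleation site density, sites/m^2

    q_evap = f * (np.pi / 6.0) * d_d**3 * rho_v * h_fg * n_a  # evaporative flux
    print(f"q_evap = {q_evap:.3e} W/m^2")
    ```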

  6. Computational and theoretical modeling of pH and flow effects on the early-stage non-equilibrium self-assembly of optoelectronic peptides

    Science.gov (United States)

    Mansbach, Rachael; Ferguson, Andrew

    Self-assembling π-conjugated peptides are attractive candidates for the fabrication of bioelectronic materials possessing optoelectronic properties due to electron delocalization over the conjugated peptide groups. We present a computational and theoretical study of an experimentally-realized optoelectronic peptide that displays triggerable assembly at low pH, to resolve the microscopic effects of flow and pH on the non-equilibrium morphology and kinetics of assembly. Using a combination of molecular dynamics simulations and hydrodynamic modeling, we quantify the time and length scales at which the convective flows employed in directed assembly compete with microscopic diffusion to influence assembly. We also show that there is a critical pH below which aggregation proceeds irreversibly, and quantify the relationship between pH, charge density, and aggregate size. Our work provides new fundamental understanding of pH and flow effects on non-equilibrium π-conjugated peptide assembly, and lays the groundwork for the rational manipulation of environmental conditions and peptide chemistry to control assembly and the attendant emergent optoelectronic properties. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award # DE-SC0011847, and by the Computational Science and Engineering Fellowship from the University of Illinois at Urbana-Champaign.

  7. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    Science.gov (United States)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).

  8. Graph theoretical model of a sensorimotor connectome in zebrafish.

    Directory of Open Access Journals (Sweden)

    Michael Stobb

    Full Text Available Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.

  9. Development of tools and models for computational fracture assessment

    International Nuclear Information System (INIS)

    Talja, H.; Santaoja, K.

    1998-01-01

    The aim of the work presented in this paper has been to develop and test new computational tools and theoretically more sound methods for fracture mechanical analysis. The applicability of the engineering integrity assessment system MASI for the evaluation of piping components has been extended. The most important motivation for the theoretical development has been the well-known fundamental limitations in the validity of the J-integral, which limit its applicability in many important practical safety assessment cases. Examples are extensive plastic deformation, multimaterial structures and ascending loading paths (especially warm prestress, WPS). Further, the micromechanical Gurson model has been applied to several reactor pressure vessel materials. Special attention is paid to the transferability of Gurson model parameters from tensile test results to the prediction of ductile failure behaviour of cracked structures. (author)

  10. Computational Modelling of the Structural Integrity following Mass-Loss in Polymeric Charred Cellular Solids

    OpenAIRE

    J. P. M. Whitty; J. Francis; J. Howe; B. Henderson

    2014-01-01

    A novel computational technique is presented for embedding mass-loss due to burning into the ANSYS finite element modelling code. The approach employs a range of computational modelling methods in order to provide a more complete theoretical treatment of thermoelasticity, absent from the literature for over six decades. Techniques are employed to evaluate the structural integrity (namely, elastic moduli, Poisson’s ratios, and compressive brittle strength) of honeycomb systems known to approximate t...

  11. Algebraic Specifications, Higher-order Types and Set-theoretic Models

    DEFF Research Database (Denmark)

    Kirchner, Hélène; Mosses, Peter David

    2001-01-01

    , and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard......In most algebraic specification frameworks, the type system is restricted to sorts, subsorts, and first-order function types. This is in marked contrast to the so-called model-oriented frameworks, which provide higher-order types, interpreted set-theoretically as Cartesian products, function spaces...... set-theoretic models are considered, and conditions are given for the existence of initial reducts of such models. Algebraic specifications for various set-theoretic concepts are considered....

  12. A Simple theoretical model for 63Ni betavoltaic battery

    International Nuclear Information System (INIS)

    ZUO, Guoping; ZHOU, Jianliang; KE, Guotu

    2013-01-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for 63Ni beta particles. Results show that the energy deposition distribution exhibits an approximately exponential decay law. A simple theoretical model is developed for a 63Ni betavoltaic battery based on these distribution characteristics. The correctness of the model is validated against two experiments from the literature. Results show that the theoretical short-circuit current agrees well with the experimental results, while the open-circuit voltage deviates from the experimental results owing to the influence of PN junction defects and the simplification of the source. The theoretical model can be applied to 63Ni and 147Pm betavoltaic batteries. - Highlights: • The energy deposition distribution is found to follow an approximately exponential decay law when beta particles emitted from 63Ni pass through a semiconductor. • A simple theoretical model for a 63Ni betavoltaic battery is constructed based on the exponential decay law. • The theoretical model can be applied to betavoltaic batteries whose radioactive source has an energy spectrum similar to that of 63Ni, such as 147Pm
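
    As a hedged sketch of the distribution the model builds on, exponential energy deposition with depth; the attenuation length below is an assumed placeholder, not a value fitted in the paper:

    ```python
    import numpy as np

    lam_um = 1.5                          # effective attenuation length, um (assumed)
    depth = np.linspace(0.0, 6.0, 7)      # depth into the semiconductor, um

    # Cumulative fraction of deposited energy within each depth, 1 - exp(-x/lam).
    deposited_fraction = 1.0 - np.exp(-depth / lam_um)
    for d, f in zip(depth, deposited_fraction):
        print(f"within {d:4.1f} um: {100 * f:5.1f} % of deposited energy")
    ```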

  13. Modeling the state dependent impulse control for computer virus propagation under media coverage

    Science.gov (United States)

    Liang, Xiyin; Pei, Yongzhen; Lv, Yunfei

    2018-02-01

    A state-dependent impulsive control model is proposed for the spread of a computer virus incorporating media coverage. Using the successor function, sufficient conditions for the existence and uniqueness of an order-1 periodic solution are presented first. Secondly, for two classes of periodic solutions, the geometric properties of the successor function and the analogue of the Poincaré criterion are employed to obtain stability results. These results show that the number of infective computers remains below the threshold for all time. Finally, the theoretical and numerical analyses show that media coverage can delay the spread of the computer virus.
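
    A hedged toy of the state-dependent impulse mechanism (logistic growth stands in for the virus dynamics; all parameters are illustrative, not the paper's): the infection grows continuously, and an impulsive clean-up fires whenever the infected count reaches a threshold.

    ```python
    r, K = 0.5, 1000.0            # growth rate and carrying capacity (placeholders)
    I = 1.0                       # initially infected computers
    threshold, removal = 150.0, 0.6
    dt, t_end = 0.01, 100.0

    t = 0.0
    while t < t_end:
        I += r * I * (1.0 - I / K) * dt   # continuous spread between impulses
        if I >= threshold:                # state-dependent impulse fires
            I *= (1.0 - removal)          # clean-up removes a fraction of infections
        t += dt
    print(round(I, 1))                    # I is held near or below the threshold
    ```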

  14. Modeling opinion dynamics: Theoretical analysis and continuous approximation

    International Nuclear Information System (INIS)

    Pinasco, Juan Pablo; Semeshenko, Viktoriya; Balenzuela, Pablo

    2017-01-01

    Highlights: • We study a simple model of persuasion dynamics with long-range pairwise interactions. • The continuous limit of the master equation is a nonlinear, nonlocal, first-order partial differential equation. • We compute the analytical solutions of this equation and compare them with simulations of the dynamics. - Abstract: Frequently we revise our first opinions after talking with other individuals, because we become convinced. Argumentation is a verbal and social process aimed at convincing. It includes conversation and persuasion, and agreement is reached because new arguments are incorporated. Despite the wide range of mathematical approaches to opinion formation, there have been no analytically solvable models of opinion dynamics with nonlocal pair interactions. In this paper we present a novel analytical framework developed to solve master equations with nonlocal kernels. For this we used a simple model of opinion formation in which individuals become more similar after each interaction, no matter their opinion differences, giving rise to a nonlinear differential master equation with nonlocal terms. Simulation results show excellent agreement with results obtained by the theoretical estimation.
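
    A hedged toy of the persuasion rule described above: two random agents move toward each other by a fixed factor regardless of their opinion difference (a long-range interaction). Parameters are illustrative only, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, size=500)   # initial opinions
    mu = 0.1                               # convergence rate per interaction

    for _ in range(50_000):
        i, j = rng.integers(0, x.size, size=2)
        shift = mu * (x[j] - x[i])
        x[i] += shift                      # each agent moves toward the other
        x[j] -= shift

    print("opinion spread after interactions:", round(x.std(), 4))
    ```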

  15. Software for energy modelling: a theoretical basis for improvements in the user interface

    Energy Technology Data Exchange (ETDEWEB)

    Siu, Y.L.

    1989-09-01

    A philosophical critique of the relationships between theory, knowledge and practice for a range of existing energy modelling styles is presented. In particular, Habermas's ideas are invoked regarding the three spheres of cognitive interest (i.e. technical, practical and emancipatory) and three levels of understanding of knowledge, the construction of an 'ideal speech situation', and the theory of communicative competence and action. These are adopted as a basis for revealing shortcomings of a representative selection of existing computer-based energy modelling styles, and as a springboard for constructing a new theoretical approach. (author).

  16. Theoretical models of neutron emission in fission

    International Nuclear Information System (INIS)

    Madland, D.G.

    1992-01-01

    A brief survey of theoretical representations of two of the observables in neutron emission in fission is given, namely, the prompt fission neutron spectrum N(E) and the average prompt neutron multiplicity ν̄_p. Early representations of the two observables are presented and their deficiencies are discussed. This is followed by summaries and examples of recent theoretical models for the calculation of these quantities. Emphasis is placed upon the predictability and accuracy of the new models. In particular, the dependencies of N(E) and ν̄_p upon the fissioning nucleus and its excitation energy are treated. Recent work in the calculation of the prompt fission neutron spectrum matrix N(E, E_n), where E_n is the energy of the neutron inducing fission, is then discussed. Concluding remarks address the current status of our ability to calculate these observables with confidence, the direction of future theoretical efforts, and limitations to current and future calculations. Finally, recommendations are presented as to which model should be used currently and which model should be pursued in future efforts

  17. Theoretical modeling of the feedback stabilization of external MHD modes of toroidal geometry

    International Nuclear Information System (INIS)

    Chance, M.S.; Chu, M.S.; Okabayashi, M.

    2001-01-01

    A theoretical framework for understanding the feedback mechanism against external MHD modes has been formulated. Efficient computational tools - the GATO stability code coupled with a substantially modified VACUUM code - have been developed to effectively design viable feedback systems against these modes. The analysis assumed a thin resistive shell and a feedback coil structure accurately modeled in θ, with only a single harmonic variation in φ. An optimized configuration and placement of the feedback and sensor coils as well as the time constants and induced currents in the enclosing resistive shell have been computed for the DIII-D device. Up to 90% of the effectiveness of an ideal wall can be achieved. (author)

  18. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with 'direct' and 'adjoint' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
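
    To make the "computer calculus" idea concrete, here is a hedged sketch of forward-mode automatic differentiation with dual numbers. It illustrates the principle of propagating derivatives through a model alongside its values; it is not the GRESS system itself.

    ```python
    class Dual:
        """Value plus derivative, propagated through arithmetic."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.der * o.val + self.val * o.der)  # product rule
        __rmul__ = __mul__

    def model(p):
        return 3.0 * p * p + 2.0 * p       # toy response y(p) = 3p^2 + 2p

    x = Dual(1.5, 1.0)                     # seed dp/dp = 1
    y = model(x)
    print(y.val, y.der)                    # 9.75 and dy/dp = 6p + 2 = 11.0
    ```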

  19. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies

  20. Plasticity modeling & computation

    CERN Document Server

    Borja, Ronaldo I

    2013-01-01

    There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.

  1. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  2. Theoretical models for recombination in expanding gas

    International Nuclear Information System (INIS)

    Avron, Y.; Kahane, S.

    1978-09-01

    In laser isotope separation of atomic uranium, one is confronted with the theoretical problem of estimating the concentration of thermally ionized uranium atoms. To investigate this problem, theoretical models for recombination in an expanding gas, in the absence of local thermal equilibrium, have been constructed. The expansion of the gas is described by soluble models of the hydrodynamic equation, and the recombination by rate equations. General results for the freezing effect over the relevant ranges of the gas parameters are obtained. The impossibility of thermal equilibrium in expanding two-component systems is proven.

  3. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology.

  4. Theoretical Models, Assessment Frameworks and Test Construction.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  5. SPECT: Theoretical aspects and evolution of emission computed axial tomography

    International Nuclear Information System (INIS)

    Brunol, J.; Nuta, V.

    1981-01-01

    We have detailed certain elements of 3-D image reconstruction from axial projections. Two aspects specific to nuclear medicine have been analysed, namely self-absorption and statistics. In our view, the development of ECAT in the months to come must proceed in two essential directions. The first is application to dynamic (multigated) cardiac imagery; results of this type have been obtained over 8 months in the Radioisotope Service of Cochin Hospital in Paris. It must be stressed that the number of images to be processed then becomes considerable (multiplication by the gating factor yields more than 100 images), all the more so as the statistics are reduced by the temporal separation, and obtaining good image quality requires sophisticated quadri-dimensional processing. It follows that the computing times, with all the mini-computers available in nuclear medicine, become much too great (several hours) for routine hospital application. This is why we connected an array processor to the IMAC system; this very powerful system (several tens of times the power of a mini-computer) will reduce such computing times to less than 10 minutes. The second direction is the introduction of new elements into the reconstruction algorithm (the static case, as opposed to the foregoing one); these important improvements come at the cost of memory and hence of computing time, and here again the use of an array processor appears indispensable. It should be recalled that ECAT is today a routinely used method; the theoretical analyses it has necessitated have opened the way to effective new methods of 'Slanted Hole' tomography. (orig.) [de

  6. Modeling business processes: theoretical and practical aspects

    Directory of Open Access Journals (Sweden)

    V.V. Dubininа

    2015-06-01

    Full Text Available The essence of process-oriented enterprise management is examined in the article. The content and types of information technologies for business process modeling are analyzed, given the complexity and differentiation of existing methods as well as the specificity of the language and terminology of enterprise business process modeling. The theoretical aspects of business process modeling are reviewed, and the modern traditional modeling techniques that have found practical application in visualizing retailers' activity are studied. The theoretical analysis of the modeling methods showed that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes owing to its integrated systemological capabilities. A visualized simulation model of the retailers' "sales" business process "as is" was designed using a combination of UFO elements, with the aim of further formalization and optimization of the given business process.

  7. Stability and Hopf Bifurcation in a Computer Virus Model with Multistate Antivirus

    Directory of Open Access Journals (Sweden)

    Tao Dong

    2012-01-01

    Full Text Available By considering that people may immunize their computers with countermeasures in both the susceptible and the exposed states, and that using anti-virus software may take a period of time, a computer virus model with time delay based on an SEIR model is proposed. We regard the time delay as the bifurcation parameter and study the dynamical behaviors, which include local asymptotic stability and local Hopf bifurcation. By analyzing the associated characteristic equation, we show that a Hopf bifurcation occurs when the time delay passes through a sequence of critical values. The linearized model and the stability of the bifurcating periodic solutions are also derived by applying the normal form theory and the center manifold theorem. Finally, an illustrative example is given to support the theoretical results.
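
    A minimal numerical sketch of how such a delay enters the dynamics: a toy two-compartment system in which the clean-up term acts on the infected level a fixed delay tau earlier. The equations and parameters are invented for illustration and are not the authors' delayed SEIR system:

```python
# Toy delayed infection model: disinfection at time t removes machines
# in proportion to the infected level at time t - tau (anti-virus takes
# time to act). Integrated with fixed-step Euler and a history buffer.
import numpy as np

beta, gamma, tau = 0.5, 0.2, 8.0        # infection rate, clean-up rate, delay
dt, T = 0.01, 200.0
n, lag = int(T / dt), int(tau / dt)
S, I = np.empty(n), np.empty(n)
S[0], I[0] = 0.99, 0.01
for t in range(n - 1):
    I_lag = I[max(t - lag, 0)]          # infected level a delay tau ago
    new_inf = beta * S[t] * I[t]
    cleaned = gamma * I_lag             # delayed disinfection term
    S[t + 1] = S[t] + dt * (cleaned - new_inf)
    I[t + 1] = I[t] + dt * (new_inf - cleaned)
print("infected fraction at t=200:", round(I[-1], 4))
```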

  8. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    Computational Modeling: NREL uses computational modeling in its bioenergy research. Plant cell walls are the source of biofuels and biomaterials, and NREL's modeling investigates their properties. Quantum Mechanical Models: NREL studies chemical and electronic properties and processes to reduce barriers.

  9. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    Science.gov (United States)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.
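
    The flavour of the quasi-one-dimensional step can be sketched for a perfect gas: given the nozzle area ratio, the supersonic exit Mach number follows from the isentropic area-Mach relation, and the exit static conditions follow from it. The condensation jump and the real-gas corrections of the actual model are omitted, and all reservoir values are illustrative:

```python
# Quasi-one-dimensional, perfect-gas step of such a reduced model:
# recover the supersonic exit Mach number from the area ratio, then the
# exit static conditions. The condensation jump itself is not included.
import math

def area_ratio(M, g=1.4):
    """A/A* from the isentropic area-Mach relation."""
    return (1.0 / M) * ((2.0 + (g - 1.0) * M * M) / (g + 1.0)) ** (
        (g + 1.0) / (2.0 * (g - 1.0)))

def exit_mach(ar, g=1.4):
    """Supersonic root by bisection on [1, 20]."""
    lo, hi = 1.0001, 20.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, g) < ar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g, T0, p0, ar = 1.4, 2500.0, 3.0e6, 100.0   # illustrative reservoir values
M = exit_mach(ar, g)
T = T0 / (1.0 + 0.5 * (g - 1.0) * M * M)
p = p0 * (T / T0) ** (g / (g - 1.0))
print(f"M_exit={M:.2f}, T_exit={T:.0f} K, p_exit={p:.0f} Pa")
```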

  10. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  11. A theoretical model on surface electronic behavior: Strain effect

    International Nuclear Information System (INIS)

    Qin, W.G.; Shaw, D.

    2009-01-01

    Deformation from mechanical loading can affect surface electronic behavior. Surface deformation and electronic behavior can be quantitatively expressed using strain and work function, respectively, and their experimental relationship can be readily determined using the Kelvin probing technique. However, the theoretical correlation between work function and strain has been unclear. This study reports our theoretical exploration, for the first time, of the effect of strain on work function. We propose a simple electrostatic action model by considering the effect of a dislocation on the work function of a one-dimensional lattice, and further extend this model to the more complex case of a finite dislocation density. Based on this model, we successfully established a theoretical correlation between work function and strain.

  12. A theoretical-electron-density databank using a model of real and virtual spherical atoms.

    Science.gov (United States)

    Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian

    2017-08-01

    A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.
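
    A toy illustration of why spherical (point-like) charge models ease the computation of electrostatic properties: the potential reduces to a plain Coulomb sum over real atoms and virtual bond or lone-pair sites. The charges and coordinates below are invented, not database values:

```python
# Electrostatic potential from a spherical-atom + dummy-charge model:
# just a Coulomb sum over real and virtual sites. Values are invented
# placeholders, not entries from the databank described above.
import math

# (x, y, z) in angstrom, charge in units of e
sites = [
    ((0.00, 0.00, 0.00), +0.35),   # real atom
    ((1.20, 0.00, 0.00), -0.20),   # real atom
    ((0.60, 0.00, 0.00), -0.15),   # virtual charge on the covalent bond
]

def potential(r, sites):
    """Electrostatic potential (e/angstrom units) at point r."""
    return sum(q / math.dist(r, p) for p, q in sites)

print("V at probe point:", round(potential((0.6, 2.0, 0.0), sites), 4))
```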

  13. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerful...

  14. Empathy and child neglect: a theoretical model.

    Science.gov (United States)

    De Paul, Joaquín; Guibert, María

    2008-11-01

    To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model parental behavior aimed to satisfy a child's need is considered a helping behavior and, as a consequence, child neglect is considered as a specific type of non-helping behavior. The central hypothesis of the theoretical model presented here suggests that neglectful parents cannot develop the helping response set to care for their children because the observation of a child's signal of need does not lead to the experience of emotions that motivate helping, or because the parents experience these emotions but specific cognitions modify the motivation to help. The present theoretical model suggests that different typologies of neglectful parents could be developed based on the different reasons why parents might not experience emotions that motivate helping behaviors. The model can be helpful to promote new empirical studies about the etiology of different groups of neglectful families.

  15. Theoretical modeling and experimental study on fatigue initiation life of 16MnR notched components

    International Nuclear Information System (INIS)

    Wang Xiaogui; Gao Zengliang; Qiu Baoxiang; Jiang Yanrao

    2010-01-01

    In order to investigate the effects of notch geometry and loading conditions on the fatigue initiation life and fatigue fracture life of 16MnR material, fatigue experiments were conducted on both smooth and notched rod specimens. The detailed elastic-plastic stress and strain responses were computed with the finite element software ABAQUS, incorporating a robust cyclic plasticity model via a user subroutine (UMAT). The obtained stresses and strains were applied to a multiaxial fatigue damage criterion to compute the fatigue damage induced by a loading cycle on the critical material plane. The fatigue initiation life was then obtained from the proposed theoretical model. The good agreement between the predicted results and the experimental data indicated that fatigue initiation in notched components under multiaxial stress states is related to all the nonzero stress and strain quantities. (authors)

  16. Computer modeling of liquid crystals

    International Nuclear Information System (INIS)

    Al-Barwani, M.S.

    1999-01-01

    In this thesis, we investigate several aspects of the behaviour of liquid crystal molecules near interfaces using computer simulation. We briefly discuss experimental, theoretical and computer simulation studies of some liquid crystal interfaces. We then describe three essentially independent research topics. The first of these concerns extensive simulations of a liquid crystal formed by long flexible molecules. We examined the bulk behaviour of the model and its structure. Studies of a film of smectic liquid crystal surrounded by vapour were also carried out. Extensive simulations were also done for a long-molecule/short-molecule mixture, and studies were then carried out to investigate the liquid-vapour interface of the mixture. Next, we report the results of large-scale simulations of soft spherocylinders of two different lengths. We examined the bulk coexistence of the nematic and isotropic phases of the model. Once the bulk coexistence behaviour was known, properties of the nematic-isotropic interface were investigated. This was done by fitting order parameter and density profiles to appropriate mathematical functions and calculating the biaxial order parameter. We briefly discuss the ordering at the interfaces and make attempts to calculate the surface tension. Finally, in our third project, we study the effects of different surface topographies on creating bistable nematic liquid crystal devices. This was carried out using a model based on the discretisation of the free energy on a lattice. We use simulation to find the lowest energy states and investigate whether they are degenerate in energy. We also test our model by studying the Frederiks transition and comparing with analytical and other simulation results. (author)

  17. Global Bifurcation of a Novel Computer Virus Propagation Model

    Directory of Open Access Journals (Sweden)

    Jianguo Ren

    2014-01-01

    Full Text Available In a recent paper by J. Ren et al. (2012), a novel computer virus propagation model under the effect of antivirus ability in a real network was established. The analysis there only partially uncovers the dynamical behaviors of virus spread over the network, since the bifurcation considered is local. In the present paper, it is further shown by mathematical analysis that, under appropriate parameter values, the model may undergo a global B-T (Bogdanov-Takens) bifurcation, and the curves of saddle-node bifurcation, Hopf bifurcation, and homoclinic bifurcation are obtained to illustrate the qualitative behaviors of virus propagation. On this basis, a collection of policies is recommended to prohibit virus prevalence. To our knowledge, this is the first time the global bifurcation has been explored for computer virus propagation. Theoretical results and corresponding suggestions may help us suppress or eliminate virus propagation in the network.

  18. Theoretical Biology and Medical Modelling: ensuring continued growth and future leadership.

    Science.gov (United States)

    Nishiura, Hiroshi; Rietman, Edward A; Wu, Rongling

    2013-07-11

    Theoretical biology encompasses a broad range of biological disciplines ranging from mathematical biology and biomathematics to philosophy of biology. Adopting a broad definition of "biology", Theoretical Biology and Medical Modelling, an open access journal, considers original research studies that focus on theoretical ideas and models associated with developments in biology and medicine.

  19. Theoretical model for investigating the dynamic behaviour of the AST-500 type nuclear heating station reactor

    International Nuclear Information System (INIS)

    Grundmann, U.; Rohde, U.; Naumann, B.

    1985-01-01

    Studies on theoretical simulation of the dynamic behaviour of the AST-500 type reactor primary coolant system are summarized. The first version of a dynamic model in the form of the DYNAST code is described. The DYNAST code is based on a one-dimensional description of the primary coolant circuit including core, draught stack, and intermediate heat exchanger, a vapour dome model, and the point model of neutron kinetics. With the aid of the steady-state computational part of the DYNAST code, studies have been performed on different steady-state operating conditions. Furthermore, some methodological investigations on generalization and improvement of the dynamic model are considered and results presented. (author)

  20. Global dynamics of a novel multi-group model for computer worms

    International Nuclear Information System (INIS)

    Gong Yong-Wang; Song Yu-Rong; Jiang Guo-Ping

    2013-01-01

    In this paper, we study worm dynamics in computer networks composed of many autonomous systems. A novel multi-group SIQR (susceptible-infected-quarantined-removed) model is proposed for computer worms by explicitly considering anti-virus measures and the network infrastructure. Then, the basic reproduction number of the worm, R0, is derived and the global dynamics of the model are established. It is shown that if R0 is less than or equal to 1, the disease-free equilibrium is globally asymptotically stable and the worm dies out eventually, whereas, if R0 is greater than 1, one unique endemic equilibrium exists and it is globally asymptotically stable, thus the worm persists in the network. Finally, numerical simulations are given to illustrate the theoretical results. (general)
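
    A single-group reduction of such a model makes the threshold concrete: with recovery rate gamma, quarantine rate delta and turnover rate mu, the basic reproduction number of the toy below is R0 = beta/(gamma + delta + mu), and the infection persists only when R0 exceeds 1. This is a hedged single-group sketch, not the paper's multi-group system:

```python
# Single-group SIQR toy showing the R0 threshold: below 1 the worm
# dies out, above 1 it settles at an endemic level. All parameter
# values are illustrative.
def simulate(beta, gamma=0.1, delta=0.1, mu=0.02, dt=0.1, T=600.0):
    S, I, Q, R = 0.99, 0.01, 0.0, 0.0
    for _ in range(int(T / dt)):
        dS = mu - beta * S * I - mu * S              # renewal - infection
        dI = beta * S * I - (gamma + delta + mu) * I
        dQ = delta * I - (gamma + mu) * Q            # quarantine in/out
        dR = gamma * (I + Q) - mu * R
        S, I, Q, R = S + dt * dS, I + dt * dI, Q + dt * dQ, R + dt * dR
    return I

for beta in (0.15, 0.45):
    R0 = beta / (0.1 + 0.1 + 0.02)
    print(f"R0 = {R0:.2f} -> infected level at t=600: {simulate(beta):.4f}")
```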

  1. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  2. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper is intended to propose a computational model for memory from the view of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neurons or brain cortices based on biology and graph theories, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information processing view.

  3. Expanding Panjabi's stability model to express movement: a theoretical model.

    Science.gov (United States)

    Hoffman, J; Gabel, P

    2013-06-01

    Novel theoretical models of movement have historically inspired the creation of new methods for the application of human movement. The landmark theoretical model of spinal stability by Panjabi in 1992 led to the creation of an exercise approach to spinal stability. This approach however was later challenged, most significantly due to a lack of favourable clinical effect. The concepts explored in this paper address and consider the deficiencies of Panjabi's model then propose an evolution and expansion from a special model of stability to a general one of movement. It is proposed that two body-wide symbiotic elements are present within all movement systems, stability and mobility. The justification for this is derived from the observable clinical environment. It is clinically recognised that these two elements are present and identifiable throughout the body in different joints and muscles, and the neural conduction system. In order to generalise the Panjabi model of stability to include and illustrate movement, a matching parallel mobility system with the same subsystems was conceptually created. In this expanded theoretical model, the new mobility system is placed beside the existing stability system and subsystems. The ability of both stability and mobility systems to work in harmony will subsequently determine the quality of movement. Conversely, malfunction of either system, or their subsystems, will deleteriously affect all other subsystems and consequently overall movement quality. For this reason, in the rehabilitation exercise environment, focus should be placed on the simultaneous involvement of both the stability and mobility systems. It is suggested that the individual's relevant functional harmonious movements should be challenged at the highest possible level without pain or discomfort. It is anticipated that this conceptual expansion of the theoretical model of stability to one with the symbiotic inclusion of mobility, will provide new understandings

  4. Theoretical models for the muon spectrum at sea level

    International Nuclear Information System (INIS)

    Abdel-Monem, M.S.; Benbrook, J.R.; Osborne, A.R.; Sheldon, W.R.

    1975-01-01

    The absolute vertical cosmic ray muon spectrum is investigated theoretically. Models of high energy interactions (namely, Maeda-Cantrell (MC), Constant Energy (CE), Cocconi-Koester-Perkins (CKP) and Scaling Models) are used to calculate the spectrum of cosmic ray muons at sea level. A comparison is made between the measured spectrum and that predicted from each of the four theoretical models. It is concluded that the recently available measured muon differential intensities agree with the scaling model for energies less than 100 GeV and with the CKP model for energies greater than 200 GeV. The measured differential intensities (Abdel-Monem et al.) agree with scaling. (orig.) [de

  5. Theoretical study of phase behaviour of DLVO model for lysozyme and γ-crystalline aqueous electrolyte solutions

    Directory of Open Access Journals (Sweden)

    R. Melnyk

    2015-03-01

    Full Text Available Mean spherical approximation (MSA), second-order Barker-Henderson (BH) perturbation theory and thermodynamic perturbation theory (TPT) for associating fluids in combination with BH perturbation theory are applied to the study of the structural properties and phase behaviour of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model of lysozyme and γ-crystalline aqueous electrolyte solutions. Predictions of the MSA for the structure factors are in good agreement with the corresponding computer simulation predictions. The agreement between theoretical results for the liquid-gas phase diagram and the corresponding results of experiment and computer simulation is less satisfactory, with predictions of the combined BH-TPT approach being the most accurate.
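
    The DLVO pair interaction underlying such studies combines a hard core, a screened electrostatic (Yukawa-type) repulsion and a short-range attraction. The schematic form and coefficients below are illustrative placeholders, not the fitted lysozyme or γ-crystalline parameters:

```python
# Schematic DLVO-type pair potential: hard core plus screened-Coulomb
# repulsion plus short-range attraction. Coefficients are illustrative.
import math

def dlvo(r, sigma=1.0, eps_att=2.0, a_rep=5.0, kappa=3.0):
    if r < sigma:
        return float("inf")                        # hard core
    yukawa = a_rep * math.exp(-kappa * (r - sigma)) / (r / sigma)
    attraction = -eps_att * (sigma / r) ** 6
    return yukawa + attraction

for r in (1.0, 1.1, 1.3, 1.6, 2.0):
    print(f"r/sigma = {r:.1f}   U/kT = {dlvo(r):+.3f}")
```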

  6. The Padé approximant in theoretical physics

    CERN Document Server

    Baker, George Allen

    1970-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; and methods for low-rank matrix approximation.

  7. XML-based formulation of field theoretical models. A proposal for a future standard and data base for model storage, exchange and cross-checking of results

    International Nuclear Information System (INIS)

    Demichev, A.; Kryukov, A.; Rodionov, A.

    2002-01-01

    We propose an XML-based standard for formulation of field theoretical models. The goal of creation of such a standard is to provide a way for an unambiguous exchange and cross-checking of results of computer calculations in high energy physics. At the moment, the suggested standard implies that models under consideration are of the SM or MSSM type (i.e., they are just SM or MSSM, their submodels, smooth modifications or straightforward generalizations). (author)

  8. A theoretical starspot model

    International Nuclear Information System (INIS)

    Jahn, K.

    1983-01-01

    A model of a monopoloidal and axisymmetric spot with an untwisted configuration of the magnetic field is considered, and the influence of the magnetic field on the gas is described under the assumption that the magnetic field partially inhibits convective energy transport. Series of starspot models have been computed for a zero-age main-sequence star of one solar mass. The models are described by three free parameters: the total magnetic flux, the effective temperature of the spot, and the position of the spot bottom. The obtained models of small spots can be compared with sunspots, and there is satisfactory agreement between our results and observations. (author)

  9. Computational multiscale modeling of fluids and solids theory and applications

    CERN Document Server

    Steinhauser, Martin Oliver

    2017-01-01

    The idea of the book is to provide a comprehensive overview of computational physics methods and techniques that are used for materials modeling on different length and time scales. Each chapter first provides an overview of the basic physical principles which are the basis for the numerical and mathematical modeling on the respective length scale. The book includes the micro-scale, the meso-scale and the macro-scale, and the chapters follow this classification. The book explains in detail many tricks of the trade of some of the most important methods and techniques that are used to simulate materials on the respective levels of spatial and temporal resolution. Case studies are included to further illustrate some methods or theoretical considerations. Example applications for all techniques are provided, some of which are from the author's own contributions to some of the research areas. The second edition has been expanded by new sections in computational models on meso/macroscopic scales for ocean and a...

  10. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
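
    The nuisance (bias) treatment can be made concrete with a one-parameter toy: the theory bias is scanned over its assumed range and the quoted p value is the most conservative one found. The measurement, errors and bias range below are invented for illustration:

```python
# Sketch of the "nuisance" treatment of a theoretical uncertainty: the
# theory bias delta is scanned over its allowed range and the quoted
# p value is the largest (most conservative) one. Numbers are invented.
import math

def p_gauss(x, mu, sigma):
    """Two-sided Gaussian p value."""
    z = abs(x - mu) / sigma
    return math.erfc(z / math.sqrt(2.0))

x_meas, sigma_stat = 1.30, 0.10      # hypothetical measurement
mu_theory, delta_th = 1.00, 0.15     # central prediction, bias range

p_plain = p_gauss(x_meas, mu_theory, sigma_stat)
p_nuis = max(p_gauss(x_meas, mu_theory + d * delta_th, sigma_stat)
             for d in [i / 100.0 for i in range(-100, 101)])
print(f"p (no theory error) = {p_plain:.4f}")
print(f"p (bias scanned)    = {p_nuis:.4f}")
```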

  11. Theoretical aspects of spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized theoretical aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter provides up-to-date coverage of particle association measures that underpin the theoretical properties of recently developed random set methods in space and time otherwise known as the class of probability hypothesis density framework (PHD filters). The second chapter gives an overview of recent advances in Monte Carlo methods for Bayesian filtering in high-dimensional spaces. In particular, the chapter explains how one may extend classical sequential Monte Carlo methods for filtering and static inference problems to high dimensions and big-data applications. The third chapter presents an overview of generalized families of processes that extend the class of Gaussian process models to heavy-tailed families known as alph...

  12. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

    Teresa Szot-Gabryś

    2010-10-01

    Full Text Available Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for the measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and its ability to adapt to the information needs of its recipients. One of the main currents in the development of accounting theory and practice is to extend economic measurement to areas which have not hitherto been covered by any accounting system (this applies, for example, to small businesses, agricultural farms, and human capital), which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  13. Computer-Aided Construction of Chemical Kinetic Models

    Energy Technology Data Exchange (ETDEWEB)

    Green, William H. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2014-12-31

    The combustion chemistry of even simple fuels can be extremely complex, involving hundreds or thousands of kinetically significant species. The most reasonable way to deal with this complexity is to use a computer not only to numerically solve the kinetic model, but also to construct the kinetic model in the first place. Because these large models contain so many numerical parameters (e.g. rate coefficients, thermochemistry) one never has sufficient data to uniquely determine them all experimentally. Instead one must work in “predictive” mode, using theoretical rather than experimental values for many of the numbers in the model, and as appropriate refining the most sensitive numbers through experiments. Predictive chemical kinetics is exactly what is needed for computer-aided design of combustion systems based on proposed alternative fuels, particularly for early assessment of the value and viability of proposed new fuels before those fuels are commercially available. This project was aimed at making accurate predictive chemical kinetics practical; this is a challenging goal which requires a range of science advances. The project spanned a wide range from quantum chemical calculations on individual molecules and elementary-step reactions, through the development of improved rate/thermo calculation procedures, the creation of algorithms and software for constructing and solving kinetic simulations, the invention of methods for model-reduction while maintaining error control, and finally comparisons with experiment. Many of the parameters in the models were derived from quantum chemistry calculations, and the models were compared with experimental data measured in our lab or in collaboration with others.

  14. Nano-Modeling and Computation in Bio and Brain Dynamics

    Directory of Open Access Journals (Sweden)

    Paolo Di Sia

    2016-04-01

    Full Text Available The study of brain dynamics currently utilizes the new features of nanobiotechnology and bioengineering. New geometric and analytical approaches appear very promising in all scientific areas, particularly in the study of brain processes. Efforts to engage in deep comprehension lead to a change in the inner brain parameters, in order to mimic the external transformation by the proper use of sensors and effectors. This paper highlights some crossing research areas of natural computing, nanotechnology, and brain modeling and considers two interesting theoretical approaches related to brain dynamics: (a) the memory in neural network, not as a passive element for storing information, but integrated in the neural parameters as synaptic conductances; and (b) a new transport model based on analytical expressions of the most important transport parameters, which works from sub-pico-level to macro-level, able both to understand existing data and to give new predictions. Complex biological systems are highly dependent on the context, which suggests a "more nature-oriented" computational philosophy.

  15. PREDICTING ATTENUATION OF VIRUSES DURING PERCOLATION IN SOILS: 2. USER'S GUIDE TO THE VIRULO 1.0 COMPUTER MODEL

    Science.gov (United States)

    In the EPA document 'Predicting Attenuation of Viruses During Percolation in Soils 1. Probabilistic Model', the conceptual, theoretical, and mathematical foundations for a predictive screening model were presented. In this current volume we present a User's Guide for the computer mo...

  16. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community.

  17. Theoretical spectral properties of PAHs: towards a detailed model of their photophysics in the ISM

    International Nuclear Information System (INIS)

    Malloci, Giuliano; Mulas, Giacomo; Porceddu, Ignazio

    2005-01-01

    In the framework of density functional theory (DFT) we computed the spectral properties of a total of about 20 polycyclic aromatic hydrocarbons (PAHs) in different charge states. From our complete atlas of PAHs, ranging in size from naphthalene (C10H8) to dicoronylene (C48H20), we present here a sample of results concerning both ground-state and excited-state properties. Our theoretical results are in reasonable agreement with the available experimental data. This makes them particularly precious when the latter are not easily obtainable, as is often the case for the highly reactive radicals and ions of such species. In another paper (Mulas et al., same volume) we show that our theoretical results can be reliably used to model the behaviour of these molecules in astrophysical environments.

  18. Computational Models and Emergent Properties of Respiratory Neural Networks

    Science.gov (United States)

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  19. Development and application of theoretical models for Rotating Detonation Engine flowfields

    Science.gov (United States)

    Fievisohn, Robert

    As turbine and rocket engine technology matures, performance increases between successive generations of engine development are becoming smaller. One means of accomplishing significant gains in thermodynamic performance and power density is to use detonation-based heat release instead of deflagration. This work is focused on developing and applying theoretical models to aid in the design and understanding of Rotating Detonation Engines (RDEs). In an RDE, a detonation wave travels circumferentially along the bottom of an annular chamber where continuous injection of fresh reactants sustains the detonation wave. RDEs are currently being designed, tested, and studied as a viable option for developing a new generation of turbine and rocket engines that make use of detonation heat release. One of the main challenges in the development of RDEs is to understand the complex flowfield inside the annular chamber. While simplified models are desirable for obtaining timely performance estimates for design analysis, one-dimensional models may not be adequate as they do not provide flow structure information. In this work, a two-dimensional physics-based model is developed, which is capable of modeling the curved oblique shock wave, exit swirl, counter-flow, detonation inclination, and varying pressure along the inflow boundary. This is accomplished by using a combination of shock-expansion theory, Chapman-Jouguet detonation theory, the Method of Characteristics (MOC), and other compressible flow equations to create a shock-fitted numerical algorithm and generate an RDE flowfield. This novel approach provides a numerically efficient model that can provide performance estimates as well as details of the large-scale flow structures in seconds on a personal computer. Results from this model are validated against high-fidelity numerical simulations that may require a high-performance computing framework to provide similar performance estimates. This work provides a designer a new

  20. PREFACE: 3rd Workshop on Theory, Modelling and Computational Methods for Semiconductors (TMCSIII)

    Science.gov (United States)

    Califano, Marco; Migliorato, Max; Probert, Matt

    2012-05-01

    These conference proceedings contain the written papers of the contributions presented at the 3rd International Conference on Theory, Modelling and Computational Methods for Semiconductor materials and nanostructures. The conference was held at the School of Electronic and Electrical Engineering, University of Leeds, Leeds, UK on 18-20 January 2012. The previous conferences in this series took place in 2010 at St William's College, York and in 2008 at the University of Manchester, UK. The development of high-speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational, optical and electronic properties of semiconductors and their hetero- and nano-structures. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in semiconductor science and technology, where there is substantial potential for time-saving in R&D. Theoretical approaches represented in this meeting included: Density Functional Theory, Tight Binding, Semiempirical Pseudopotential Methods, Effective Mass Models, Empirical Potential Methods and Multiscale Approaches. Topics included, but were not limited to: Optical and Transport Properties of Quantum Nanostructures including Colloids and Nanotubes, Plasmonics, Magnetic Semiconductors, Graphene, Lasers, Photonic Structures, Photovoltaic and Electronic Devices. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the theoretical modelling of Group IV, III-V and II-VI semiconductors, as well as students, postdocs and early-career researchers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students, with several lectures given by recognised experts in various theoretical approaches. The following two days showcased some of the best theoretical research carried out in the UK in this field, with several

  1. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    Science.gov (United States)

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs, itself not without issues, may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.

  2. Computationally-optimized bone mechanical modeling from high-resolution structural images.

    Directory of Open Access Journals (Sweden)

    Jeremy F Magland

    Full Text Available Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which depend on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on a machine with dual quad-core Xeon 3.16 GHz CPUs equipped with 40 GB of RAM. Models of the distal tibia derived from 3D in-vivo MR images of a patient, comprising 200,000 elements, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of the human proximal femur, a voxel array of 90 million elements, in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.
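
    The figures quoted in the abstract (200,000 elements converging in 40 MB of RAM) imply a simple linear memory scaling of roughly 200 bytes per element, which is easy to turn into a back-of-envelope estimator. The extrapolation below is ours, not the authors' formal derivation:

```python
# Back-of-envelope memory scaling implied by the abstract's own figures
# (200,000 elements -> ~40 MB RAM): memory grows linearly with element
# count. The bytes-per-element constant is derived from those figures.
BYTES_PER_ELEMENT = 40e6 / 200_000      # ~200 bytes per element

def mem_mb(n_elements):
    return n_elements * BYTES_PER_ELEMENT / 1e6

for n in (200_000, 90_000_000):         # distal tibia vs proximal femur
    print(f"{n:>11,} elements -> ~{mem_mb(n):,.0f} MB RAM")
```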

  3. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper is intended to propose a computational model for memory from the view of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neurons or brain cortices based on biology and graph theories, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information processing view. PMID:27876847

  4. Theoretical aspects of the optical model

    International Nuclear Information System (INIS)

    Mahaux, C.

    1980-01-01

    We first recall the definition of the optical-model potential for nucleons and the physical interpretation of the main related quantities. We then survey the recent theoretical progress towards a reliable calculation of this potential. The present limitations of the theory and some prospects for future developments are outlined. (author)

  5. PELLET: a computer routine for modeling pellet fueling in tokamak plasmas

    International Nuclear Information System (INIS)

    Houlberg, W.A.; Iskra, M.A.; Howe, H.C.; Attenberger, S.E.

    1979-01-01

    Recent experimental results of frozen hydrogenic pellet injection into hot tokamak plasmas and substantial agreement with theoretical predictions have led to a much greater interest in pellets as a means of refueling plasmas. The computer routine PELLET has been developed and used as an aid in assessing pellet ablation models and the effects of pellets on plasma behavior. PELLET provides particle source profiles under various options for the ablation model and can be coupled either to a fluid transport code or to a brief routine which supplies the required input parameters

  6. Dynamics in Higher Education Politics: A Theoretical Model

    Science.gov (United States)

    Kauko, Jaakko

    2013-01-01

    This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…

  7. Theory and Computation

    Data.gov (United States)

    Federal Laboratory Consortium — Flexible computational infrastructure, software tools and theoretical consultation are provided to support modeling and understanding of the structure and properties...

  8. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity

    Directory of Open Access Journals (Sweden)

    Tuomas Eerola

    2016-07-01

    Full Text Available The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven different datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and could explain 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion. This suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms of both performance and simplicity.
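
    One concrete member of the information-theoretic family assessed in such work is the Shannon entropy of a melody's pitch-interval distribution: redundant stepwise melodies score low, irregular ones high. This particular variant is an illustration, not necessarily the component retained in the final models:

```python
# Shannon entropy of a melody's pitch-interval distribution, one simple
# information-theoretic complexity measure (illustrative variant only).
import math
from collections import Counter

def interval_entropy(midi_pitches):
    intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
    counts = Counter(intervals)
    n = len(intervals)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

scale = [60, 62, 64, 65, 67, 69, 71, 72]    # stepwise, highly redundant
leaps = [60, 67, 62, 71, 64, 72, 65, 69]    # irregular, more complex
print("scale:", round(interval_entropy(scale), 3), "bits")
print("leaps:", round(interval_entropy(leaps), 3), "bits")
```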

  9. Model for diffusion and porewater chemistry in compacted bentonite. Theoretical basis and the solution methodology for the transport model

    International Nuclear Information System (INIS)

    Lehikoinen, J.

    1997-01-01

    This report describes the progress of the computer model for ionic transport in bentonite. The research is part of the project Microstructural and chemical parameters of bentonite as determinants of waste isolation efficiency within the Nuclear fission safety program organized by The Commission of the European Communities. The study was started by collecting a comprehensive body of available data on space-charge transport modelling and creating a conceptualization of the problem at hand. The numerical discretization of the governing equations by finite differences was also initiated. This report introduces the theoretical basis for the model, somewhat more elaborated than presented in Progress Report 1/1996, and rectifies a few mistakes appearing in that report. It also gives a brief introduction to the solution methodology of the discretized governing equations. (orig.) (12 refs.)
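
    The finite-difference machinery referred to above can be sketched for plain one-dimensional Fickian diffusion; the actual bentonite model adds the electrostatic (space-charge) coupling omitted here, and all numbers below are illustrative:

```python
# Generic explicit finite-difference discretisation of 1-D Fickian
# diffusion, the kind of scheme such a transport model builds on.
import numpy as np

D, L, nx, T = 1e-10, 0.01, 101, 3.0e5     # m^2/s, m, nodes, s
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D                    # stability: D*dt/dx^2 <= 1/2
c = np.zeros(nx)
for _ in range(int(T / dt)):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0                # Dirichlet boundary conditions
print("concentration profile (every 10th node):", c[::10].round(3))
```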

  10. A neural computational model for animal's time-to-collision estimation.

    Science.gov (United States)

    Wang, Ling; Yao, Dezhong

    2013-04-17

    The time-to-collision (TTC) is the time that elapses before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretic formula for TTC is 1/τ ≈ θ'/sin θ, where θ and θ' are the visual angle and its rate of change, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new simple computational model: 1/τ ≈ Mθ - P/(θ + Q) + N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can be implemented exactly by a widely accepted biological neuronal model. WSVAM has additional merits, including naturally minimal consumption and simplicity. Thus, it yields a precise and neuronally implementable estimation of TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
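
    The two baseline expressions in the abstract are easy to compare numerically for a constant-velocity approach: the exact 1/τ = θ'/sin θ recovers the true time-to-collision, while the approximation θ'/θ drifts as the visual angle grows. The geometry and numbers are illustrative; the fitted WSVAM constants M, P, Q and N depend on a predefined visual angle and are not reproduced here:

```python
# Numerical check of the two baseline TTC expressions for an object of
# half-width s approaching at speed v from distance d (invented values).
import math

s, v = 0.5, 10.0                           # m, m/s
for d in (50.0, 20.0, 5.0, 2.0):
    theta = 2.0 * math.atan(s / d)         # full visual angle
    theta_dot = 2.0 * s * v / (d * d + s * s)   # d(theta)/dt
    ttc_true = d / v
    ttc_exact = math.sin(theta) / theta_dot     # tau = sin(theta)/theta'
    ttc_approx = theta / theta_dot              # tau ~ theta/theta'
    print(f"d={d:4.0f} m  true={ttc_true:5.2f}s  "
          f"exact={ttc_exact:5.2f}s  approx={ttc_approx:5.2f}s")
```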

  11. Theoretical Basics of Teaching Discrete Mathematics

    Directory of Open Access Journals (Sweden)

    Y. A. Perminov

    2012-01-01

    Full Text Available The paper deals with research findings concerning the process of mastering the theoretical basics of discrete mathematics by students of a vocational pedagogic profile. The methodological analysis is based on the subject and functions of modern discrete mathematics and its role in mathematical modeling and computing. Modern discrete mathematics (i.e. the mathematics of finite-type structures) plays an important role in the modernization of vocational training. It is especially relevant to training students for vocational pedagogic qualifications, as in the future they will be responsible for training middle- and senior-level specialists in engineering and technical spheres. Nowadays in different industries there arise problems which require for their solution both continuous modeling – based on classical mathematical methods – and discrete modeling. The teaching course of discrete mathematics for future vocational teachers should be relevant to the target qualification and aimed at mastering mathematical modeling, systems of computer mathematics and computer technologies. The author emphasizes the fundamental role of mastering the language of algebraic and serial structures, as well as the logical, algorithmic and combinatorial schemes dominating in discrete mathematics. The guidelines for selecting the content of the course in discrete mathematics are specified. The theoretical findings of the research can be put into practice whilst developing curricula and working programs for bachelors' and masters' training.

  12. K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review ...

    African Journals Online (AJOL)

    K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review. ... Modelling has become a visible tool in many disciplines including marketing and several marketing models have ...

  13. Hybrid quantum teleportation: A theoretical model

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira [Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2014-12-04

    Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.

  14. Individual Tariffs for Mobile Services: Theoretical Framework and a Computational Case in Mobile Music

    OpenAIRE

    Chen, Hong; Pau, Louis-François

    2007-01-01

    This paper introduces individual tariffs at service and content bundle level in mobile communications. It gives a theoretical framework (economic, sociological) as well as a computational game solution method. The user can be an individual or a community. Individual tariffs are decided through interactions between the user and the supplier. A numerical example from mobile music illustrates the concepts.

  15. Stability and Bifurcation Analysis of a Modified Epidemic Model for Computer Viruses

    Directory of Open Access Journals (Sweden)

    Chuandong Li

    2014-01-01

    Full Text Available We extend the three-dimensional SIR model to a four-dimensional case and then analyze its dynamical behavior, including stability and bifurcation. It is shown that the new model makes a significant improvement to the epidemic model for computer viruses and is more reasonable than most existing SIR models. Furthermore, we investigate the stability of the possible equilibrium point and the existence of the Hopf bifurcation with respect to the delay. By analyzing the associated characteristic equation, it is found that Hopf bifurcation occurs when the delay passes through a sequence of critical values. An analytical condition for determining the direction, stability, and other properties of bifurcating periodic solutions is obtained by using the normal form theory and center manifold argument. The obtained results may provide a theoretical foundation for understanding the spread of computer viruses and for minimizing virus risks.
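
    For orientation, the snippet below integrates the classical three-dimensional SIR system that the paper extends; the four-dimensional, delayed version analyzed in the record is not reproduced, and the infection and recovery rates are illustrative rather than fitted to any real virus outbreak.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the classical SIR system:
    S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    return s + ds * dt, i + di * dt, r + gamma * i * dt

# Illustrative rates (not fitted to any real computer-virus data).
s, i, r = 0.9999, 0.0001, 0.0
beta, gamma, dt = 0.5, 0.1, 0.01
for _ in range(int(200 / dt)):
    s, i, r = sir_step(s, i, r, beta, gamma, dt)
print(round(s, 4), round(i, 6), round(r, 4))  # most hosts end up recovered
```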

  16. Digital Geometry Algorithms Theoretical Foundations and Applications to Computational Imaging

    CERN Document Server

    Barneva, Reneta

    2012-01-01

    Digital geometry emerged as an independent discipline in the second half of the last century. It deals with geometric properties of digital objects and is developed with the unambiguous goal of providing rigorous theoretical foundations for devising new advanced approaches and algorithms for various problems of visual computing. Different aspects of digital geometry have been addressed in the literature. This book is the first one that explicitly focuses on the presentation of the most important digital geometry algorithms. Each chapter provides a brief survey on a major research area related to the general volume theme, description and analysis of related fundamental algorithms, as well as new original contributions by the authors. Every chapter contains a section in which interesting open problems are addressed.

  17. Deferred Action: Theoretical model of process architecture design for emergent business processes

    Directory of Open Access Journals (Sweden)

    Patel, N.V.

    2007-01-01

    Full Text Available E-business modelling and e-business systems development assume fixed company resources, structures, and business processes. Empirical and theoretical evidence suggests that company resources and structures are emergent rather than fixed. Planning business activity in emergent contexts requires flexible e-business models based on better management theories and models. This paper builds and proposes a theoretical model of e-business systems capable of catering for emergent factors that affect business processes. Drawing on the development of theories of the 'action and design' class, the Theory of Deferred Action is invoked as the base theory for the theoretical model. A theoretical model of flexible process architecture is presented by identifying its core components and their relationships, and then illustrated with exemplar flexible process architectures capable of responding to emergent factors. Managerial implications of the model are considered and the model's generic applicability is discussed.

  18. A new theoretical model for scattering of electrons by molecules. 1

    International Nuclear Information System (INIS)

    Peixoto, E.M.A.; Mu-tao, L.; Nogueira, J.C.

    1975-01-01

    A new theoretical model for electron-molecule scattering is suggested. The e-H2 scattering is studied and the superiority of the new model over the commonly used Independent Atom Model (IAM) is demonstrated. Comparing theoretical and experimental data for 40 keV electrons scattered by H2 utilizing the new model, its validity is proved, while Partial Wave and First Born calculations, employing the Independent Atom Model, strongly deviated from the experiment [pt]

  19. A theoretical reassessment of microbial maintenance and implications for microbial ecology modeling.

    Science.gov (United States)

    Wang, Gangsheng; Post, Wilfred M

    2012-09-01

    We attempted to reconcile three microbial maintenance models (Herbert, Pirt, and Compromise) through a theoretical reassessment. We provided a rigorous proof that the true growth yield coefficient (Y(G)) is the ratio of the specific maintenance rate (a in Herbert) to the maintenance coefficient (m in Pirt). Other findings from this study include: (1) the Compromise model is identical to the Herbert for computing microbial growth and substrate consumption, but it expresses the dependence of maintenance on both microbial biomass and substrate; (2) the maximum specific growth rate in the Herbert (μ(max,H)) is higher than those in the other two models (μ(max,P) and μ(max,C)), and the difference is the physiological maintenance factor (m(q) = a); and (3) the overall maintenance coefficient (m(T)) is more sensitive to m(q) than to the specific growth rate (μ(G)) and Y(G). Our critical reassessment of microbial maintenance provides a new approach for quantifying some important components in soil microbial ecology models. © This article is a US government work and is in the public domain in the USA.
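
    A quick numerical check of the identity proved in the record (Y_G = a/m, equivalently m = a/Y_G) can be written in a few lines; the yield and rate values below are placeholders chosen only to exercise the algebra, not measured quantities.

```python
def pirt_substrate_uptake(mu_g, y_g, m):
    """Pirt: specific substrate consumption q_s = mu_G / Y_G + m."""
    return mu_g / y_g + m

def herbert_substrate_uptake(mu_g, y_g, a):
    """Herbert: all consumption routed through growth machinery,
    q_s = (mu_G + a) / Y_G; identical to Pirt when m = a / Y_G."""
    return (mu_g + a) / y_g

# Consistency check of Y_G = a / m with illustrative parameter values.
y_g, a = 0.5, 0.02          # yield (g/g); specific maintenance rate (1/h)
m = a / y_g                 # maintenance coefficient implied by Y_G = a/m
mu_g = 0.1                  # specific growth rate (1/h)
assert abs(pirt_substrate_uptake(mu_g, y_g, m)
           - herbert_substrate_uptake(mu_g, y_g, a)) < 1e-12
```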

  20. N-barN interaction theoretical models

    International Nuclear Information System (INIS)

    Loiseau, B.

    1991-12-01

    In the framework of theoretical models of the antinucleon-nucleon interaction, our present understanding of the N-barN interaction is discussed, from quark and/or meson and baryon degrees of freedom, by considering N-barN annihilation into mesons and N-barN elastic and charge-exchange scattering. (author) 52 refs., 11 figs., 2 tabs

  1. Quantitative fluorescence lifetime spectroscopy in turbid media: comparison of theoretical, experimental and computational methods

    International Nuclear Information System (INIS)

    Vishwanath, Karthik; Mycek, Mary-Ann; Pogue, Brian

    2002-01-01

    A Monte Carlo model developed to simulate time-resolved fluorescence propagation in a semi-infinite turbid medium was validated against previously reported theoretical and computational results. Model simulations were compared to experimental measurements of fluorescence spectra and lifetimes on tissue-simulating phantoms for single and dual fibre-optic probe geometries. Experiments and simulations using a single probe revealed that scattering-induced artefacts appeared in fluorescence emission spectra, while fluorescence lifetimes were unchanged. Although fluorescence lifetime measurements are generally more robust to scattering artefacts than are measurements of fluorescence spectra, in the dual-probe geometry scattering-induced changes in apparent lifetime were predicted both from diffusion theory and via Monte Carlo simulation, as well as measured experimentally. In all cases, the recovered apparent lifetime increased with increasing scattering and increasing source-detector separation. Diffusion theory consistently underestimated the magnitude of these increases in apparent lifetime (predicting a maximum increase of ∼15%), while Monte Carlo simulations and experiment were closely matched (showing increases as large as 30%). These results indicate that quantitative simulations of time-resolved fluorescence propagation in turbid media will be important for accurate recovery of fluorophore lifetimes in biological spectroscopy and imaging applications. (author)
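
    The record's central point, that scattering inflates the apparent lifetime and more so at larger source-detector separation, can be caricatured in a few lines of Monte Carlo. The sketch below is a deliberately crude toy, not the validated model of the paper: the path-length statistics, the optical constant and the resulting bias magnitudes are invented, and a real simulation would track individual scattering events and fit the decay tail properly.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 0.214   # assumed speed of light in tissue, mm/ns (refractive index ~1.4)

def apparent_lifetime(tau_ns, musp, sep_mm, n=100_000):
    """Toy picture only: each detected photon travels a random path that
    is longer and more variable for stronger reduced scattering musp,
    then waits an exponential fluorescence delay. Subtracting just the
    ballistic flight time (as a naive analysis would) inflates the
    recovered lifetime, increasingly so with scattering and separation."""
    path = sep_mm * (1.0 + rng.gamma(2.0, musp * sep_mm / 2.0, size=n))
    t = path / C + rng.exponential(tau_ns, size=n)
    return t.mean() - sep_mm / C          # ns

for musp in (0.5, 1.0, 2.0):              # /mm, invented values
    print(musp, round(apparent_lifetime(1.0, musp, 1.0), 2))
```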

  2. Theoretical basis for graphite stress analysis in BERSAFE

    International Nuclear Information System (INIS)

    Harper, P.G.

    1980-03-01

    The BERSAFE finite element computer program for structural analysis has been extended to deal with structures made from irradiated graphite. This report describes the material behaviour which has been modelled and gives the theoretical basis for the solution procedure. (author)

  3. Theoretical and computational studies of excitons in conjugated polymers

    Science.gov (United States)

    Barford, William; Bursill, Robert J.; Smith, Richard W.

    2002-09-01

    We present a theoretical and computational analysis of excitons in conjugated polymers. We use a tight-binding model of π-conjugated electrons, with 1/r interactions for large r. In both the weak-coupling limit (defined by W≫U) and the strong-coupling limit (defined by W≪U) we derive effective-particle models. We compare these to density matrix renormalization group (DMRG) calculations, and find good agreement in the extreme limits. We use these analytical results to interpret the DMRG calculations in the intermediate-coupling regime (defined by W~U), most applicable to conjugated polymers. We make the following conclusions. (1) In the weak-coupling limit the bound states are Mott-Wannier excitons, i.e., conduction-band electrons bound to valence-band holes. Singlet and triplet excitons whose relative wave functions are odd under a reflection of the relative coordinate are degenerate. Thus, the 2^1A_g^+ and 1^3A_g^- states are degenerate in this limit. (2) In the strong-coupling limit the bound states are Mott-Hubbard excitons, i.e., particles in the upper Hubbard band bound to holes in the lower Hubbard band. These bound states occur in doublets of even and odd parity excitons. Triplet excitons are magnons bound to the singlet excitons, and hence are degenerate with their singlet counterparts. (3) In the intermediate-coupling regime Mott-Wannier excitons are the more appropriate description for large dimerization, while for the undimerized chain Mott-Hubbard excitons are the correct description. For dimerizations relevant to polyacetylene and polydiacetylene both Mott-Hubbard and Mott-Wannier excitons are present. (4) For all coupling strengths an infinite number of bound states exist for 1/r interactions for an infinite polymer. As a result of the discreteness of the lattice and the restrictions on the exciton wave functions in one dimension, the progression of states does not follow the Rydberg series. In practice, excitons whose particle-hole separation exceeds the length of the polymer

  4. Theoretical models for development competence of health protection and promotion

    Directory of Open Access Journals (Sweden)

    Cesnaviciene J.

    2014-01-01

    Full Text Available The competences of health protection and promotion are mentioned in various legislative documents that regulate education and health policy. Research on the health conditions of Lithuania's population has disclosed the deteriorating health status of society, even of children. It has also been found that the focus on health education is not adequate. A number of national and international health programmes have been realized and educational methodological tools prepared in Lithuania; however, insufficient attention to health promotion models has been noticed. The objective of this article is to discuss the theoretical models used in the health education field. The questions to be answered: what theoretical models are used in order to develop the competence of health protection and promotion? Who employs particular models? What are the advantages of the various models? What conceptions unite and characterize the theoretical models? The analysis of scientific literature revealed a number of diverse health promotion models; however, none of them is dominant. Some of the models focus on the intrapersonal, others on the interpersonal or community level, but in general they can be characterized as cognitive-behavioural models distinguished by three main conceptions: (1) healthy living is determined by perceived health-related knowledge: what is known and understood influences behaviour; (2) knowledge in the field of healthy living is an essential but insufficient condition for behaviour change; (3) a healthy lifestyle is strongly influenced by perception, motivation, skills and habits as well as the social environment. These are the components that are typical of all theoretical models and that reflect the whole set of conditions influencing healthy living.

  5. Computational and theoretical studies of globular proteins

    Science.gov (United States)

    Pagan, Daniel L.

    Protein crystallization is often achieved in experiment through a trial and error approach. To date, there exists a dearth of theoretical understanding of the initial conditions necessary to promote crystallization. While a better understanding of crystallization will help to create good crystals suitable for structure analysis, it will also allow us to prevent the onset of certain diseases. The core of this thesis is to model and, ultimately, understand the phase behavior of protein particles in solution. Toward this goal, we calculate the fluid-fluid coexistence curve in the vicinity of the metastable critical point of the modified Lennard-Jones potential, where it has been shown that nucleation is increased by many orders of magnitude. We use finite-size scaling techniques and grand canonical Monte Carlo simulation methods. This has allowed us to pinpoint the critical point and subcritical region with high accuracy in spite of the critical fluctuations that hinder sampling using other Monte Carlo techniques. We also attempt to model the phase behavior of the gamma-crystallins, mutations of which have been linked to genetic cataracts. The complete phase behavior of the square well potential at the ranges of attraction lambda = 1.15 and lambda = 1.25 is calculated and compared with that of the gammaII-crystallin. The role of solvent is also important in the crystallization process and affects the phase behavior of proteins in solution. We study a model that accounts for the contribution of the solvent free-energy to the free-energy of globular proteins. This model allows us to model phase behavior that includes solvent.

  6. Complementing theoretical biochemistry with the use of computer aids (Symposium)

    Directory of Open Access Journals (Sweden)

    R Herrera

    2012-05-01

    Full Text Available Teaching biochemistry in the current state of science and society requires a special motivation for learning, especially for students for whom biochemistry is only one of the courses in their careers. The traditional way of teaching, based on a mostly unidirectional teacher-student relationship, does not fulfil the needs imposed in this era. Considering the current situation, university students require new abilities in their training, and the use of computers can be a facility for discovery and research, enabling the experience of new and diverse situations. The design of teaching material for undergraduate students who take biochemistry as a complementary course should be seen as an opportunity to complement theoretical aspects of the current courses. We have used three different approaches: (I) modelling proteins, indicating key motifs in the three-dimensional structure and residues where inhibitors can attach; (II) generation of activities by the use of sensors; and (III) elaborating active quizzes through which students can be guided in their learning. Building knowledge based on practical experience can improve students' competence in basic science, and the learning process can be complemented by the use of dynamic models.

  7. A theoretical model of multielectrode DBR lasers

    DEFF Research Database (Denmark)

    Pan, Xing; Olesen, Henning; Tromborg, Bjarne

    1988-01-01

    A theoretical model for two- and three-section tunable distributed Bragg reflector (DBR) lasers is presented. The static tuning properties are studied in terms of threshold current, linewidth, oscillation frequency, and output power. Regions of continuous tuning for three-section DBR lasers...

  8. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Science.gov (United States)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
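
    As a pointer to how TMCMC staging works in practice, the sketch below implements one commonly used ingredient of the algorithm: choosing the next tempering exponent so that the coefficient of variation of the plausibility weights hits a target value. It is a generic illustration of that rule, not code from the Π4U framework, and the toy log-likelihood values are random placeholders.

```python
import numpy as np

def next_beta(log_lik, beta, target_cov=1.0):
    """One annealing step of TMCMC: raise the tempering exponent until
    the coefficient of variation of the plausibility weights
    w_i = L_i**(beta_new - beta) reaches a target (by bisection)."""
    lo, hi = beta, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        w = np.exp((mid - beta) * (log_lik - log_lik.max()))
        cov = w.std() / w.mean()
        lo, hi = (mid, hi) if cov < target_cov else (lo, mid)
    return 0.5 * (lo + hi)

# Toy stage: log-likelihoods of 1,000 prior samples (placeholders).
rng = np.random.default_rng(0)
print(round(next_beta(rng.normal(-50.0, 5.0, 1000), beta=0.0), 4))
```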

  9. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    International Nuclear Information System (INIS)

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-01-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow

  10. A theoretical model of semi-elliptic surface crack growth

    Directory of Open Access Journals (Sweden)

    Shi Kaikai

    2014-06-01

    Full Text Available A theoretical model of semi-elliptic surface crack growth, based on low-cycle strain damage accumulation near the crack tip along the cracking direction and the Newman–Raju formula, is developed. The crack is regarded as a sharp notch with a small curvature radius, and the process zone is assumed to be the size of the cyclic plastic zone. The modified Hutchinson, Rice and Rosengren (HRR) formulations are used in the presented study. It is assumed that the shape of the surface crack front is controlled by two critical points: the deepest point and the surface point. The theoretical model is applied to a semi-elliptic surface-cracked Al 7075-T6 alloy plate under cyclic loading, and five different initial crack shapes are discussed in the present study. Good agreement between experimental and theoretical results is obtained.

  11. Computational modeling of geometry dependent phonon transport in silicon nanostructures

    Science.gov (United States)

    Cheney, Drew A.

    Recent experiments have demonstrated that thermal properties of semiconductor nanostructures depend on nanostructure boundary geometry. Phonons are quantized mechanical vibrations that are the dominant carrier of heat in semiconductor materials, and their aggregate behavior determines a nanostructure's thermal performance. Phonon-geometry scattering processes as well as waveguiding effects which result from coherent phonon interference are responsible for the shape dependence of thermal transport in these systems. Nanoscale phonon-geometry interactions provide a mechanism by which nanostructure geometry may be used to create materials with targeted thermal properties. However, the ability to manipulate material thermal properties via controlling nanostructure geometry is contingent upon first obtaining increased theoretical understanding of fundamental geometry induced phonon scattering processes and having robust analytical and computational models capable of exploring the nanostructure design space, simulating the phonon scattering events, and linking the behavior of individual phonon modes to overall thermal behavior. The overall goal of this research is to predict and analyze the effect of nanostructure geometry on thermal transport. To this end, a harmonic lattice-dynamics based atomistic computational modeling tool was created to calculate phonon spectra and modal phonon transmission coefficients in geometrically irregular nanostructures. The computational tool is used to evaluate the accuracy and regimes of applicability of alternative computational techniques based upon continuum elastic wave theory. The model is also used to investigate phonon transmission and thermal conductance in diameter modulated silicon nanowires. Motivated by the complexity of the transmission results, a simplified model based upon long wavelength beam theory was derived and helps explain geometry induced phonon scattering of low frequency nanowire phonon modes.
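
    The harmonic lattice-dynamics machinery the thesis builds on reduces, in its simplest textbook instance, to the dispersion relation of a 1D monatomic chain. The sketch below computes that dispersion and a group velocity; the spring constant and mass are set to illustrative unit values and are not parameters from the thesis.

```python
import numpy as np

def chain_dispersion(k, K_spring, m, a=1.0):
    """Harmonic lattice dynamics of a 1D monatomic chain, the simplest
    instance of the method: omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|."""
    return 2.0 * np.sqrt(K_spring / m) * np.abs(np.sin(0.5 * k * a))

k = np.linspace(-np.pi, np.pi, 201)                # first Brillouin zone, a = 1
omega = chain_dispersion(k, K_spring=1.0, m=1.0)   # unit spring and mass
v_g = np.gradient(omega, k)                        # group velocity d(omega)/dk
print(round(omega.max(), 3), round(v_g[-1], 3))    # band top; v_g -> 0 at zone edge
```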

  12. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models are playing important roles in design and analysis of chemicals-based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working ... development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines ... In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse ...

  13. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used for computer forensic analysis; this paper proposes and elaborates a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...

  14. Computational Modeling of Cobalt-based Water Oxidation: Current Status and Future Challenges

    Science.gov (United States)

    Schilling, Mauro; Luber, Sandra

    2018-04-01

    A lot of effort is nowadays put into the development of novel water oxidation catalysts. In this context, mechanistic studies are crucial in order to elucidate the reaction mechanisms governing this complex process, and to derive new design paradigms and strategies for improving the stability and efficiency of those catalysts. This review is focused on recent theoretical mechanistic studies in the field of homogeneous cobalt-based water oxidation catalysts. In the first part, computational methodologies and protocols are summarized and evaluated on the basis of their applicability towards real catalytic or smaller model systems, whereby special emphasis is laid on the choice of an appropriate model system. In the second part, an overview of mechanistic studies is presented, from which conceptual guidelines are drawn on how to approach novel studies of catalysts and how to further develop the field of computational modeling of water oxidation reactions.

  15. Computational Modeling of Cobalt-Based Water Oxidation: Current Status and Future Challenges

    Directory of Open Access Journals (Sweden)

    Mauro Schilling

    2018-04-01

    Full Text Available A lot of effort is nowadays put into the development of novel water oxidation catalysts. In this context, mechanistic studies are crucial in order to elucidate the reaction mechanisms governing this complex process, new design paradigms and strategies how to improve the stability and efficiency of those catalysts. This review is focused on recent theoretical mechanistic studies in the field of homogeneous cobalt-based water oxidation catalysts. In the first part, computational methodologies and protocols are summarized and evaluated on the basis of their applicability toward real catalytic or smaller model systems, whereby special emphasis is laid on the choice of an appropriate model system. In the second part, an overview of mechanistic studies is presented, from which conceptual guidelines are drawn on how to approach novel studies of catalysts and how to further develop the field of computational modeling of water oxidation reactions.

  16. A review of game-theoretic models of road user behaviour.

    Science.gov (United States)

    Elvik, Rune

    2014-01-01

    This paper reviews game-theoretic models that have been developed to explain road user behaviour in situations where road users interact with each other. The paper includes the following game-theoretic models: (1) a general model of the interaction between road users and their possible reaction to measures improving safety (behavioural adaptation); (2) choice of vehicle size as a Prisoners' dilemma game; (3) speed choice as a co-ordination game; (4) speed compliance as a game between drivers and the police; (5) merging into traffic from an acceleration lane as a mixed-strategy game; (6) choice of level of attention in following situations as an evolutionary game; (7) choice of departure time to avoid congestion as a variant of a Prisoners' dilemma game; (8) interaction between cyclists crossing the road and car drivers; (9) dipping headlights at night well ahead of the point when glare becomes noticeable; (10) choice of evasive action in a situation when cars are on collision course. The models reviewed are different in many respects, but a common feature is that they can explain how informal norms of behaviour can develop among road users and be sustained even if these informal norms violate the formal regulations of the traffic code. Game-theoretic models are not applicable to every conceivable interaction between road users or to situations in which road users choose behaviour without interacting with other road users. Nevertheless, it is likely that game-theoretic models can be applied more widely than they have been until now. Copyright © 2013 Elsevier Ltd. All rights reserved.
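
    Item (2) in the list above has the structure of a 2x2 game that is easy to verify mechanically. The payoff numbers in the sketch below are invented to reproduce a Prisoners' dilemma (a large car protects its own occupants at the other party's expense); the helper enumerates pure-strategy Nash equilibria.

```python
# Payoffs (row player, column player) for choosing a small or large car.
# Numbers are illustrative only, chosen to give a Prisoners' dilemma.
A = {("small", "small"): (3, 3), ("small", "large"): (0, 4),
     ("large", "small"): (4, 0), ("large", "large"): (1, 1)}

def pure_nash(payoffs):
    """Return strategy pairs where neither player gains by deviating."""
    strategies = {s for pair in payoffs for s in pair}
    eq = []
    for r, c in payoffs:
        u_r, u_c = payoffs[(r, c)]
        if (all(payoffs[(r2, c)][0] <= u_r for r2 in strategies) and
                all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)):
            eq.append((r, c))
    return eq

print(pure_nash(A))   # [('large', 'large')] -- the inefficient equilibrium
```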

  17. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
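
    A minimal version of the paper's entropy-based comparison can be assembled with networkx: compute the Shannon entropy of the degree distribution for an empirical graph and for one realization of each candidate model, then pick the closest. The karate-club graph as the "empirical" stand-in and the model parameters are illustrative; a faithful reproduction would average over many realizations and use several centrality measures, as the paper does.

```python
import math
import networkx as nx

def degree_entropy(g):
    """Shannon entropy of the degree distribution, one of the centrality
    entropies the paper compares across graph models."""
    hist = nx.degree_histogram(g)
    n = g.number_of_nodes()
    return -sum((c / n) * math.log2(c / n) for c in hist if c > 0)

empirical = nx.karate_club_graph()               # stand-in empirical graph
n, m = empirical.number_of_nodes(), empirical.number_of_edges()
models = {
    "erdos-renyi": nx.gnm_random_graph(n, m, seed=1),
    "barabasi-albert": nx.barabasi_albert_graph(n, max(1, m // n), seed=1),
    "watts-strogatz": nx.watts_strogatz_graph(n, max(2, 2 * m // n), 0.1, seed=1),
}
target = degree_entropy(empirical)
best = min(models, key=lambda name: abs(degree_entropy(models[name]) - target))
print({name: round(degree_entropy(g), 3) for name, g in models.items()}, "->", best)
```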

  18. Irrigant flow in the root canal: experimental validation of an unsteady Computational Fluid Dynamics model using high-speed imaging.

    Science.gov (United States)

    Boutsioukis, C; Verhaagen, B; Versluis, M; Kastrinakis, E; van der Sluis, L W M

    2010-05-01

    To compare the results of a Computational Fluid Dynamics (CFD) simulation of the irrigant flow within a prepared root canal, during final irrigation with a syringe and a needle, with experimental high-speed visualizations and theoretical calculations of an identical geometry and to evaluate the effect of off-centre positioning of the needle inside the root canal. A CFD model was created to simulate irrigant flow from a side-vented needle inside a prepared root canal. Calculations were carried out for four different positions of the needle inside a prepared root canal. An identical root canal model was made from poly-dimethyl-siloxane (PDMS). High-speed imaging of the flow seeded with particles and Particle Image Velocimetry (PIV) were combined to obtain the velocity field inside the root canal experimentally. Computational, theoretical and experimental results were compared to assess the validity of the computational model. Comparison between CFD computations and experiments revealed good agreement in the velocity magnitude and vortex location and size. Small lateral displacements of the needle inside the canal had a limited effect on the flow field. High-speed imaging experiments together with PIV of the flow inside a simulated root canal showed a good agreement with the CFD model, even though the flow was unsteady. Therefore, the CFD model is able to predict reliably the flow in similar domains.

  19. ComPLuS Model: A New Insight in Pupils' Collaborative Talk, Actions and Balance during a Computer-Mediated Music Task

    Science.gov (United States)

    Nikolaidou, Georgia N.

    2012-01-01

    This exploratory work describes and analyses the collaborative interactions that emerge during computer-based music composition in the primary school. The study draws on socio-cultural theories of learning, originated within Vygotsky's theoretical context, and proposes a new model, namely Computer-mediated Praxis and Logos under Synergy (ComPLuS).…

  20. Computers and theoretical physics

    International Nuclear Information System (INIS)

    Terrano, A.E.

    1987-01-01

    The outline of the lectures is as follows: 1) The architecture of conventional computers. 2) The design of special-purpose machines. 3) Elements of modern programming. 4) Algebraic and interactive programs. (orig./BBO)

  1. EXPERIMENTAL AND THEORETICAL FOUNDATIONS AND PRACTICAL IMPLEMENTATION OF TECHNOLOGY BRAIN-COMPUTER INTERFACE

    Directory of Open Access Journals (Sweden)

    A. Ya. Kaplan

    2013-01-01

    Full Text Available Brain-computer interface (BCI) technology allows a person to learn to control external devices via voluntary regulation of his or her own EEG, directly from the brain and without the involvement of nerves and muscles. Initially, the main goal of BCI was to replace or restore motor function for people disabled by neuromuscular disorders. Currently, the scope of BCI design has broadened considerably, capturing different aspects of the life of a healthy person. This article discusses the theoretical, experimental and technological basis of BCI development and systematizes the critical fields for real implementation of these technologies.

  2. Towards a theoretical framework for analyzing complex linguistic networks

    CERN Document Server

    Lücking, Andy; Banisch, Sven; Blanchard, Philippe; Job, Barbara

    2016-01-01

    The aim of this book is to advocate and promote network models of linguistic systems that are both based on thorough mathematical models and substantiated in terms of linguistics. In this way, the book contributes first steps towards establishing a statistical network theory as a theoretical basis of linguistic network analysis at the border of the natural sciences and the humanities. This book addresses researchers who want to get familiar with theoretical developments, computational models and their empirical evaluation in the field of complex linguistic networks. It is intended for all those who are interested in statistical models of linguistic systems from the point of view of network research. This includes all relevant areas of linguistics ranging from phonological, morphological and lexical networks on the one hand to syntactic, semantic and pragmatic networks on the other. In this sense, the volume concerns readers from many disciplines such as physics, linguistics, computer science and information scien...

  3. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  4. The Modeling and Complexity of Dynamical Systems by Means of Computation and Information Theories

    Directory of Open Access Journals (Sweden)

    Robert Logozar

    2011-12-01

    Full Text Available We present the modeling of dynamical systems and finding of their complexity indicators by the use of concepts from computation and information theories, within the framework of J. P. Crutchfield's theory of ε-machines. A short formal outline of the ε-machines is given. In this approach, dynamical systems are analyzed directly from the time series that is received from a properly adjusted measuring instrument. The binary strings are parsed through the parse tree, within which morphologically and probabilistically unique subtrees or morphs are recognized as system states. The outline and precise interrelation of the information-theoretic entropies and complexities emanating from the model is given. The paper serves also as a theoretical foundation for the future presentation of the DSA program that implements the ε-machine modeling up to the stochastic finite automata level.
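
    In the spirit of the parse-tree construction described above, the sketch below computes block (subword) entropies of a binary time series; the growth of H(L) with block length L separates a period-two signal from a fair coin, which is the kind of statistic ε-machine reconstruction builds on. It illustrates the underlying information theory only and is not an implementation of the DSA program.

```python
import random
from collections import Counter
from math import log2

def block_entropy(series, length):
    """Shannon entropy of length-L subwords: the raw statistics from
    which epsilon-machine states are built by merging parse-tree
    subtrees (morphs) that predict the same future."""
    blocks = [series[i:i + length] for i in range(len(series) - length + 1)]
    total = len(blocks)
    return -sum((c / total) * log2(c / total)
                for c in Counter(blocks).values())

random.seed(0)
periodic = "01" * 500                                      # zero entropy rate
noisy = "".join(random.choice("01") for _ in range(1000))  # ~1 bit/symbol
for L in (1, 2, 4, 8):
    print(L, round(block_entropy(periodic, L), 2),
          round(block_entropy(noisy, L), 2))
```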

  5. A model ecosystem experiment and its computational simulation studies

    International Nuclear Information System (INIS)

    Doi, M.

    2002-01-01

    A simplified microbial model ecosystem and its computer simulation model are introduced as an eco-toxicity test for the assessment of environmental responses to environmental impacts. To take the effects of interactions between species and the environment into account, one option is to select a keystone species on the basis of ecological knowledge and to put it in a single-species toxicity test. Another option proposed is to frame the eco-toxicity tests as an experimental micro-ecosystem study and a theoretical model ecosystem analysis. With these tests, stressors which are more harmful to ecosystems should be replaced with less harmful ones on the basis of unified measures. Management of radioactive materials, chemicals, hyper-eutrophication, and other artificial disturbances of ecosystems should be discussed consistently from the unified viewpoint of environmental protection. (N.C.)

  6. Ab-initio modeling of an iron laser-induced plasma: Comparison between theoretical and experimental atomic emission spectra

    International Nuclear Information System (INIS)

    Colgan, J.; Judge, E.J.; Kilcrease, D.P.; Barefield, J.E.

    2014-01-01

    We report on efforts to model the Fe emission spectrum generated from laser-induced breakdown spectroscopy (LIBS) measurements on samples of pure iron oxide (Fe2O3). Our modeling efforts consist of several components. We begin with ab-initio atomic structure calculations performed by solving the Hartree–Fock equations for the neutral and singly ionized stages of Fe. Our energy levels are then adjusted to their experimentally known values. The atomic transition probabilities and atomic collision quantities are also computed in an ab-initio manner. We perform LTE or non-LTE calculations that generate level populations and, subsequently, an emission spectrum for the iron plasma for a range of electron temperatures and electron densities. Such calculations are then compared to the experimental spectrum. We regard our work as a preliminary modeling effort that ultimately strives towards the modeling of emission spectra from even more complex samples where less atomic data are available. - Highlights: • LIBS plasma of iron oxide • Ab-initio theoretical modeling • Discussion of LTE versus non-LTE criteria and assessment • Boltzmann plots for Fe—determination of when LTE is a valid assumption • Emission spectra for Fe—comparison of theoretical modeling and measurement: good agreement obtained
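
    The Boltzmann-plot diagnostic mentioned in the highlights admits a compact numerical sketch: for a set of emission lines, ln(Iλ/(gA)) versus upper-level energy is linear with slope -1/(k_B·T) under LTE. The line parameters below are synthetic placeholders, not real Fe I data, generated from a known temperature so the fit can be checked.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K

def boltzmann_temperature(wl_nm, intensity, g, a_ki, e_up_ev):
    """Boltzmann plot: fit ln(I*lambda/(g*A)) vs upper-level energy E_k;
    under LTE the slope is -1/(k_B*T)."""
    y = np.log(intensity * wl_nm / (g * a_ki))
    slope = np.polyfit(e_up_ev, y, 1)[0]
    return -1.0 / (K_B * slope)

# Synthetic lines drawn from a 10,000 K Boltzmann distribution
# (wavelengths, g and A values are placeholders, not real Fe I data).
e_up = np.array([3.2, 3.6, 4.1, 4.6, 5.1])            # eV
g = np.array([9, 7, 5, 7, 9]); a = np.array([1e7, 2e7, 5e6, 3e7, 8e6])
wl = np.array([430.0, 420.0, 410.0, 400.0, 390.0])    # nm
T_true = 10_000.0
I = g * a / wl * np.exp(-e_up / (K_B * T_true))
print(round(boltzmann_temperature(wl, I, g, a, e_up)))  # recovers ~10000 K
```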

  7. Mechanisms of Neurofeedback: A Computation-theoretic Approach.

    Science.gov (United States)

    Davelaar, Eddy J

    2018-05-15

    Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting - the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
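
    The analogy drawn in the record, neurofeedback as importance sampling over striatal representations, can be illustrated with a one-dimensional toy: candidates are drawn from a prior, weighted by a reward function peaked at the target EEG feature, and the weighted mean shifts toward the rewarded region. All distributions and the 10.5 Hz target below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean_importance(prior_sample, likelihood, n=50_000):
    """Importance-sampling estimate of a posterior mean: draw candidates
    from the prior, weight each by how well it matches the reward."""
    theta = prior_sample(n)
    w = likelihood(theta)
    w = w / w.sum()
    return (w * theta).sum()

# Toy 1-D example: Gaussian prior over a 'representation' parameter,
# reward (likelihood) peaked where peak alpha hits a 10.5 Hz target.
prior = lambda n: rng.normal(10.0, 1.0, n)                   # Hz
reward = lambda f: np.exp(-0.5 * ((f - 10.5) / 0.3) ** 2)
print(round(posterior_mean_importance(prior, reward), 3))    # shifts toward 10.5
```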

  8. Computational modeling of human oral bioavailability: what will be next?

    Science.gov (United States)

    Cabrera-Pérez, Miguel Ángel; Pham-The, Hai

    2018-06-01

    The oral route is the most convenient way of administering drugs. Therefore, accurate determination of oral bioavailability is paramount during drug discovery and development. Quantitative structure-property relationship (QSPR), rule-of-thumb (RoT) and physiologically based pharmacokinetic (PBPK) approaches are promising alternatives for early oral bioavailability prediction. Areas covered: The authors give insight into the factors affecting bioavailability, the fundamental theoretical framework and the practical aspects of computational methods for predicting this property. They also give their perspectives on future computational models for estimating oral bioavailability. Expert opinion: Oral bioavailability is a multi-factorial pharmacokinetic property whose accurate prediction is challenging. For RoT and QSPR modeling, the reliability of datasets, the significance of molecular descriptor families and the diversity of chemometric tools used are important factors that define model predictability and interpretability. Likewise, for PBPK modeling the integrity of the pharmacokinetic data, the number of input parameters, the complexity of statistical analysis and the software packages used are relevant factors in bioavailability prediction. Although these approaches have been utilized independently, the tendency to use hybrid QSPR-PBPK approaches together with the exploration of ensemble and deep-learning systems for QSPR modeling of oral bioavailability has opened new avenues for developing promising tools for oral bioavailability prediction.

  9. Theoretical study of diaquamalonatozinc(II) single crystal for ...

    Indian Academy of Sciences (India)

    MITESH CHAKRABORTY

    2017-11-28

    The aim of the present paper is to employ theoretical methods to investigate the zero field splitting ... using quantum chemistry computational models ...

  10. The history of theoretical, material and computational mechanics mathematics meets mechanics and engineering

    CERN Document Server

    2014-01-01

    This collection of 23 articles is the output of lectures in special sessions on "The History of Theoretical, Material and Computational Mechanics" within the yearly conferences of the GAMM in the years 2010 in Karlsruhe, Germany, 2011 in Graz, Austria, and 2012 in Darmstadt, Germany; GAMM is the "Association for Applied Mathematics and Mechanics", founded in 1922 by Ludwig Prandtl and Richard von Mises. The contributions in this volume discuss different aspects of mechanics. They are related to solid and fluid mechanics in general and to specific problems in these areas, including the development of numerical solution techniques. In the first part the origins and developments of conservation principles in mechanics and related variational methods are treated together with challenging applications from the 17th to the 20th century. Part II treats general and more specific aspects of material theories of deforming solid continua and porous soils, and Part III presents important theoretical and enginee...

  11. Patients’ Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test

    Science.gov (United States)

    Dou, Kaili; Yu, Ping; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong

    2017-01-01

    Background Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. Objective The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients’ acceptance of smartphone health technology for chronic disease management. Methods Multiple theories and factors that may influence patients’ acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients’ acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients’ actual use of a smartphone health app. The partial least square method was used to test the theoretical model. Results The model accounted for .412 of the variance in patients’ intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients’ smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients’ intentions to use the technology. Age and gender had no significant influence on patients’ acceptance of smartphone technology. The study also

  12. K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review

    African Journals Online (AJOL)

    Toshiba

    experimental design for theoretical modelling of sales force compensation is vivid and ... different from the concept of a model in decision support systems and behavioural ... "refers to the fact that people may not optimize." This, of course, is ...

  13. Overhead Crane Computer Model

    Science.gov (United States)

    Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.

    2018-03-01

    The paper describes a computer model of an overhead crane system. The designed overhead crane system consists of hoisting, trolley and crane mechanisms as well as a two-axis payload system. With the help of the differential equations of motion of the specified mechanisms, derived through the Lagrange equation of the second kind, it is possible to build a computer model of the overhead crane. The computer model was obtained using Matlab software. Transients of coordinate, linear speed and motor torque of the trolley and crane mechanism systems were simulated. In addition, transients of payload swaying were obtained with respect to the vertical axis. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is a useful means of studying positioning control and anti-sway control systems.
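
    The payload sway dynamics mentioned above reduce, for a single axis, to a pendulum suspended from an accelerating trolley; the Lagrange equation of the second kind gives l·φ'' = -g·sin φ - a(t)·cos φ. The sketch below integrates this with a made-up bang-coast acceleration profile and rope length, purely to illustrate residual sway; it is not the paper's Matlab model.

```python
import numpy as np

G = 9.81   # m/s^2

def sway_step(phi, dphi, a_trolley, length, dt):
    """Euler step of the payload sway equation for a pendulum hung from
    an accelerating trolley: l*phi'' = -g*sin(phi) - a(t)*cos(phi)."""
    ddphi = -(G * np.sin(phi) + a_trolley * np.cos(phi)) / length
    return phi + dphi * dt, dphi + ddphi * dt

# Trolley accelerates for 2 s then coasts; watch the residual swing.
phi, dphi, dt = 0.0, 0.0, 1e-3
for k in range(int(6.0 / dt)):
    a = 0.5 if k * dt < 2.0 else 0.0        # m/s^2, bang-coast profile
    phi, dphi = sway_step(phi, dphi, a, length=5.0, dt=dt)
print(round(np.degrees(phi), 2))            # residual sway angle, deg
```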

  14. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  15. Some Model Theoretic Remarks on Bass Modules

    Directory of Open Access Journals (Sweden)

    E. Momtahan

    2011-09-01

    Full Text Available We study Bass modules, Bass rings, and related concepts from a model-theoretic point of view. We observe that the class of Bass modules (over a fixed ring) is not stable under elementary equivalence. We also observe under which conditions the class of Bass rings is stable under elementary equivalence.

  16. PREFACE: 4th Workshop on Theory, Modelling and Computational Methods for Semiconductors (TMCSIV)

    Science.gov (United States)

    Tomić, Stanko; Probert, Matt; Migliorato, Max; Pal, Joydeep

    2014-06-01

    These conference proceedings contain the written papers of the contributions presented at the 4th International Conference on Theory, Modelling and Computational Methods for Semiconductor materials and nanostructures. The conference was held at the MediaCityUK, University of Salford, Manchester, UK on 22-24 January 2014. The previous conferences in this series took place in 2012 at the University of Leeds, in 2010 at St William's College, York and in 2008 at the University of Manchester, UK. The development of high-performance computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational, optical and electronic properties of semiconductors and their hetero- and nano-structures. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in semiconductor science and technology, where there is substantial potential for time-saving in R&D. Theoretical approaches represented in this meeting included: Density Functional Theory, Semi-empirical Electronic Structure Methods, Multi-scale Approaches, Modelling of PV devices, Electron Transport, and Graphene. Topics included, but were not limited to: Optical Properties of Quantum Nanostructures including Colloids and Nanotubes, Plasmonics, Magnetic Semiconductors, Photonic Structures, and Electronic Devices. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the theoretical modelling of Group IV, III-V and II-VI semiconductors, as well as students, postdocs and early-career researchers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students, with several lectures given by recognized experts in various theoretical approaches. The following two days showcased some of the best theoretical research carried out in the UK in this field, with several contributions also from representatives of

  17. Testing a theoretical model of clinical nurses' intent to stay.

    Science.gov (United States)

    Cowden, Tracy L; Cummings, Greta G

    2015-01-01

    Published theoretical models of nurses' intent to stay (ITS) report inconsistent outcomes, and not all hypothesized models have been adequately tested. Research has focused on cognitive rather than emotional determinants of nurses' ITS. The aim of this study was to empirically verify a complex theoretical model of nurses' ITS that includes both affective and cognitive determinants and to explore the influence of relational leadership on staff nurses' ITS. The study was a correlational, mixed-method, nonexperimental design. A subsample of the Quality Work Environment Study survey data 2009 (n = 415 nurses) was used to test our theoretical model of clinical nurses' ITS as a structural equation model. The model explained 63% of variance in ITS. Organizational commitment, empowerment, and desire to stay were the model concepts with the strongest effects on nurses' ITS. Leadership practices indirectly influenced ITS. How nurses evaluate and respond to their work environment is both an emotional and rational process. Health care organizations need to be cognizant of the influence that nurses' feelings and views of their work setting have on their intention decisions and integrate that knowledge into the development of retention strategies. Leadership practices play an important role in staff nurses' perceptions of the workplace. Identifying the mechanisms by which leadership influences staff nurses' intentions to stay presents additional focus areas for developing retention strategies.

  18. A field theoretic model for static friction

    OpenAIRE

    Mahyaeh, I.; Rouhani, S.

    2013-01-01

    We present a field theoretic model for friction, where the friction coefficient between two surfaces may be calculated based on elastic properties of the surfaces. We assume that the geometry of the contact surface is not unusual. We verify that Amontons' laws hold, i.e., that the friction force is proportional to the normal load. This model gives the opportunity to calculate the static coefficient of friction for a few cases, and we show that it is in agreement with observed values. Furthermore we show that the ...

  19. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo eLee

    2013-12-01

Full Text Available We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naïveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strengths of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
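A minimal sketch of the likelihood-based classification idea described above, assuming discrete cue labels and hand-set model parameters (the paper's actual features, states, and training data are not reproduced here): two hidden Markov models, one per trust level, score an observed cue sequence via the forward algorithm, and the higher log-likelihood wins.

```python
# Sketch, not the authors' implementation: classify a sequence of discrete
# nonverbal-cue labels by comparing log-likelihoods under two HMMs.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.
    pi: (S,) initial state probs; A: (S, S) transitions; B: (S, V) emissions."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                       # rescale to avoid underflow
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]     # propagate, then emit
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

# two hypothetical 2-state models over 3 cue symbols (0 = lean forward,
# 1 = face touch, 2 = arms crossed); all parameters are made up
pi = np.array([0.6, 0.4])
A_hi = np.array([[0.8, 0.2], [0.3, 0.7]])
A_lo = np.array([[0.5, 0.5], [0.5, 0.5]])
B_hi = np.array([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]])
B_lo = np.array([[0.2, 0.4, 0.4], [0.1, 0.5, 0.4]])

cues = [0, 0, 1, 2, 1]
ll_hi = forward_loglik(cues, pi, A_hi, B_hi)
ll_lo = forward_loglik(cues, pi, A_lo, B_lo)
print("predicted:", "high trust" if ll_hi > ll_lo else "low trust")
```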

  20. Theoretical optical spectroscopy of complex systems

    Energy Technology Data Exchange (ETDEWEB)

Conte, A. Mosca, E-mail: adriano.mosca.conte@roma2.infn.it [MIFP, NAST, ETSF, CNR INFM-SMC, Università di Roma Tor Vergata, Via della Ricerca Scientifica 1, Roma (Italy); Violante, C., E-mail: claudia.violante@roma2.infn.it [MIFP, NAST, ETSF, CNR INFM-SMC, Università di Roma Tor Vergata, Via della Ricerca Scientifica 1, Roma (Italy); Missori, M., E-mail: mauro.missori@isc.cnr.it [Istituto dei Sistemi Complessi, Consiglio Nazionale delle Ricerche, Via Salaria Km 29.300, 00016 Monterotondo Scalo (Rome) (Italy); Bechstedt, F., E-mail: bech@ifto.physik.uni-jena.de [Institut für Festkörpertheorie und -optik, Friedrich-Schiller-Universität, Max-Wien-Platz 1, 07743 Jena (Germany); Teodonio, L. [MIFP, NAST, ETSF, CNR INFM-SMC, Università di Roma Tor Vergata, Via della Ricerca Scientifica 1, Roma (Italy); Istituto centrale per il restauro e la conservazione del patrimonio archivistico e librario (IC-RCPAL), Italian Ministry for Cultural Heritage, Via Milano 76, 00184 Rome (Italy); Ippoliti, E.; Carloni, P. [German Research School for Simulation Sciences, Jülich (Germany); Guidoni, L., E-mail: leonardo.guidoni@univaq.it [Università degli Studi di L'Aquila, Dipartimento di Chimica e Materiali, Via Campo di Pile, 67100 L'Aquila (Italy); Pulci, O., E-mail: olivia.pulci@roma2.infn.it [MIFP, NAST, ETSF, CNR INFM-SMC, Università di Roma Tor Vergata, Via della Ricerca Scientifica 1, Roma (Italy)

    2013-08-15

Highlights: ► We review some theoretical condensed matter ab initio spectroscopic computational techniques. ► We show several applications ranging from 0- to 3-dimensional systems. ► For each system studied, we show what kind of information can be obtained by performing these calculations. -- Abstract: We review here some of the most reliable and efficient computational theoretical ab initio techniques for the prediction of optical and electronic spectroscopic properties, and show some important applications to molecules, surfaces, and solids. We investigate the role of the solvent in the optical absorption spectrum of the indole molecule. We study the excited-state properties of a photo-active minimal model molecule for the retinal of rhodopsin, which is responsible for the vision mechanism in animals. We then present a study of the spectroscopic properties of the Si(1 1 1) surface. Finally we simulate a bulk system: paper, which is mainly made of cellulose, a pseudo-crystalline material representing 40% of the annual biomass production on Earth.

  1. Theoretical optical spectroscopy of complex systems

    International Nuclear Information System (INIS)

    Conte, A. Mosca; Violante, C.; Missori, M.; Bechstedt, F.; Teodonio, L.; Ippoliti, E.; Carloni, P.; Guidoni, L.; Pulci, O.

    2013-01-01

Highlights: ► We review some theoretical condensed matter ab initio spectroscopic computational techniques. ► We show several applications ranging from 0- to 3-dimensional systems. ► For each system studied, we show what kind of information can be obtained by performing these calculations. -- Abstract: We review here some of the most reliable and efficient computational theoretical ab initio techniques for the prediction of optical and electronic spectroscopic properties, and show some important applications to molecules, surfaces, and solids. We investigate the role of the solvent in the optical absorption spectrum of the indole molecule. We study the excited-state properties of a photo-active minimal model molecule for the retinal of rhodopsin, which is responsible for the vision mechanism in animals. We then present a study of the spectroscopic properties of the Si(1 1 1) surface. Finally we simulate a bulk system: paper, which is mainly made of cellulose, a pseudo-crystalline material representing 40% of the annual biomass production on Earth.

  2. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Energy Technology Data Exchange (ETDEWEB)

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics, and granular flow.
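The framework itself cannot be reproduced in a few lines, but the inference task it parallelizes can be sketched. Below is a plain random-walk Metropolis sampler for a one-parameter Bayesian calibration problem on synthetic data; Π4U would attack the same kind of posterior with TMCMC across many cores, so the model, prior, and step size here are illustrative only.

```python
# Hedged illustration of the Bayesian UQ task (not the Pi4U API or TMCMC):
# sample the posterior p(theta | data) with random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=50)   # synthetic observations

def log_posterior(theta):
    # flat prior; Gaussian likelihood with known noise sigma = 0.5
    return -0.5 * np.sum((data - theta) ** 2) / 0.5 ** 2

samples, theta = [], 0.0
for _ in range(5000):
    prop = theta + 0.2 * rng.normal()            # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop                             # accept
    samples.append(theta)

print("posterior mean ~", np.mean(samples[1000:]))   # burn-in discarded
```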

  3. The Computable Catchment: An executable document for model-data software sharing, reproducibility and interactive visualization

    Science.gov (United States)

    Gil, Y.; Duffy, C.

    2015-12-01

This paper proposes the concept of a "Computable Catchment", which is used to develop a collaborative platform for watershed modeling and data analysis. The object of the research is a sharable, executable document, similar to a PDF, that includes documentation of the underlying theoretical concepts, interactive computational/numerical resources, linkage to essential data repositories, and the ability for interactive model-data visualization and analysis. The executable document for each catchment is stored in the cloud with automatic provisioning and a unique identifier, allowing collaborative model and data enhancements for historical hydroclimatic reconstruction and/or future land-use or climate-change scenarios to be easily reconstructed or extended. The Computable Catchment adopts metadata standards for naming all variables in the model and the data. The a priori or initial data are derived from national data sources for soils, hydrogeology, climate, and land cover available from the www.hydroterre.psu.edu data service (Leonard and Duffy, 2015). The executable document is based on the Wolfram CDF (Computable Document Format), with an interactive open-source reader accessible on any modern computing platform. The CDF file and contents can be uploaded to a website or simply shared as a normal document while maintaining all interactive features of the model and data. The Computable Catchment concept represents one application of Geoscience Papers of the Future: an extensible document that combines theory, models, data, and analysis that are digitally shared, documented, and reused among research collaborators, students, educators, and decision makers.

  4. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long-duration spaceflight. Over the past decade, DAP developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system, and musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long-duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight, and post-flight data from short- and long-duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can readily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the models.

  5. Electron Scattering in Solid Matter A Theoretical and Computational Treatise

    CERN Document Server

    Zabloudil, Jan; Szunyogh, Laszlo

    2005-01-01

    Addressing graduate students and researchers, this book gives a very detailed theoretical and computational description of multiple scattering in solid matter. Particular emphasis is placed on solids with reduced dimensions, on full potential approaches and on relativistic treatments. For the first time approaches such as the Screened Korringa-Kohn-Rostoker method that have emerged during the last 5 – 10 years are reviewed, considering all formal steps such as single-site scattering, structure constants and screening transformations, and also the numerical point of view. Furthermore, a very general approach is presented for solving the Poisson equation, needed within density functional theory in order to achieve self-consistency. Going beyond ordered matter and translationally invariant systems, special chapters are devoted to the Coherent Potential Approximation and to the Embedded Cluster Method, used, for example, for describing nanostructured matter in real space. In a final chapter, physical properties...

  6. Theoretical Relevance of Neuropsychological Data for Connectionist Modelling

    Directory of Open Access Journals (Sweden)

    Mauricio Iza

    2011-05-01

Full Text Available The symbolic information-processing paradigm in cognitive psychology has met a growing challenge from neural network models over the past two decades. While neuropsychological evidence has been of great utility to theories concerned with information processing, the real question is whether the less rigid connectionist models provide valid, or sufficient, information concerning complex cognitive structures. In this work, we discuss the theoretical implications that neuropsychological data pose for modelling cognitive systems.

  7. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation, and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems, and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years, and interest in the area is continually growing; it is expected to develop further in the near future.

  8. NACHOS: a finite element computer program for incompressible flow problems. Part I. Theoretical background

    International Nuclear Information System (INIS)

    Gartling, D.K.

    1978-04-01

The theoretical background for the finite element computer program NACHOS is presented in detail. The NACHOS code is designed for the two-dimensional analysis of viscous incompressible fluid flows, including the effects of heat transfer. The fluid/thermal boundary value problems treated by the program are described in general terms. The finite element method and the associated numerical methods used in the NACHOS code are also presented. Instructions for use of the program are documented in SAND77-1334.

  9. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
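A small worked example of the criterion at the heart of the book, under the standard least-squares simplification AIC = n ln(RSS/n) + 2k (up to an additive constant); the data and candidate models below are made up for illustration.

```python
# Sketch: compare two polynomial models by AIC on synthetic data.
# The complexity penalty 2k guards against overfitting the higher-degree fit.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)  # truly linear data

def aic_least_squares(y, yhat, k, n):
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k   # AIC up to an additive constant

n = x.size
for deg in (1, 5):
    coeffs = np.polyfit(x, y, deg)
    yhat = np.polyval(coeffs, x)
    k = deg + 2                          # polynomial coefficients + noise variance
    print(f"degree {deg}: AIC = {aic_least_squares(y, yhat, k, n):.1f}")
```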

  10. Computational modeling, optimization and manufacturing simulation of advanced engineering materials

    CERN Document Server

    2016-01-01

This volume presents recent research work focused on the development of adequate theoretical and numerical formulations to describe the behavior of advanced engineering materials. Particular emphasis is devoted to applications in the fields of biological tissues, phase-changing and porous materials, polymers, and micro/nano-scale modeling. Sensitivity analysis and gradient- and non-gradient-based optimization procedures are involved in many of the chapters, aiming at the solution of constitutive inverse problems and parameter identification. All these relevant topics are covered by experienced international and inter-institutional research teams, resulting in a high-level compilation. The book is a valuable research reference for scientists, senior undergraduate and graduate students, as well as for engineers working in the area of computational material modeling.

  11. Adaptive Remodeling of Achilles Tendon: A Multi-scale Computational Model.

    Directory of Open Access Journals (Sweden)

    Stuart R Young

    2016-09-01

Full Text Available While it is known that musculotendon units adapt to their load environments, there is only a limited understanding of tendon adaptation in vivo. Here we develop a computational model of tendon remodeling based on the premise that mechanical damage and tenocyte-mediated tendon damage and repair processes modify the distribution of its collagen fiber lengths. We explain how these processes enable the tendon to geometrically adapt to its load conditions. Based on known biological processes, mechanical and strain-dependent proteolytic fiber damage are incorporated into our tendon model. Using a stochastic model of fiber repair, it is assumed that mechanically damaged fibers are repaired longer, whereas proteolytically damaged fibers are repaired shorter, relative to their pre-damage length. To study the adaptation of tendon properties to applied load, we represent the musculotendon unit as a simplified three-component Hill-type model of the human Achilles-soleus unit. Our model results demonstrate that the geometric equilibrium state of the Achilles tendon can coincide with minimization of the total metabolic cost of muscle activation. The proposed tendon model independently predicts rates of collagen fiber turnover that are in general agreement with in vivo experimental measurements. While the computational model presented here represents only a first step in a new approach to understanding the complex process of tendon remodeling in vivo, these findings suggest that the proposed framework may provide a useful theoretical foundation for developing qualitative and quantitative insights into tendon physiology and pathology.
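A loose illustrative sketch of the remodeling rule described in the abstract, with made-up probabilities and increments rather than the authors' calibrated model: over-strained fibers suffer mechanical damage and are repaired slightly longer, while slack fibers suffer proteolytic damage and are repaired slightly shorter, driving the length distribution toward the imposed load.

```python
# Toy stochastic fiber-remodeling loop (assumed rates, not the paper's).
import numpy as np

rng = np.random.default_rng(2)
lengths = rng.normal(50.0, 2.0, size=1000)   # fiber rest lengths (mm)
tendon_length = 51.0                         # imposed end-to-end length (mm)

for step in range(200):
    strain = (tendon_length - lengths) / lengths   # >0: taut, <0: slack
    p_mech = np.clip(20 * strain, 0, 1)      # taut fibers: mechanical damage
    p_prot = np.clip(-5 * strain, 0, 0.2)    # slack fibers: proteolytic damage
    u = rng.uniform(size=lengths.size)
    lengths[u < p_mech] += 0.05              # mechanically damaged: repaired longer
    lengths[u > 1 - p_prot] -= 0.05          # proteolytically damaged: repaired shorter

print("mean fiber length after remodeling:", round(lengths.mean(), 2))
```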

  12. Experimental, computational and theoretical studies of δ′ phase coarsening in Al–Li alloys

    International Nuclear Information System (INIS)

    Pletcher, B.A.; Wang, K.G.; Glicksman, M.E.

    2012-01-01

Experimental characterization of microstructure evolution in three binary Al–Li alloys provides critical tests of both diffusion-screening theory and multiparticle diffusion simulations, which predict late-stage phase-coarsening kinetics. Particle size distributions, growth kinetics, and maximum particle sizes obtained using quantitative centered dark-field transmission electron microscopy are compared quantitatively with theoretical and computational predictions. We also demonstrate the dependence on δ′ precipitate volume fraction of the coarsening rate constant and of the microstructure's maximum particle size, both of which had remained undetermined for this alloy system for nearly half a century. Our experiments show quantitatively that the diffusion-screening description of phase coarsening yields reasonable kinetic predictions, and that useful simulations of microstructure evolution are obtained via multiparticle diffusion. The tested theory and simulation method will provide useful tools for the future design of two-phase alloys for elevated-temperature applications.
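For readers unfamiliar with how such a coarsening rate constant is extracted: assuming an LSW-type cube law, mean radius cubed grows linearly in time, so K follows from a linear fit of r³ against aging time. The numbers below are invented for illustration, not the paper's measurements.

```python
# Sketch: extract a coarsening rate constant K from <r>^3 - <r0>^3 = K*t.
import numpy as np

t = np.array([0.0, 2.0, 4.0, 8.0, 16.0])           # aging time (h), made up
r_mean = np.array([10.0, 12.6, 14.4, 17.0, 20.8])  # mean radius (nm), made up

K, r0_cubed = np.polyfit(t, r_mean ** 3, 1)        # slope = rate constant
print(f"K ~ {K:.0f} nm^3/h, r0 ~ {r0_cubed ** (1 / 3):.1f} nm")
```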

  13. Verification of a dust transport model against theoretical solutions in multidimensional advection diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Z., E-mail: zhanjie.xu@kit.ed [Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany); Travis, J.R. [Ingenieurbuero DuBois-Pitzer-Travis, 63071 Offenbach (Germany); Breitung, W.; Jordan, T. [Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany)

    2010-12-15

Potentially explosive dust aerosol mobilization in the vacuum vessel is an important safety issue of the ITER facility, especially in scenarios of loss-of-vacuum accidents. Dust mobilization modeling is therefore ongoing at Research Center Karlsruhe. First, the aerosol particle model in the GASFLOW computer code is introduced briefly. To verify the particle model, a series of particle diffusion problems are simulated in one, two, and three dimensions. In each problem a particle source is initially exposed to an advective gas flow, and a dust cloud forms downstream. To obtain the theoretical solution for the particle concentration in the dust cloud, the governing diffusion partial differential equations with an additional advection term are solved using the Green's function method. Different spatial and temporal characteristics of the particle sources are also considered, e.g., instantaneous or continuous sources, line or volume sources, and so forth. The GASFLOW simulation results for the particle concentrations and the corresponding Green's function solutions are compared case by case. Very good agreement is found between the theoretical solutions and the GASFLOW simulations when the drag force between the micron-sized particles and the conveying gas flow follows Stokes' law of resistance. This situation corresponds to a very small Reynolds number based on the particle diameter, with a negligible inertia effect of the particles. This verification work shows that the particle model of the GASFLOW code reproduces particle transport and diffusion well.
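The closed-form reference solutions used in this kind of verification are standard. For example, the 1-D Green's function for an instantaneous point source of mass M advected with velocity u and diffusing with diffusivity D is a drifting, spreading Gaussian; the sketch below evaluates this generic textbook solution (it is not the GASFLOW code).

```python
# C(x, t) = M / sqrt(4*pi*D*t) * exp(-(x - u*t)^2 / (4*D*t)):
# a Gaussian dust cloud that drifts with the gas while spreading diffusively.
import numpy as np

def point_source_concentration(x, t, M=1.0, u=0.5, D=0.01):
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - u * t) ** 2 / (4 * D * t))

x = np.linspace(0.0, 2.0, 201)
C = point_source_concentration(x, t=1.0)
print("peak location ~", x[C.argmax()], "(expected near u*t = 0.5)")
```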

  14. Human-Computer Interaction and Sociological Insight: A Theoretical Examination and Experiment in Building Affinity in Small Groups

    Science.gov (United States)

    Oren, Michael Anthony

    2011-01-01

The juxtaposition of classic sociological theory and the relatively young discipline of human-computer interaction (HCI) serves as a powerful mechanism both for exploring the theoretical impacts of technology on human interactions and for applying technological systems to moderate interactions. It is the intent of this dissertation…

  15. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    Science.gov (United States)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. Recent ALMA observations have revealed the early phases of binary and multiple star formation with high spatial resolution; these observations should be compared with theoretical models of correspondingly high spatial resolution. We present two theoretical models, for (1) a high-density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core fragments during the collapse, and dynamical interaction between the fragments produces an arc-like structure, one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto the protobinary. The simulations exhibit asymmetry of the circumbinary disk; such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.

  16. A Computational and Theoretical Study of Conductance in Hydrogen-bonded Molecular Junctions

    Science.gov (United States)

    Wimmer, Michael

This thesis is devoted to the theoretical and computational study of electron transport in molecular junctions where one or more hydrogen bonds are involved in the process. While electron transport through covalent bonds has been extensively studied, recent work has shifted the focus towards hydrogen-bonded systems due to their ubiquitous presence in biological systems and their potential for forming nano-junctions between molecular electronic devices and biological systems. This analysis allows us to significantly expand our comprehension of the experimentally observed result that the inclusion of hydrogen bonding in a molecular junction significantly impacts its transport properties, a fact that has important implications for our understanding of transport through DNA, and nano-biological interfaces in general. In part of this work I have explored the implications of quasiresonant transport in short chains of weakly bonded molecular junctions involving hydrogen bonds. I used theoretical and computational analysis to interpret recent experiments and explain the role of Fano resonances in the transmission properties of the junction. In a different direction, I have undertaken the study of transversal conduction through nucleotide chains that involve a variable number of different hydrogen bonds, e.g., NH···O, OH···O, and NH···N, the three most prevalent hydrogen bonds in biological systems and organic electronics. My effort here has focused on the analysis of electronic descriptors that allow a simplified conceptual and computational understanding of transport properties. Specifically, I have expanded our previous work, in which the molecular polarizability was used as a conductance descriptor, to include the possibility of atomic and bond partitions of the molecular polarizability. This is important because it affords an alternative molecular description of conductance that is not based on the conventional view of molecular orbitals as
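For context, the Fano resonances mentioned above produce the characteristic asymmetric transmission lineshape T(eps) = (eps + q)² / [(1 + q²)(eps² + 1)], with reduced detuning eps = 2(E - E_res)/Γ and asymmetry parameter q. The sketch below evaluates this generic formula with illustrative parameters; it is not the thesis' transport code.

```python
# Normalized Fano lineshape: maximum 1 at eps = 1/q, exact zero at eps = -q.
import numpy as np

def fano_transmission(E, E_res, gamma, q):
    eps = 2.0 * (E - E_res) / gamma
    return (eps + q) ** 2 / ((1.0 + q ** 2) * (eps ** 2 + 1.0))

E = np.linspace(-1.0, 1.0, 401)     # energy relative to resonance (eV), assumed
T = fano_transmission(E, E_res=0.0, gamma=0.1, q=1.5)
print(f"min/max transmission: {T.min():.3f} / {T.max():.3f}")
```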

  17. Quantum computers: Definition and implementations

    International Nuclear Information System (INIS)

    Perez-Delgado, Carlos A.; Kok, Pieter

    2011-01-01

The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. The question therefore arises: what are the general criteria for implementing quantum computers? To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

  18. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory.

    Science.gov (United States)

    Agres, Kat; Abdallah, Samer; Pearce, Marcus

    2018-01-01

    A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different types of statistical information affect listeners' memory for auditory stimuli. We used a combination of behavioral and computational methods to investigate memory for non-linguistic auditory sequences. Participants repeatedly heard tone sequences varying systematically in their information-theoretic properties. Expectedness ratings of tones were collected during three listening sessions, and a recognition memory test was given after each session. Information-theoretic measures of sequential predictability significantly influenced listeners' expectedness ratings, and variations in these properties had a significant impact on memory performance. Predictable sequences yielded increasingly better memory performance with increasing exposure. Computational simulations using a probabilistic model of auditory expectation suggest that listeners dynamically formed a new, and increasingly accurate, implicit cognitive model of the information-theoretic structure of the sequences throughout the experimental session. Copyright © 2017 Cognitive Science Society, Inc.
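A minimal sketch of the information-theoretic quantity at stake, assuming a toy bigram model rather than the variable-order probabilistic model used in the paper: the information content IC(x_t) = -log2 P(x_t | x_{t-1}) measures how unexpected each tone is given its immediate context, and sequences with low average IC are the "predictable" ones.

```python
# Toy bigram expectation model over a tone sequence (illustrative only).
from collections import Counter
import math

sequence = "CDECDECDEFG"                  # hypothetical tone sequence
pair_counts = Counter(zip(sequence, sequence[1:]))
context_counts = Counter(sequence[:-1])

def information_content(prev, cur, alpha=1.0, vocab=7):
    # add-alpha smoothing keeps unseen continuations at finite IC;
    # vocab = assumed alphabet size
    p = (pair_counts[(prev, cur)] + alpha) / (context_counts[prev] + alpha * vocab)
    return -math.log2(p)

for prev, cur in zip(sequence, sequence[1:]):
    print(f"{prev}->{cur}: IC = {information_content(prev, cur):.2f} bits")
```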

  19. Theoretical model simulations for the global Thermospheric Mapping Study (TMS) periods

    Science.gov (United States)

    Rees, D.; Fuller-Rowell, T. J.

Theoretical and semiempirical models of the solar UV/EUV and of the geomagnetic driving forces affecting the terrestrial mesosphere and thermosphere have been used to generate a series of representative numerical time-dependent and global models of the thermosphere, for the range of solar and geomagnetic activity levels which occurred during the three Thermospheric Mapping Study periods. The simulations obtained from these numerical models are compared with observations, and with the results of semiempirical models of the thermosphere. The theoretical models provide a record of the magnitude of the major driving forces which affected the thermosphere during the study periods, and a baseline against which the actual observed structure and dynamics can be compared.

  20. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation and a language based on the model for bit-level operation are useful for developing asynchronous and concurrent programs compositionally, which frequently use bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and develop compositionally. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for controlling and operating can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have three terminals and four ordered rules at most, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can briefly express some kinds of computation such as synchronization and bidirectional communication. The model's properties make it most applicable to bit-level computation compositionally, since the uniform computation elements are enough to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  1. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr

    2015-01-01

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics

  2. Robust recognition via information theoretic learning

    CERN Document Server

    He, Ran; Yuan, Xiaotong; Wang, Liang

    2014-01-01

This Springer Brief presents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade for a large variety of computer vision applications; this work brings them together and attempts to impart the theory, optimization, and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip...
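A sketch of the correntropy measure the brief builds on, V_sigma(X, Y) = E[k_sigma(X - Y)] with a Gaussian kernel, estimated from samples. Unlike mean squared error, gross outliers saturate the kernel and barely move the measure, which is the source of its robustness.

```python
# Sample estimate of correntropy (Gaussian kernel; normalization omitted).
import numpy as np

def correntropy(x, y, sigma=1.0):
    e = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-e ** 2 / (2 * sigma ** 2)))

clean = np.zeros(100)
noisy = np.zeros(100)
noisy[:5] = 50.0                                       # five gross outliers
print("MSE        :", np.mean((clean - noisy) ** 2))   # dominated by outliers
print("correntropy:", correntropy(clean, noisy))       # barely affected
```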

  3. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

This book promotes high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India, during December 17-19, 2015. It collects innovations in broad areas of research such as computational modeling, computational intelligence, and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing design, analysis, and modeling of the aforementioned key areas.

  4. Organ burdens and excretion rates of inhaled uranium - computations using ICRP model

    International Nuclear Information System (INIS)

    Abani, M.C.; Murthy, K.B.S.; Sunta, C.M.

    1988-01-01

Uranium being a highly toxic material, proper estimation of the body burden is very important. During the manufacture of uranium fuel, it is likely to enter the body by inhalation. From body burden and excretion measurements, one should be able to assess whether the intake is within safe limits. This is possible if one performs theoretical calculations and estimates the amount of uranium which builds up in the body as a function of time; similar theoretical estimates of excretion are also needed. For this purpose, a computer programme has been developed to compute organ burdens and excretion rates resulting from exposure to a radioactive nuclide. The ICRP-30 lung model has been used, and the cases of a single instantaneous inhalation of 1 ALI as well as inhalation at a steady rate of ALI/365 per day have been considered. Using this programme, results for uranium aerosols of classes D, W and Y and sizes 0.2, 1 and 5 microns are generated by ND computers in tabular as well as graphical forms. These will be useful in conjunction with body burden measurements by direct counting or excretion analysis. (author). 7 tabs., 56 figs
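The full ICRP-30 lung model chains many compartments; the hedged one-compartment sketch below shows only the governing first-order kinetics behind such organ-burden tables, with an assumed effective half-time. After a single intake the burden decays as A0·exp(-λt), while a chronic intake rate I builds up toward the plateau I/λ.

```python
# One-compartment retention sketch: dA/dt = I(t) - lambda * A.
# The 120-day effective half-time is an assumption for illustration only.
import numpy as np

lam = np.log(2) / 120.0                  # effective clearance rate (1/d)
t = np.array([1.0, 30.0, 120.0, 365.0])  # days after start of exposure

A_single = 1.0 * np.exp(-lam * t)                          # after a 1 ALI intake
A_chronic = (1.0 / 365.0) / lam * (1 - np.exp(-lam * t))   # ALI/365 per day
print("single intake :", np.round(A_single, 3))
print("chronic intake:", np.round(A_chronic, 3))
```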

  5. Patients' Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test.

    Science.gov (United States)

    Dou, Kaili; Yu, Ping; Deng, Ning; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong

    2017-12-06

Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients' acceptance of smartphone health technology for chronic disease management. Multiple theories and factors that may influence patients' acceptance of smartphone health technology were reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients' acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients' actual use of a smartphone health app. The partial least squares method was used to test the theoretical model. The model accounted for .412 of the variance in patients' intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients' smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients' intentions to use the technology. Age and gender had no significant influence on patients' acceptance of smartphone technology. The study also confirmed the positive relationship between intention to use

  6. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  7. Theoretical chemistry in Belgium a topical collection from theoretical chemistry accounts

    CERN Document Server

    Champagne, Benoît; De Proft, Frank; Leyssens, Tom

    2014-01-01

    Readers of this volume can take a tour around the research locations in Belgium which are active in theoretical and computational chemistry. Selected researchers from Belgium present research highlights of their work. Originally published in the journal Theoretical Chemistry Accounts, these outstanding contributions are now available in a hardcover print format. This volume will be of benefit in particular to those research groups and libraries that have chosen to have only electronic access to the journal. It also provides valuable content for all researchers in theoretical chemistry.

  8. Desublimation process: verification and applications of a theoretical model

    International Nuclear Information System (INIS)

    Eby, R.S.

    1979-01-01

    A theoretical model simulating the simultaneous heat and mass transfer which takes place during the desublimation of a gas to a solid is presented. Desublimer column loading profiles to experimentally verify the model were obtained using a gamma scintillation technique. The data indicate that, if the physical parameters of the desublimed frost material are known, the model can accurately predict the desublimation phenomenon. The usefulness of the model in different engineering applications is also addressed

  9. Finite element simulation of nanoindentation tests using a macroscopic computational model

    International Nuclear Information System (INIS)

    Khelifa, Mourad; Fierro, Vanessa; Celzard, Alain

    2014-01-01

The aim of this work was to develop a numerical procedure to simulate nanoindentation tests using a macroscopic computational model. Both the theoretical and numerical aspects of the proposed methodology, based on the coupling of isotropic elasticity with anisotropic plasticity described by the quadratic criterion of Hill, are presented. The anisotropic plastic behaviour accounts for mixed nonlinear hardening (isotropic and kinematic) under large plastic deformation. Nanoindentation tests were simulated to analyse the nonlinear mechanical behaviour of an aluminium alloy. The predicted results of the finite element (FE) modelling are in good agreement with the experimental data, confirming the accuracy of the suggested FE method of analysis. The effects of some technological and mechanical parameters known to have an influence during nanoindentation tests were also investigated.
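For reference, Hill's quadratic criterion named above has the standard textbook form F(σyy - σzz)² + G(σzz - σxx)² + H(σxx - σyy)² + 2Lσyz² + 2Mσzx² + 2Nσxy² = 1 at yield. The sketch below uses generic constants, not the paper's fitted values.

```python
# Hill's quadratic anisotropic yield function; yield occurs when f = 1.
import numpy as np

def hill_criterion(s, F, G, H, L, M, N):
    """s = (s_xx, s_yy, s_zz, s_yz, s_zx, s_xy): Cauchy stress components."""
    sxx, syy, szz, syz, szx, sxy = s
    return (F * (syy - szz) ** 2 + G * (szz - sxx) ** 2 + H * (sxx - syy) ** 2
            + 2 * L * syz ** 2 + 2 * M * szx ** 2 + 2 * N * sxy ** 2)

# isotropic special case F = G = H = 0.5, L = M = N = 1.5 (stresses normalized
# by the uniaxial yield stress) reduces to the von Mises criterion
s = np.array([100.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # uniaxial stress (MPa)
print(hill_criterion(s / 100.0, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5))   # -> 1.0 (at yield)
```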

  10. A theoretical model of job retention for home health care nurses.

    Science.gov (United States)

    Ellenbecker, Carol Hall

    2004-08-01

Predicted severe nursing shortages and an increasing demand for home health care services have made the retention of experienced, qualified nursing staff a priority for health care organizations. The purpose of this paper is to describe a theoretical model of job retention for home health care nurses. The theoretical model is an integration of the findings of empirical research related to intent to stay and retention, components of Neal's theory of home health care nursing practice, and findings from earlier work to develop an instrument to measure home health care nurses' job satisfaction. The theoretical model identifies antecedents to job satisfaction of home health care nurses. The antecedents are intrinsic and extrinsic job characteristics. The model also proposes that job satisfaction is directly related to retention and indirectly related to retention through intent to stay. Individual nurse characteristics are indirectly related to retention through intent to stay. The individual characteristic of tenure is indirectly related to retention through autonomy, as an intrinsic characteristic of job satisfaction, and intent to stay. The proposed model can be used to guide research that explores gaps in knowledge about intent to stay and retention among home health care nurses.

  11. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

Full Text Available Based on the facts that a computer can be infected by both infected and exposed computers, and that some computers in the susceptible and exposed states can gain immunity through antivirus software, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of the computer virus on the internet, is determined. Second, the model has a virus-free equilibrium P0, meaning that the infected part of the network disappears and the virus dies out; P0 is globally asymptotically stable if R0 < 1. If R0 > 1, the model has a unique viral equilibrium P*, meaning that the virus persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
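A hedged sketch of this class of model: generic SEIR-type rate equations in which susceptible computers become exposed on contact with exposed or infected ones, and antivirus software moves machines into an immune class. The rates below are illustrative, not the paper's.

```python
# Generic SEIR-style computer-virus dynamics (fractions sum to 1).
import numpy as np
from scipy.integrate import solve_ivp

beta, eps, gamma, mu = 0.5, 0.2, 0.1, 0.05  # contact, activation, cure, immunization

def rhs(t, y):
    S, E, I, R = y
    new_exposed = beta * S * (E + I)        # infection by exposed and infected nodes
    return [-new_exposed - mu * S,          # susceptible: infected or immunized
            new_exposed - eps * E - mu * E, # exposed: activate or gain immunity
            eps * E - gamma * I,            # infected: cured by antivirus
            mu * (S + E) + gamma * I]       # recovered (immune)

sol = solve_ivp(rhs, [0, 100], [0.99, 0.0, 0.01, 0.0])
print("final infected fraction:", round(sol.y[2, -1], 4))
```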

  12. Are computational models of any use to psychiatry?

    Science.gov (United States)

    Huys, Quentin J M; Moutoussis, Michael; Williams, Jonathan

    2011-08-01

    Mathematically rigorous descriptions of key hypotheses and theories are becoming more common in neuroscience and are beginning to be applied to psychiatry. In this article two fictional characters, Dr. Strong and Mr. Micawber, debate the use of such computational models (CMs) in psychiatry. We present four fundamental challenges to the use of CMs in psychiatry: (a) the applicability of mathematical approaches to core concepts in psychiatry such as subjective experiences, conflict and suffering; (b) whether psychiatry is mature enough to allow informative modelling; (c) whether theoretical techniques are powerful enough to approach psychiatric problems; and (d) the issue of communicating clinical concepts to theoreticians and vice versa. We argue that CMs have yet to influence psychiatric practice, but that they help psychiatric research in two fundamental ways: (a) to build better theories integrating psychiatry with neuroscience; and (b) to enforce explicit, global and efficient testing of hypotheses through more powerful analytical methods. CMs allow the complexity of a hypothesis to be rigorously weighed against the complexity of the data. The paper concludes with a discussion of the path ahead. It points to stumbling blocks, like the poor communication between theoretical and medical communities. But it also identifies areas in which the contributions of CMs will likely be pivotal, like an understanding of social influences in psychiatry, and of the co-morbidity structure of psychiatric diseases. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Simple theoretical models for composite rotor blades

    Science.gov (United States)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model, representative of the size of a main rotor blade, is analyzed in order to assess the importance of various influences. The findings of this model study suggest that for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical-type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  14. Theoretical and Computational Investigation of Periodically Focused Intense Charged-Particle Beams

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chiping [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). Plasma Science and Fusion Center

    2013-06-26

The purpose of this report is to summarize the results of theoretical and computational investigations of periodically focused intense charged-particle beams in parameter regimes relevant to the development of advanced high-brightness, high-power accelerators for high-energy physics research. The breakthroughs and highlights of our research in the period from April 1, 2010 to March 30, 2013 were: a) theory and simulation of adiabatic thermal Child-Langmuir flow; b) particle-in-cell simulations of adiabatic thermal beams in a periodic solenoidal focusing field; c) dynamics of charged particles in an adiabatic thermal beam equilibrium in a periodic solenoidal focusing field; and d) training of undergraduate researchers and graduate students in accelerator and beam physics. A brief introduction and summary is presented. Detailed descriptions of research results are provided in an appendix of publications at the end of the report.

  15. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

The demands of production, such as thin films in microelectronics, require consideration of the factors influencing the interaction of dissimilar materials that make contact at their surfaces. Bond formation between the surface layers of dissimilar condensed solids, termed adhesion, depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of the adhesion interaction of different materials from both applied and fundamental perspectives on surface phenomena. Given the difficulty of obtaining reliable experimental values for the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes all the more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  16. A theoretical model and experiments on the nonlinear dynamics of parallel plates subjected to laminar/turbulent squeeze-film forces

    International Nuclear Information System (INIS)

    Piteau, Philippe; Antunes, Jose

    2012-01-01

Squeeze-film dynamical effects are relevant in many industrial contexts: bearings and seals are the most conspicuous applications, but the effects also arise elsewhere, for instance when dealing with the seismic excitation of spent fuel racks. The significant nonlinearity of the squeeze-film forces which arise prevents the use of linearized flow models, and a fully nonlinear formulation must be used for adequate computational predictions. Because it can easily accommodate both laminar and turbulent flow effects, a simplified bulk-flow model based on gap-averaged Navier-Stokes equations, incorporating all relevant inertial and dissipative terms, was previously developed by the authors under the assumption of a constant skin-friction coefficient. In this paper we develop an improved theoretical formulation, in which the dependence of the friction coefficient on the local flow velocity is explicitly accounted for, so that it can be applied to laminar, turbulent, and mixed flows. Numerical solutions for both the basic and the improved nonlinear one-dimensional time-domain formulations are presented in the paper. Furthermore, we present and discuss the results of an extensive series of experiments performed at CEA/Saclay on a test rig consisting of a long, gravity-driven, instrumented plate of rectangular shape colliding with a planar surface. Theoretical results stemming from both flow models are confronted with the experimental measurements in order to assess the strengths and drawbacks of the simpler original model, as well as the improvements brought by the new but more involved flow formulation. (authors)

  17. Theoretical modeling of critical temperature increase in metamaterial superconductors

    Science.gov (United States)

    Smolyaninov, Igor; Smolyaninova, Vera

Recent experiments have demonstrated that the metamaterial approach is capable of drastically increasing the critical temperature Tc of epsilon-near-zero (ENZ) metamaterial superconductors. For example, a tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modelling of the Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modelling and experimental results for both aluminum- and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperatures of hypothetical niobium-, MgB2-, and H2S-based metamaterial superconductors are evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid-nitrogen temperature range. In the case of an H2S-based metamaterial, Tc appears to reach 250 K. This work was supported in part by NSF Grant DMR-1104676 and the School of Emerging Technologies at Towson University.
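The Maxwell-Garnett mixing rule underlying this modelling is standard: inclusions of permittivity eps_i at volume fraction f in a host eps_h satisfy (eps_eff - eps_h)/(eps_eff + 2 eps_h) = f (eps_i - eps_h)/(eps_i + 2 eps_h). The sketch below solves this for eps_eff with illustrative values, not the paper's fitted parameters.

```python
# Maxwell-Garnett effective permittivity of a two-component mixture.
import numpy as np

def maxwell_garnett(eps_i, eps_h, f):
    beta = (eps_i - eps_h) / (eps_i + 2 * eps_h)
    return eps_h * (1 + 2 * f * beta) / (1 - f * beta)

# a metal (negative real permittivity) diluted by a dielectric host can be
# tuned toward the epsilon-near-zero (ENZ) regime exploited in the abstract
print(maxwell_garnett(eps_i=-10 + 1j, eps_h=3.0, f=0.3))
```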

  18. COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT

    OpenAIRE

    Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar

    2011-01-01

    Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...

  19. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

Thermodiffusion in Multicomponent Mixtures presents the computational approaches employed in the study of thermodiffusion in various types of mixtures, namely hydrocarbons, polymers, water-alcohol, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or the principles of artificial neural networks. The book will serve as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these is used to give a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  20. Natural Carbonized Sugar as a Low-Temperature Ammonia Sensor Material: Experimental, Theoretical, and Computational Studies.

    Science.gov (United States)

    Ghule, Balaji G; Shaikh, Shoyebmohamad; Ekar, Satish U; Nakate, Umesh T; Gunturu, Krishna Chaitanya; Shinde, Nanasaheb M; Naushad, Mu; Kim, Kwang Ho; O'Dwyer, Colm; Mane, Rajaram S

    2017-12-13

Carbonized sugar (CS) has been synthesized via microwave-assisted carbonization of market-quality tabletop sugar, bearing in mind the advantages of this synthesis method, such as being useful, cost-effective, and eco-friendly. The as-prepared CS has been characterized for its morphology, phase purity, type of porosity, pore-size distribution, and so on. The gas-sensing properties of CS for various oxidizing and reducing gases are demonstrated at ambient temperature, where we observe good selectivity toward liquid ammonia among other gases. The highest ammonia response (50%) of a CS-based sensor was noted at 80 °C for a 100 ppm concentration. The response and recovery times of the CS sensor are 180 and 216 s, respectively. The observed ammonia-sensing behaviour is explained through a plausible theoretical mechanism, which is further supported by computational modeling performed using density functional theory. The effect of relative humidity on the CS sensor has also been studied at ambient temperature, showing that the minimum and maximum (20-100%) relative humidity values produced 16% and 62% responses, respectively.

  1. An improved UO2 thermal conductivity model in the ELESTRES computer code

    International Nuclear Information System (INIS)

    Chassie, G.G.; Tochaie, M.; Xu, Z.

    2010-01-01

This paper describes the improved UO2 thermal conductivity model for use in the ELESTRES (ELEment Simulation and sTRESses) computer code. The ELESTRES computer code models the thermal, mechanical and microstructural behaviour of a CANDU® fuel element under normal operating conditions. The main purpose of the code is to calculate fuel temperatures, fission gas release, internal gas pressure, fuel pellet deformation, and fuel sheath strains for fuel element design and assessment. It is also used to provide initial conditions for evaluating fuel behaviour during high-temperature transients. The thermal conductivity of UO2 fuel is one of the key parameters that affect ELESTRES calculations. The existing ELESTRES thermal conductivity model has been assessed and improved based on a large amount of thermal conductivity data from measurements of irradiated and un-irradiated UO2 fuel with different densities. The UO2 thermal conductivity data cover 90% to 99% of the theoretical density of UO2, temperatures up to 3027 K, and burnups up to 1224 MW·h/kg U. The improved thermal conductivity model, which is recommended for full implementation in the ELESTRES computer code, has reduced the code's prediction biases for temperature, fission gas release, and fuel sheath strains when compared with the available experimental data. The improved model has also been checked with a test version of ELESTRES over the full ranges of fuel temperature, fuel burnup, and fuel density expected in CANDU fuel. (author)
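As a rough guide to the shape of such correlations, unirradiated UO2 conductivity is commonly written as a phonon lattice term 1/(A + B·T) that falls with temperature plus a small high-temperature electronic (small-polaron) term. The coefficients below are representative literature-style values inserted for illustration; they are not the ELESTRES model, which additionally corrects for burnup and density.

```python
# Generic shape of a UO2 thermal-conductivity correlation (placeholder
# coefficients, roughly the right magnitude; not the ELESTRES model).
import numpy as np

def k_uo2(T, A=0.0375, B=2.165e-4, C=4.715e9, D=16361.0):
    lattice = 1.0 / (A + B * T)             # phonon conduction (W/m-K)
    electronic = C / T ** 2 * np.exp(-D / T)  # small-polaron contribution
    return lattice + electronic

for T in (500.0, 1000.0, 2000.0):
    print(f"T = {T:6.0f} K: k ~ {k_uo2(T):.2f} W/m-K")
```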

  2. Towards a Game Theoretic View of Secure Computation

    DEFF Research Database (Denmark)

    Asharov, Gilad; Canetti, Ran; Hazay, Carmit

    2011-01-01

We demonstrate how Game Theoretic concepts and formalism can be used to capture cryptographic notions of security. In the restricted but indicative case of two-party protocols in the face of malicious fail-stop faults, we first show how the traditional notions of secrecy and correctness of protocols can be captured as properties of Nash equilibria in games for rational players. Next, we concentrate on fairness. Here we demonstrate a Game Theoretic notion and two different cryptographic notions that turn out to all be equivalent. In addition, we provide a simulation based notion that implies...

  3. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
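
    The abstract names the ingredients (retrieval and ranking methods from Information Retrieval applied to model annotations and meta-information) without giving the implementation. A minimal TF-IDF ranking over toy model metadata illustrates the core idea; the identifiers and annotation strings below are invented for illustration.

        import math
        from collections import Counter

        # Toy corpus: model identifier -> annotation/meta-information terms.
        # Identifiers and annotations are invented for illustration only.
        MODELS = {
            "MODEL_A": "glycolysis yeast kinetics michaelis menten",
            "MODEL_B": "calcium oscillation signalling hepatocyte",
            "MODEL_C": "glycolysis trypanosoma kinetics flux",
        }

        def rank(query, corpus):
            """Rank models by the summed TF-IDF weight of the query terms."""
            docs = {mid: Counter(text.split()) for mid, text in corpus.items()}
            n = len(docs)
            scores = {}
            for mid, tf in docs.items():
                score = 0.0
                for term in query.split():
                    df = sum(1 for d in docs.values() if term in d)
                    if df:
                        score += tf[term] * math.log(n / df)  # TF * IDF
                scores[mid] = score
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        print(rank("glycolysis kinetics", MODELS))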

  4. THEORETICAL COMPUTATION OF A STRESS FIELD IN A CYLINDRICAL GLASS SPECIMEN

    Directory of Open Access Journals (Sweden)

    NORBERT KREČMER

    2011-03-01

    This work deals with the computation of the stress field generated in an infinitely high glass cylinder during cooling. The theory of structural relaxation is used to compute the heat capacity, the thermal expansion coefficient, and the viscosity. The relaxation of the stress components is solved in the framework of the Maxwell viscoelasticity model. The obtained results were verified by a sensitivity analysis and compared with some experimental data.
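
    The load-bearing ingredient of such a computation is the Maxwell element, in which each stress component relaxes with a temperature-dependent relaxation time τ = η(T)/G. A minimal sketch follows (explicit Euler, one stress component, illustrative material constants rather than the paper's glass data).

        def maxwell_stress(times, strain_rate, G, eta_of_T, T_of_t):
            """Integrate dsigma/dt = G*deps/dt - sigma/tau(T), with tau = eta(T)/G."""
            sigma, history = 0.0, []
            for i, t in enumerate(times):
                dt = times[i] - times[i - 1] if i else 0.0
                tau = eta_of_T(T_of_t(t)) / G
                sigma += dt * (G * strain_rate - sigma / tau)
                history.append(sigma)
            return history

        G = 28e9                                                # shear modulus, Pa
        eta = lambda T: 10.0 ** (-2.6 + 4000.0 / (T - 500.0))   # VFT-type viscosity, Pa*s
        cooling = lambda t: 800.0 - 0.5 * t                     # linear cooling, K
        ts = [i * 0.1 for i in range(1000)]                     # 100 s of cooling
        print(f"frozen-in stress ~ {maxwell_stress(ts, 1e-6, G, eta, cooling)[-1]:.3e} Pa")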

  5. Theoretical Model of Development of Information Competence among Students Enrolled in Elective Courses

    Science.gov (United States)

    Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis

    2016-01-01

    The current study focuses on the research topic of creating a theoretical model of development of information competence among students enrolled in elective courses. In order to examine specific features of the theoretical model of development of information competence among students enrolled in elective courses, we performed an analysis of…

  6. Deterministic and Stochastic Study for an Infected Computer Network Model Powered by a System of Antivirus Programs

    Directory of Open Access Journals (Sweden)

    Youness El Ansari

    2017-01-01

    We investigate the various conditions that control the extinction and stability of a nonlinear mathematical spread model with stochastic perturbations. This model describes the spread of viruses into an infected computer network which is powered by a system of antivirus software. The system is analyzed using the stability theory of stochastic differential equations and computer simulations. First, we study the global stability of the virus-free equilibrium state and the virus-epidemic equilibrium state. Furthermore, we use the Itô formula and other results from the theory of stochastic differential equations to discuss the extinction and the stationary distribution of our system. The analysis gives a sufficient condition for the infection to become extinct (i.e., the number of viruses tends exponentially to zero). The ergodicity of the solution and the stationary distribution can be obtained if the basic reproduction number Rp is greater than 1 and the intensities of the stochastic fluctuations are small enough. Numerical simulations are carried out to illustrate the theoretical results.
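
    The extinction/persistence dichotomy in the abstract can be reproduced qualitatively with a simple Euler-Maruyama scheme. The SIS-type drift and multiplicative noise below are illustrative stand-ins for the paper's system, not its actual equations.

        import math, random

        def simulate(beta=0.6, gamma=0.4, sigma=0.1, I0=0.01,
                     T=50.0, dt=0.01, seed=1):
            """Euler-Maruyama for dI = (beta*I*(1-I) - gamma*I) dt + sigma*I dW,
            where I is the infected fraction and gamma is the cure rate supplied
            by the antivirus system (illustrative stand-in for the paper's model)."""
            rng = random.Random(seed)
            I = I0
            for _ in range(int(T / dt)):
                dW = rng.gauss(0.0, math.sqrt(dt))
                I += (beta * I * (1.0 - I) - gamma * I) * dt + sigma * I * dW
                I = min(max(I, 0.0), 1.0)      # keep the fraction in [0, 1]
            return I

        # beta/gamma > 1: the infection persists; lowering beta/gamma below 1
        # (or raising sigma) drives it to extinction, echoing the abstract.
        print(f"infected fraction at t = 50: {simulate():.4f}")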

  7. Computer-based modelling and optimization in transportation

    CERN Document Server

    Rossi, Riccardo

    2014-01-01

    This volume brings together works resulting from research carried out by members of the EURO Working Group on Transportation (EWGT) and presented during meetings and workshops organized by the Group under the patronage of the Association of European Operational Research Societies in 2012 and 2013. The main aims of the EWGT include providing a forum to share research information and experience, encouraging joint research and the development of both theoretical methods and applications, and promoting cooperation among the many institutions and organizations which are leaders at the national level in the field of transportation and logistics. The primary fields of interest concern operational research methods, mathematical models and computational algorithms to solve, and sustain solutions to, problems faced mainly by public administrations, city authorities, public transport companies, service providers and logistic operators. Related areas of interest are: land use and transportation planning, traffic control and ...

  8. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
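
    The paper's non-reversible, spiking-compatible chains are not reproduced here, but the reversible baseline they generalize is ordinary Gibbs sampling in a binary network, where each unit turns on with probability given by the sigmoid of its local field. A minimal sketch with illustrative couplings:

        import math, random

        def gibbs_sample(W, b, steps=20000, seed=0):
            """Gibbs sampling of p(z) ~ exp(0.5*z'Wz + b'z) over binary units.

            Each update flips unit i on with probability sigmoid(local field);
            this is the reversible MCMC baseline that the paper replaces with
            non-reversible chains compatible with spiking dynamics."""
            rng = random.Random(seed)
            n = len(b)
            z, counts = [0] * n, [0] * n
            for _ in range(steps):
                i = rng.randrange(n)
                field = b[i] + sum(W[i][j] * z[j] for j in range(n) if j != i)
                z[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-field)) else 0
                for k in range(n):
                    counts[k] += z[k]
            return [c / steps for c in counts]     # marginal "firing" probabilities

        W = [[0.0, 1.5, -1.0],
             [1.5, 0.0, 0.5],
             [-1.0, 0.5, 0.0]]                     # symmetric couplings
        b = [-0.5, 0.0, 0.5]                       # biases
        print(gibbs_sample(W, b))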

  9. A Framework for Understanding Physics Students' Computational Modeling Practices

    Science.gov (United States)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by

  10. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.

    Science.gov (United States)

    Gigliotti, Eileen; Manister, Nancy N

    2012-10-01

    Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.

  11. Stability and Hopf Bifurcation in a Delayed SEIRS Worm Model in Computer Network

    Directory of Open Access Journals (Sweden)

    Zizhen Zhang

    2013-01-01

    A delayed SEIRS epidemic model with vertical transmission in a computer network is considered. Sufficient conditions for local stability of the positive equilibrium and existence of a local Hopf bifurcation are obtained by analyzing the distribution of the roots of the associated characteristic equation. Furthermore, the direction of the local Hopf bifurcation and the stability of the bifurcating periodic solutions are determined by using the normal form theory and the center manifold theorem. Finally, a numerical example is presented to verify the theoretical analysis.
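
    A delayed model of this kind can be explored numerically with a fixed-step integrator that keeps a history buffer for the delayed state. The SEIRS right-hand side and parameters below are illustrative stand-ins, not the system analyzed in the paper.

        def seirs_delay(beta=0.8, alpha=0.3, gamma=0.2, delta=0.05,
                        tau=5.0, dt=0.01, T=200.0):
            """Fixed-step integration of a delayed SEIRS worm model:
               S' = -beta*S*I + delta*R(t - tau)    E' = beta*S*I - alpha*E
               I' =  alpha*E - gamma*I              R' = gamma*I - delta*R(t - tau)
            The delay tau models temporary immunity (illustrative form)."""
            lag = int(tau / dt)
            S, E, I, R = 0.97, 0.0, 0.03, 0.0
            R_hist = [0.0] * (lag + 1)             # constant pre-history, R = 0
            for _ in range(int(T / dt)):
                R_tau = R_hist.pop(0)
                dS = -beta * S * I + delta * R_tau
                dE = beta * S * I - alpha * E
                dI = alpha * E - gamma * I
                dR = gamma * I - delta * R_tau
                S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
                R_hist.append(R)
            return S, E, I, R

        print("S, E, I, R at t = 200:", seirs_delay())
        # Sweeping tau past the critical value found by such an analysis turns
        # the steady state into sustained oscillations (the Hopf bifurcation).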

  12. Security in hybrid cloud computing

    OpenAIRE

    Koudelka, Ondřej

    2016-01-01

    This bachelor thesis deals with the area of hybrid cloud computing, specifically with its security. The major aim of the thesis is to analyze and compare the chosen hybrid cloud providers. As a minor aim, the thesis compares the security challenges of hybrid cloud with those of other deployment models. In order to accomplish these aims, the thesis defines the terms cloud computing and hybrid cloud computing in its theoretical part. Furthermore the security challenges for cloud computing a...

  13. Machine learning a theoretical approach

    CERN Document Server

    Natarajan, Balas K

    2014-01-01

    This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation

  14. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  15. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  16. Applications of Computer Algebra Conference

    CERN Document Server

    Martínez-Moro, Edgar

    2017-01-01

    The Applications of Computer Algebra (ACA) conference covers a wide range of topics from Coding Theory to Differential Algebra to Quantum Computing, focusing on the interactions of these and other areas with the discipline of Computer Algebra. This volume provides the latest developments in the field as well as its applications in various domains, including communications, modelling, and theoretical physics. The book will appeal to researchers and professors of computer algebra, applied mathematics, and computer science, as well as to engineers and computer scientists engaged in research and development.

  17. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    International Nuclear Information System (INIS)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-01-01

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculation of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimates of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability
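
    The quantitative bridge between a mean-field work of adhesion and a macroscopic contact angle is the Young-Dupré relation, W_SL = γ_LV(1 + cos θ). A minimal sketch with illustrative numbers (the works of adhesion below are not the paper's values):

        import math

        def contact_angle_deg(W_sl, gamma_lv):
            """Young-Dupre: W_SL = gamma_LV*(1 + cos(theta)), solved for theta."""
            c = W_sl / gamma_lv - 1.0
            if not -1.0 <= c <= 1.0:
                raise ValueError("complete wetting/drying: no finite angle")
            return math.degrees(math.acos(c))

        gamma_water = 0.072                 # N/m near 300 K
        for W in (0.05, 0.09, 0.13):        # illustrative works of adhesion, J/m^2
            print(f"W = {W:.2f} J/m^2 -> theta = {contact_angle_deg(W, gamma_water):5.1f} deg")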

  18. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their demands on processing and storage requirements. We present the hybrid computing model of IceCube, which leverages Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at UW-Madison serves as Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  19. Inform: Efficient Information-Theoretic Analysis of Collective Behaviors

    Directory of Open Access Journals (Sweden)

    Douglas G. Moore

    2018-06-01

    The study of collective behavior has traditionally relied on a variety of different methodological tools, ranging from more theoretical methods such as population or game-theoretic models to empirical ones like Monte Carlo or multi-agent simulations. An approach that is increasingly being explored is the use of information theory as a methodological framework to study the flow of information and the statistical properties of collectives of interacting agents. While a few general-purpose toolkits exist, most of the existing software for information-theoretic analysis of collective systems is limited in scope. We introduce Inform, an open-source framework for efficient information-theoretic analysis that exploits the computational power of a C library while simplifying its use through a variety of wrappers for common higher-level scripting languages. We focus on two such wrappers here: PyInform (Python) and rinform (R). Inform and its wrappers are cross-platform and general-purpose. They include classical information-theoretic measures, measures of information dynamics, and information-based methods to study the statistical behavior of collective systems, and they expose a lower-level API that allows users to construct measures of their own. We describe the architecture of the Inform framework, study its computational efficiency, and use it to analyze three different case studies of collective behavior: biochemical information storage in regenerating planaria, nest-site selection in the ant Temnothorax rugatulus, and collective decision making in multi-agent simulations.
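
    As a flavour of the information-dynamics measures such a framework provides, here is a plain-Python transfer entropy estimator for binary time series. This is a from-scratch sketch of the measure itself, not the PyInform API.

        import math
        from collections import Counter

        def transfer_entropy(src, dst, k=1):
            """TE(src -> dst) in bits with destination history length k: the
            information the source's past adds about dst[t] beyond dst's own
            k-step history."""
            joint, cond, hist_src, hist = Counter(), Counter(), Counter(), Counter()
            for t in range(k, len(dst)):
                h, s = tuple(dst[t - k:t]), src[t - 1]
                joint[(h, s, dst[t])] += 1
                cond[(h, dst[t])] += 1
                hist_src[(h, s)] += 1
                hist[h] += 1
            n = sum(joint.values())
            return sum((c / n) * math.log2((c * hist[h]) / (cond[(h, x)] * hist_src[(h, s)]))
                       for (h, s, x), c in joint.items())

        x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
        y = x[-1:] + x[:-1]                 # y copies x with one step of lag
        print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")
        print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")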

  20. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for introducing modeling methodology in computer science lessons. The necessity of studying computer modeling arises because current trends toward strengthening the general-education and worldview functions of computer science call for additional research on the…

  1. Healing from Childhood Sexual Abuse: A Theoretical Model

    Science.gov (United States)

    Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner

    2011-01-01

    Childhood sexual abuse is a prevalent social and health care problem. The processes by which individuals heal from childhood sexual abuse are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from childhood sexual abuse. Community recruitment for an ongoing broader project on sexual…

  2. Theoretical Hill-type muscle and stability: numerical model and application.

    Science.gov (United States)

    Schmitt, S; Günther, M; Rupp, T; Bayer, A; Häufle, D

    2013-01-01

    The construction of artificial muscles is one of the most challenging developments in today's biomedical science. The application of artificial muscles is focused both on the construction of orthotics and prosthetics for rehabilitation and prevention purposes and on building humanoid walking machines for robotics research. Research in biomechanics tries to explain the functioning and design of real biological muscles and therefore lays the foundation for the development of functional artificial muscles. Recently, the hyperbolic Hill-type force-velocity relation was derived from simple mechanical components. In this contribution, this theoretical yet biomechanical model is transferred to a numerical model and applied to present a proof of concept of a functional artificial muscle. Additionally, this validated theoretical model is used to determine force-velocity relations of different animal species based on literature data from biological experiments. Moreover, it is shown that an antagonistic muscle actuator can help in stabilising a single inverted pendulum model, compared with a control approach using a linear torque generator.
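
    For reference, the hyperbolic Hill force-velocity relation that the paper re-derives from mechanical components is (F + a)(v + b) = (F0 + a)·b for shortening at velocity v. A small sketch with typical normalized constants (a/F0 = 0.25 is a common textbook value, not a species-specific fit from the paper):

        def hill_force(v, F0=1.0, a=0.25, b=0.25):
            """Hill's hyperbola (F + a)(v + b) = (F0 + a)*b, solved for F(v).
            F0: isometric force; units chosen so that v_max = b*F0/a = 1."""
            return (F0 + a) * b / (v + b) - a

        for v in (0.0, 0.1, 0.5, 1.0):      # shortening velocity
            print(f"v = {v:4.1f} -> F = {hill_force(v):5.3f} F0")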

  3. Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

    Science.gov (United States)

    Matsypura, Dmytro

    In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following

  4. Computational models to determine fluiddynamical transients due to condensation induced water hammer (CIWH)

    International Nuclear Information System (INIS)

    Swidersky, Harald; Schaffrath, Andreas; Dudlik, Andreas

    2012-01-01

    Condensation-induced water hammer ('condensation hammer', CIWH) represents a dangerous phenomenon in piping systems which can endanger pipe integrity. If it cannot be excluded, it has to be taken into account in the integrity proof of components and pipe structures. Up to now, there exists no substantiated model which sufficiently determines the loads due to CIWH. Within the framework of the research alliance CIWA, a tool for estimating the potential occurrence and the magnitude of pressure loads will be developed, based on theoretical work and supported by experimental results. This first study discusses the computational models used, results of experimental observations, and gives an outlook on future techniques. (orig.)
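
    A useful first-order reference for such loads is the single-phase Joukowsky surge Δp = ρ·c·Δv, with the wave speed reduced by pipe elasticity; slug impacts in CIWH can exceed this estimate, which is one reason dedicated models are needed. A sketch with illustrative steel-pipe numbers:

        import math

        def joukowsky(rho=998.0, K=2.2e9, E=2.0e11, D=0.1, e=0.005, dv=5.0):
            """Joukowsky estimate dp = rho*c*dv, with wave speed c in a
            fluid-filled elastic pipe (K: fluid bulk modulus, E: pipe Young's
            modulus, D: bore, e: wall thickness). Illustrative values only."""
            c = math.sqrt(K / rho / (1.0 + K * D / (E * e)))
            return c, rho * c * dv

        c, dp = joukowsky()
        print(f"wave speed ~ {c:.0f} m/s, surge ~ {dp / 1e6:.1f} MPa")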

  5. Defining Effectiveness Using Finite Sets A Study on Computability

    DEFF Research Database (Denmark)

    Macedo, Hugo Daniel dos Santos; Haeusler, Edward H.; Garcia, Alex

    2016-01-01

    finite sets and uses category theory as its mathematical foundation. The model relies on the fact that every function between finite sets is computable, and that the finite composition of such functions is also computable. Our approach is an alternative to the traditional model-theoretic works, which rely on (ZFC) set theory as a mathematical foundation, and our approach is also novel when compared to the already existing works using category theory to approach computability results. Moreover, we show how to encode Turing machine computations in the model, thus concluding that the model expresses...
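
    The fact the model is built on is elementary to demonstrate: a function between finite sets is a finite lookup table, and composing two tables is itself a finite computation. A toy illustration:

        # f : {a, b, c} -> {1, 2} and g : {1, 2} -> {odd, even} as lookup tables.
        f = {"a": 1, "b": 2, "c": 1}
        g = {1: "odd", 2: "even"}

        def compose(g, f):
            """g o f, itself a finite (hence computable) function."""
            return {x: g[f[x]] for x in f}

        print(compose(g, f))   # {'a': 'odd', 'b': 'even', 'c': 'odd'}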

  6. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    Energy Technology Data Exchange (ETDEWEB)

    Rey, Michael; Tyuterev, Vladimir G. [Groupe de Spectrométrie Moléculaire et Atmosphérique, UMR CNRS 7331, BP 1039, F-51687, Reims Cedex 2 (France); Nikitin, Andrei V., E-mail: michael.rey@univ-reims.fr [Laboratory of Theoretical Spectroscopy, Institute of Atmospheric Optics, SB RAS, 634055 Tomsk (Russian Federation)

    2017-10-01

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of ¹²CH₄ in the infrared range 0-13,400 cm⁻¹ up to Tmax = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm⁻¹ and intensity cutoff down to 10⁻³³ cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001-0.01 cm⁻¹. Full data are partitioned into two sets. "Light lists" contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in "super-line" libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.

  7. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    Science.gov (United States)

    Rey, Michael; Nikitin, Andrei V.; Tyuterev, Vladimir G.

    2017-10-01

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of ¹²CH₄ in the infrared range 0-13,400 cm⁻¹ up to Tmax = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm⁻¹ and intensity cutoff down to 10⁻³³ cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001-0.01 cm⁻¹. Full data are partitioned into two sets. “Light lists” contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in “super-line” libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.

  8. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  9. Organizational Resilience: The Theoretical Model and Research Implication

    Directory of Open Access Journals (Sweden)

    Xiao Lei

    2017-01-01

    Organizations are all subject to diverse, ever-changing, and uncertain environments. In this situation, organizations should develop a capability to withstand emergencies and recover from disruptions. Based on an extensive literature review, the paper presents the main concept of organizational resilience, constructs a preliminary theoretical model, and draws some implications for management.

  10. Towards a theoretical model on medicines as a health need.

    Science.gov (United States)

    Vargas-Peláez, Claudia Marcela; Soares, Luciano; Rover, Marina Raijche Mattozo; Blatt, Carine Raquel; Mantel-Teeuwisse, Aukje; Rossi Buenaventura, Francisco Augusto; Restrepo, Luis Guillermo; Latorre, María Cristina; López, José Julián; Bürgin, María Teresa; Silva, Consuelo; Leite, Silvana Nair; Mareni Rocha, Farias

    2017-04-01

    Medicines are considered one of the main tools of western medicine to resolve health problems. Currently, medicines represent an important share of countries' healthcare budgets. In the Latin American region, access to essential medicines is still a challenge, although countries have established some measures in recent years in order to guarantee equitable access to medicines. A theoretical model is proposed for analysing the social, political, and economic factors that modulate the role of medicines as a health need and their influence on the accessibility of and access to medicines. The model was built based on a narrative review about health needs, and followed the conceptual modelling methodology for theory-building. The theoretical model considers elements (stakeholders, policies) that modulate the perception of medicines as a health need from two perspectives - health and market - at three levels: international, national and local. The perception of medicines as a health need is described according to Bradshaw's categories: felt need, normative need, comparative need and expressed need. When those different categories applied to medicines coincide, patients get access to the medicines they perceive as a need, but when the categories do not coincide, barriers to access to medicines are created. Our theoretical model, which takes a broader view of access to medicines, emphasises how the power structures, interests, interdependencies, values and principles of the stakeholders can influence the perception of medicines as a health need and the access to medicines in Latin American countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. Beyond their nanoscopic scale, the additional advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet-chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher-level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  12. An Algebra-Based Introductory Computational Neuroscience Course with Lab.

    Science.gov (United States)

    Fink, Christian G

    2017-01-01

    A course in computational neuroscience has been developed at Ohio Wesleyan University which requires no previous experience with calculus or computer programming, and which exposes students to theoretical models of neural information processing and techniques for analyzing neural data. The exploration of theoretical models of neural processes is conducted in the classroom portion of the course, while data analysis techniques are covered in lab. Students learn to program in MATLAB and are offered the opportunity to conclude the course with a final project in which they explore a topic of their choice within computational neuroscience. Results from a questionnaire administered at the beginning and end of the course indicate significant gains in student facility with core concepts in computational neuroscience, as well as with analysis techniques applied to neural data.

  13. Dynamics of a Computer Virus Propagation Model with Delays and Graded Infection Rate

    Directory of Open Access Journals (Sweden)

    Zizhen Zhang

    2017-01-01

    A four-compartment computer virus propagation model with two delays and a graded infection rate is investigated in this paper. The critical values at which a Hopf bifurcation occurs are obtained by analyzing the distribution of eigenvalues of the corresponding characteristic equation. Subsequently, the direction and stability of the Hopf bifurcation when the two delays are not equal are determined by using the normal form theory and the center manifold theorem. Finally, some numerical simulations are carried out to justify the obtained theoretical results.

  14. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    Science.gov (United States)

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of the ECM or the shear stress distribution on the cells, but less is known about the prediction of shear stress on the individual fibers or fiber networks, despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for differently structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. Through the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. In a comparison of the permeability, the present computational models matched well with previous models, which justifies our computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability was almost the same. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present
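
    The permeability estimate mentioned in the abstract follows directly from Darcy's law, k = Q·μ·L/(A·ΔP). A minimal sketch with illustrative CFD-style numbers (not the paper's values):

        def darcy_permeability(Q, mu, L, A, dP):
            """Darcy's law Q = k*A*dP/(mu*L), solved for the permeability k."""
            return Q * mu * L / (A * dP)

        k = darcy_permeability(Q=1e-13,   # volumetric flow, m^3/s
                               mu=1e-3,   # water viscosity, Pa*s
                               L=100e-6,  # domain length, m
                               A=1e-8,    # cross-sectional area, m^2
                               dP=10.0)   # pressure drop, Pa
        print(f"permeability ~ {k:.2e} m^2")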

  15. A Game Theoretic Model of Thermonuclear Cyberwar

    Energy Technology Data Exchange (ETDEWEB)

    Soper, Braden C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-23

    In this paper we propose a formal game theoretic model of thermonuclear cyberwar based on ideas found in [1] and [2]. Our intention is that such a game will act as a first step toward building more complete formal models of Cross-Domain Deterrence (CDD). We believe the proposed thermonuclear cyberwar game is an ideal place to start on such an endeavor because the game can be fashioned in a way that is closely related to the classical models of nuclear deterrence [4–6], but with obvious modifications that will help to elucidate the complexities introduced by a second domain. We start with the classical bimatrix nuclear deterrence game based on the game of chicken, but introduce uncertainty via a left-of-launch cyber capability that one or both players may possess.
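
    The classical game of chicken underlying the model has two pure-strategy Nash equilibria, each with exactly one player backing down. A minimal enumeration sketch; the payoffs use the standard textbook ordering, not values from the report:

        STRATS = ("Swerve", "Straight")
        A = [[0, -1], [1, -10]]    # row player's payoffs
        B = [[0,  1], [-1, -10]]   # column player's payoffs

        def pure_nash(A, B):
            """Enumerate pure-strategy Nash equilibria of a 2x2 bimatrix game."""
            eq = []
            for i in range(2):
                for j in range(2):
                    row_best = all(A[i][j] >= A[k][j] for k in range(2))
                    col_best = all(B[i][j] >= B[i][k] for k in range(2))
                    if row_best and col_best:
                        eq.append((STRATS[i], STRATS[j]))
            return eq

        print(pure_nash(A, B))     # [('Swerve', 'Straight'), ('Straight', 'Swerve')]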

  16. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks can be described as a complicated system: it shows non-linear features, and simulating the behaviour of such systems is difficult. Before deploying network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks, when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents a non-linear simulation model that helps us observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which was defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm, which experimentally determines and predicts the available parameters of the modelled network.
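
    The paper's bottleneck definition translates directly into code: the available end-to-end throughput is the smallest difference between link capacity and competing traffic along the path. A toy sketch with invented link figures:

        def available(link_capacity, competing_traffic):
            """Capacity left on one link: capacity minus competing traffic."""
            return max(0.0, link_capacity - competing_traffic)

        # Three-link path, Mbit/s: (capacity, competing traffic volume)
        path = [(100.0, 40.0), (1000.0, 850.0), (100.0, 95.0)]
        print(f"end-to-end available bandwidth ~ "
              f"{min(available(c, x) for c, x in path):.0f} Mbit/s")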

  17. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  18. A theoretical model for predicting neutron fluxes for cyclic Neutron ...

    African Journals Online (AJOL)

    A theoretical model has been developed for prediction of thermal neutron fluxes required for cyclic irradiations of a sample to obtain the same activity previously used for the detection of any radionuclide of interest. The model is suitable for radiotracer production or for long-lived neutron activation products where the ...
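
    The standard activation relation behind such a model is A = φ·σ·N·(1 − e^(−λ·t_irr))·e^(−λ·t_decay); solving it for φ gives the flux needed to reproduce a previously obtained activity. A sketch with illustrative inputs (the cyclic-irradiation bookkeeping of the paper's model is not reproduced):

        import math

        def required_flux(A_target, N, sigma_cm2, half_life_s, t_irr_s, t_decay_s=0.0):
            """Thermal flux (n/cm^2/s) giving activity A_target (Bq) from
            N target atoms with cross-section sigma (cm^2)."""
            lam = math.log(2.0) / half_life_s
            growth = 1.0 - math.exp(-lam * t_irr_s)
            return A_target / (N * sigma_cm2 * growth * math.exp(-lam * t_decay_s))

        # Illustrative: 1 kBq from 1e18 atoms, sigma = 1 barn, T1/2 = 10 min,
        # 5 min irradiation, counted immediately:
        print(f"required flux ~ {required_flux(1e3, 1e18, 1e-24, 600.0, 300.0):.2e} n/cm^2/s")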

  19. Benchmarking of Computational Models for NDE and SHM of Composites

    Science.gov (United States)

    Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna

    2016-01-01

    Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.

  20. δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions

    Directory of Open Access Journals (Sweden)

    Hengrong Ju

    2014-01-01

    The decision-theoretic rough set is a quite useful variant of rough sets, obtained by introducing decision costs into the probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation; such a relation may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, which is based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decrease, two different algorithms are designed to compute reducts, respectively. The comparison between these two algorithms shows the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion can generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, and it follows that the uncertainty which comes from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost-minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
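
    The probabilistic regions at the heart of any decision-theoretic rough set are easy to sketch: an equivalence class goes to the positive region when P(target | class) ≥ α and to the negative region when it is ≤ β. The crisp partition below stands in for the paper's δ-cut relation, and the thresholds are illustrative:

        def dtrs_regions(universe, equiv, target, alpha=0.7, beta=0.3):
            """Positive/negative/boundary regions with thresholds alpha, beta."""
            classes = {}
            for x in universe:
                classes.setdefault(equiv(x), []).append(x)
            pos, neg, bnd = set(), set(), set()
            for members in classes.values():
                p = sum(1 for x in members if x in target) / len(members)
                bucket = pos if p >= alpha else neg if p <= beta else bnd
                bucket.update(members)
            return pos, neg, bnd

        pos, neg, bnd = dtrs_regions(range(10), equiv=lambda x: x // 3,
                                     target={0, 1, 2, 3, 4})
        print("POS:", sorted(pos), "NEG:", sorted(neg), "BND:", sorted(bnd))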

  1. A theoretical model for the control of an enforcement system on emissions of pollutants

    International Nuclear Information System (INIS)

    Villegas, Clara Ines

    2005-01-01

    A theoretical proposal for the development of an enforcement strategy is presented in this paper. The proposal guarantees full compliance in an emission charge system with self-reporting. The proposed models are static, and mostly based on those proposed by Stranlund and Chavez (2000) for a transferable permits system with self-reporting. Theoretical models were developed for three possible violations: self-report violations, maximum emission limit violations and payment violations. Based on the theoretical results, a simulation was implemented with hypothetical data: 20 regulated firms with different marginal abatement cost functions. The variations in charge amount, monitoring costs, abatement costs, self-reported values and total costs are analyzed with each of the theoretical models under different scenarios. Our results show that the behavior of the different variables remains unchanged under the three static models, and that the only variations occur within the scenarios. Our results can serve as a tool for the formulation and design of taxing systems

  2. Building a Unified Computational Model for the Resonant X-Ray Scattering of Strongly Correlated Materials

    International Nuclear Information System (INIS)

    Bansil, Arun

    2016-01-01

    Basic-Energy Sciences of the Department of Energy (BES/DOE) has made large investments in x-ray sources in the U.S. (NSLS-II, LCLS, NGLS, ALS, APS) as powerful enabling tools for opening up unprecedented new opportunities for exploring properties of matter at various length and time scales. The coming online of the pulsed photon source literally allows us to see and follow the dynamics of processes in materials at their natural timescales. There is an urgent need therefore to develop theoretical methodologies and computational models for understanding how x-rays interact with matter and the related spectroscopies of materials. The present project addressed aspects of this grand challenge of X-ray science. In particular, our Collaborative Research Team (CRT) focused on understanding and modeling of elastic and inelastic resonant X-ray scattering processes. We worked to unify the three different computational approaches currently used for modeling X-ray scattering-density functional theory, dynamical mean-field theory, and small-cluster exact diagonalization-to achieve a more realistic material-specific picture of the interaction between X-rays and complex matter. To achieve a convergence in the interpretation and to maximize complementary aspects of different theoretical methods, we concentrated on the cuprates, where most experiments have been performed. Our team included both US and international researchers, and it fostered new collaborations between researchers currently working with different approaches. In addition, we developed close relationships with experimental groups working in the area at various synchrotron facilities in the US. Our CRT thus helped toward enabling the US to assume a leadership role in the theoretical development of the field, and to create a global network and community of scholars dedicated to X-ray scattering research.

  3. Building a Unified Computational Model for the Resonant X-Ray Scattering of Strongly Correlated Materials

    Energy Technology Data Exchange (ETDEWEB)

    Bansil, Arun [Northeastern Univ., Boston, MA (United States)

    2016-12-01

    Basic-Energy Sciences of the Department of Energy (BES/DOE) has made large investments in x-ray sources in the U.S. (NSLS-II, LCLS, NGLS, ALS, APS) as powerful enabling tools for opening up unprecedented new opportunities for exploring properties of matter at various length and time scales. The coming online of the pulsed photon source literally allows us to see and follow the dynamics of processes in materials at their natural timescales. There is an urgent need therefore to develop theoretical methodologies and computational models for understanding how x-rays interact with matter and the related spectroscopies of materials. The present project addressed aspects of this grand challenge of X-ray science. In particular, our Collaborative Research Team (CRT) focused on understanding and modeling of elastic and inelastic resonant X-ray scattering processes. We worked to unify the three different computational approaches currently used for modeling X-ray scattering—density functional theory, dynamical mean-field theory, and small-cluster exact diagonalization—to achieve a more realistic material-specific picture of the interaction between X-rays and complex matter. To achieve a convergence in the interpretation and to maximize complementary aspects of different theoretical methods, we concentrated on the cuprates, where most experiments have been performed. Our team included both US and international researchers, and it fostered new collaborations between researchers currently working with different approaches. In addition, we developed close relationships with experimental groups working in the area at various synchrotron facilities in the US. Our CRT thus helped toward enabling the US to assume a leadership role in the theoretical development of the field, and to create a global network and community of scholars dedicated to X-ray scattering research.

  4. Theoretical models for supercritical fluid extraction.

    Science.gov (United States)

    Huang, Zhen; Shi, Xiao-Han; Jiang, Wei-Juan

    2012-08-10

    For the proper design of supercritical fluid extraction processes, it is essential to have a sound knowledge of the mass transfer mechanism of the extraction process and its appropriate mathematical representation. In this paper, the advances and applications of kinetic models for describing supercritical fluid extraction from various solid matrices are presented. The theoretical models reviewed here include the hot-ball diffusion, broken and intact cell, shrinking core and some relatively simple models. The mathematical representations of these models are interpreted in detail, together with their assumptions, parameter identification and application examples. Extraction of an analyte solute from a solid matrix by means of a supercritical fluid involves the dissolution of the analyte from the solid, the diffusion of the analyte in the matrix and its transport to the bulk supercritical fluid. The mechanisms involved in a mass transfer model are discussed in terms of external mass transfer resistance, internal mass transfer resistance, solute-solid interactions and axial dispersion. The correlations of the external mass transfer coefficient and the axial dispersion coefficient with certain dimensionless numbers are also discussed. Among these models, the broken and intact cell model seems to be the most relevant mathematical model, as it is able to provide a realistic description of the plant material structure for better understanding the mass-transfer kinetics; thus it has been widely employed for modeling supercritical fluid extraction of natural materials. Copyright © 2012 Elsevier B.V. All rights reserved.
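
    Of the models surveyed, the hot-ball model has the most compact closed form: the extracted fraction from a sphere of radius r is E(t) = 1 − (6/π²)·Σ (1/n²)·exp(−n²π²Dt/r²). A sketch with illustrative diffusivity and particle size:

        import math

        def fraction_extracted(D, r, t, terms=200):
            """Hot-ball diffusion model: solute fraction extracted from a sphere.
            D: effective diffusivity (m^2/s), r: particle radius (m), t: time (s)."""
            s = sum(math.exp(-n**2 * math.pi**2 * D * t / r**2) / n**2
                    for n in range(1, terms + 1))
            return 1.0 - (6.0 / math.pi**2) * s

        for minutes in (5, 30, 120):        # illustrative: D = 1e-12 m^2/s, r = 0.5 mm
            E = fraction_extracted(1e-12, 5e-4, minutes * 60.0)
            print(f"t = {minutes:3d} min -> extracted {100.0 * E:5.1f}%")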

  5. Computational Science in Armenia (Invited Talk)

    Science.gov (United States)

    Marandjian, H.; Shoukourian, Yu.

    This survey is devoted to the development of informatics and computer science in Armenia. The results in theoretical computer science (algebraic models, solutions to systems of general-form recursive equations, methods of coding theory, pattern recognition and image processing) constitute the theoretical basis for developing problem-solving-oriented environments. Examples include: a synthesizer of optimized distributed recursive programs, software tools for cluster-oriented implementations of two-dimensional cellular automata, and a grid-aware web interface with advanced service trading for linear algebra calculations. In the direction of solving scientific problems that require high-performance computing resources, completed projects include work in physics (parallel computing of complex quantum systems), astrophysics (the Armenian virtual laboratory), biology (a molecular dynamics study of the human red blood cell membrane), and meteorology (implementing and evaluating the Weather Research and Forecasting Model for the territory of Armenia). The overview also notes that the Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia has established a scientific and educational infrastructure uniting the computing clusters of scientific and educational institutions of the country and provides the scientific community with access to local and international computational resources, which is a strong support for computational science in Armenia.

  6. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10⁶ MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10⁶ MIPS) and physics analysis (0.5 × 10⁶ MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...

  7. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value. On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were

  8. A theoretical model of water and trade

    Science.gov (United States)

    Dang, Qian; Konar, Megan; Reimer, Jeffrey J.; Di Baldassarre, Giuliano; Lin, Xiaowen; Zeng, Ruijie

    2016-03-01

    Water is an essential input for agricultural production. Agriculture, in turn, is globalized through the trade of agricultural commodities. In this paper, we develop a theoretical model that emphasizes four tradeoffs involving water-use decision-making that are important yet not always considered in a consistent framework. One tradeoff focuses on competition for water among different economic sectors. A second tradeoff examines the possibility that certain types of agricultural investments can offset water use. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using commodities. The fourth tradeoff concerns how variability in water supplies influences farmer decision-making. We show conditions under which trade liberalization affects water use. Two policy scenarios to reduce water use are evaluated. First, we derive a target tax that reduces water use without offsetting the gains from trade liberalization, although important tradeoffs exist between economic performance and resource use. Second, we show how subsidization of water-saving technologies can allow producers to use less water without reducing agricultural production, making such subsidization an indirect means of influencing water-use decision-making. Finally, we outline conditions under which the riskiness of water availability affects water use. These theoretical model results generate hypotheses that can be tested empirically in future work.

  9. PREFACE: Euro-TMCS I: Theory, Modelling and Computational Methods for Semiconductors

    Science.gov (United States)

    Gómez-Campos, F. M.; Rodríguez-Bolívar, S.; Tomić, S.

    2015-05-01

    The present issue contains a selection of the best contributed works presented at the first Euro-TMCS conference (Theory, Modelling and Computational Methods for Semiconductors, European Session). The conference was held at the Faculty of Sciences, Universidad de Granada, Spain on 28th-30th January 2015. This conference is the first European edition of the TMCS conference series, which started in 2008 at the University of Manchester and has until now always been held in the United Kingdom. Four previous editions have been held (Manchester 2008, York 2010, Leeds 2012 and Salford 2014). Euro-TMCS runs for three days; the first is devoted to invited tutorials, aimed particularly at students, on recent developments in theoretical methods. On this occasion the session focused on the presentation of widely-used computational methods for the modelling of physical processes in semiconductor materials. Freely available simulation software (SIESTA, Quantum Espresso and Yambo) as well as commercial software (TiberCad and MedeA) was presented at the conference by members of the respective development teams, offering the audience an overview of their capabilities for research. The second part of the conference showcased prestigious invited and contributed oral presentations, alongside poster sessions in which direct discussion with authors was promoted. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in semiconductor science and technology. Theoretical approaches represented in this meeting included: Density Functional Theory, Semi-empirical Electronic Structure Methods, Multi-scale Approaches, Modelling of PV devices, Electron Transport, and Graphene. Topics included, but were not limited to: Optical Properties of Quantum Nanostructures including Colloids and Nanotubes, Plasmonics, Magnetic Semiconductors, Photonic Structures, and Electronic Devices. The Editors Acknowledgments: We would like to thank all

  10. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  11. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    Science.gov (United States)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), and the framework provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium with which to develop model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.

  12. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  13. A game theoretic investigation of deception in network security

    Energy Technology Data Exchange (ETDEWEB)

    Carroll, Thomas E.; Grosu, Daniel

    2010-12-03

    We perform a game theoretic investigation of the effects of deception on the interactions between an attacker and a defender of a computer network. The defender can employ camouflage by either disguising a normal system as a honeypot or by disguising a honeypot as a normal system. We model the interactions between defender and attacker using a signaling game, a non-cooperative two-player dynamic game of incomplete information. For this model, we determine which strategies admit perfect Bayesian equilibria. These equilibria are refined Nash equilibria in which neither the defender nor the attacker will unilaterally choose to deviate from their strategies. Finally, we discuss the benefits of employing deceptive equilibrium strategies in the defense of a computer network.
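
    To make the signaling-game setting concrete, a toy sketch with invented payoffs (the paper's actual game tree and payoff values are not reproduced here) shows how the attacker's best response depends on its belief about whether a system advertising itself as normal really is normal:

        # Hypothetical (defender_payoff, attacker_payoff) indexed by (true_type, attacker_action).
        payoff = {
            ('normal',   'attack'):   (-5,  5),   # real system compromised
            ('normal',   'withdraw'): ( 0,  0),
            ('honeypot', 'attack'):   ( 5, -5),   # attacker wastes effort, defender learns
            ('honeypot', 'withdraw'): ( 0,  0),
        }

        def attacker_best_response(belief_normal):
            """Best response given the attacker's belief that the signaler is normal."""
            ev_attack = (belief_normal * payoff[('normal', 'attack')][1]
                         + (1 - belief_normal) * payoff[('honeypot', 'attack')][1])
            return 'attack' if ev_attack > 0 else 'withdraw'

        # If the defender pools on one signal, the attacker is left with only its prior:
        for prior in (0.3, 0.5, 0.7):
            print(prior, attacker_best_response(prior))  # withdraw, withdraw, attack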

  14. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students the possibility to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students the possibility to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes taking advantage of basic scientific computation methods and tools.

  15. Computer models for economic and silvicultural decisions

    Science.gov (United States)

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decision-making to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  16. Theoretical Hill-Type Muscle and Stability: Numerical Model and Application

    Directory of Open Access Journals (Sweden)

    S. Schmitt

    2013-01-01

    Full Text Available The construction of artificial muscles is one of the most challenging developments in today’s biomedical science. The application of artificial muscles is focused both on the construction of orthotics and prosthetics for rehabilitation and prevention purposes and on building humanoid walking machines for robotics research. Research in biomechanics tries to explain the functioning and design of real biological muscles and therefore lays the foundation for the development of functional artificial muscles. Recently, the hyperbolic Hill-type force-velocity relation was derived from simple mechanical components. In this contribution, this theoretical yet biomechanical model is transferred to a numerical model and applied to present a proof-of-concept of a functional artificial muscle. Additionally, this validated theoretical model is used to determine force-velocity relations of different animal species, based on literature data from biological experiments. Moreover, it is shown that an antagonistic muscle actuator can help in stabilising a single inverted pendulum model, in comparison with a control approach using a linear torque generator.
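
    For reference, the hyperbolic Hill force-velocity relation referred to above has, in its classic form,

        (F + a)(v + b) = (F_0 + a)\,b,

    where F is the contractile force at shortening velocity v, F_0 is the maximum isometric force, and a and b are the Hill constants of the muscle.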

  17. A theoretical model to describe progressions and regressions for exercise rehabilitation.

    Science.gov (United States)

    Blanchard, Sam; Glasgow, Phil

    2014-08-01

    This article aims to describe a new theoretical model to simplify and aid visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists but is an area that is lacking in theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack any visual aids to assist clinicians. The concept of the proposed model is that new stimuli can be added or exchanged with other stimuli, either intrinsic or extrinsic to the participant, in order to gradually progress an exercise whilst remaining safe and effective. The proposed model maintains the core skills of physiotherapists by assisting clinical reasoning skills, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by clinicians of any skill level. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  19. Theoretical analysis of Sloshing effect on Pitch Angle to optimize quick dive on littoral submarine 22 M

    Science.gov (United States)

    Sinaga, L. T. P.

    2016-11-01

    This study considers an analytic theoretical model. The submarine was considered to be a rigid body in a free-sailing model with various angles of attack for a quick dive as pitching motion, using a floating-body mechanism supported by the analytic model to describe the theoretical analysis test. The case considered is a fluid level of 30% in the front ballast tank and various pitch angles. The paper describes a study based on the analytic theory and on modelling in CFD (Computational Fluid Dynamics). In the analysis, special care was taken of sloshing on the free surface of the after-peak and fore-peak ballast tanks. In general, both methods (analytic model and CFD model) demonstrated good agreement, particularly in the consistent trend of the RAO.

  20. Computational principles of syntax in the regions specialized for language: integrating theoretical linguistics and functional neuroimaging.

    Science.gov (United States)

    Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L

    2013-01-01

    The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. Future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties.
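
    To illustrate the DoM as the "maximum depth of merged subtrees", a minimal sketch (the tree encoding and depth convention here are our assumptions, not the authors' formal definition):

        # Degree of Merger (DoM) read as the maximum depth of nested Merge applications.
        def degree_of_merger(node):
            """A bare lexical item (str) has DoM 0; Merge of two constituents adds 1."""
            if isinstance(node, str):
                return 0
            left, right = node             # Merge always combines exactly two parts
            return 1 + max(degree_of_merger(left), degree_of_merger(right))

        # "[[the dog] [chased [the cat]]]" encoded as nested pairs:
        sentence = (("the", "dog"), ("chased", ("the", "cat")))
        print(degree_of_merger(sentence))  # -> 3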

  1. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Robert K [ORNL; Sheldon, Frederick T. [University of Idaho

    2015-01-01

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends both on cyber components and on physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses by the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.

  2. Theoretical methods and models for mechanical properties of soft biomaterials

    Directory of Open Access Journals (Sweden)

    Zhonggang Feng

    2017-06-01

    Full Text Available We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.

  3. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  4. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that the I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments.

  5. Theoretical Modeling of Rock Breakage by Hydraulic and Mechanical Tool

    Directory of Open Access Journals (Sweden)

    Hongxiang Jiang

    2014-01-01

    Full Text Available Rock breakage by coupled mechanical and hydraulic action has been developed over the past several decades, but theoretical study on rock fragmentation by a mechanical tool with water pressure assistance was still lacking. The theoretical model of rock breakage by a mechanical tool was developed based on rock fracture mechanics and the solution of Boussinesq’s problem, and it could explain the process of rock fragmentation as well as predict the peak reacting force. The theoretical model of rock breakage by coupled mechanical and hydraulic action was developed according to the superposition principle of intensity factors at the crack tip, and the reacting force of the mechanical tool assisted by hydraulic action could be reduced obviously if a crack with a critical length could be produced by mechanical or hydraulic impact. The experimental results indicated that the peak reacting force could be reduced by about 15% assisted by medium water pressure, and the quick reduction of the reacting force after the peak value decreased the specific energy consumption of rock fragmentation by the mechanical tool. The crack formation by mechanical or hydraulic impact was the prerequisite to improvement of the ability of combined breakage.

  6. A game theoretic model of the Northwestern European electricity market-market power and the environment

    International Nuclear Information System (INIS)

    Lise, Wietze; Linderhof, Vincent; Kuik, Onno; Kemfert, Claudia; Ostling, Robert; Heinzow, Thomas

    2006-01-01

    This paper develops a static computational game theoretic model. Illustrative results for the liberalising European electricity market are given to demonstrate the type of economic and environmental results that can be generated with the model. The model is empirically calibrated to eight Northwestern European countries, namely Belgium, Denmark, Finland, France, Germany, The Netherlands, Norway, and Sweden. Different market structures are compared, depending on the ability of firms to exercise market power, ranging from perfect competition without market power to strategic competition where large firms exercise market power. In addition, a market power reduction policy is studied where the near-monopolies in France and Belgium are demerged into smaller firms. To analyse environmental impacts, a fixed greenhouse gas emission reduction target is introduced under different market structures. The results indicate that the effects of liberalisation depend on the resulting market structure, but that a reduction in market power of large producers may be beneficial for both the consumer (i.e. lower prices) and the environment (i.e. lower greenhouse gas permit price and lower acidifying and smog emissions)
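
    As a toy illustration of how market power shows up in such models, consider a two-firm Cournot sketch with invented linear demand and cost parameters (the paper's calibrated eight-country model is far richer than this):

        # Two-firm Cournot example with assumed parameters, not the paper's data.
        a, b = 100.0, 1.0        # inverse demand: p = a - b*(q1 + q2)
        c1, c2 = 20.0, 30.0      # constant marginal costs

        # Nash equilibrium from the first-order conditions q_i = (a - c_i - b*q_j)/(2b):
        q1 = (a - 2*c1 + c2) / (3*b)
        q2 = (a - 2*c2 + c1) / (3*b)
        price = a - b * (q1 + q2)
        print(q1, q2, price)     # 30.0 20.0 50.0 -> price above both marginal costs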

  7. A Theoretical Bayesian Game Model for the Vendor-Retailer Relation

    Directory of Open Access Journals (Sweden)

    Emil CRIŞAN

    2012-06-01

    Full Text Available We consider an equilibrated supply chain with two equal partners, a vendor and a retailer (also called a newsboy-type products supply chain). The actions of each partner are driven by profit. Given the fact that at the supply chain level there are specific external influences which affect the costs and, consequently, the profit, we use a game theoretic model for the situation, considering costs and demand. At the theoretical level, symmetric and asymmetric information patterns are considered for this situation. At every level of the supply chain there are situations when external factors (such as inflation or raw-material rates) influence the situation of each partner even if the information is well shared within the chain. The model we propose considers both the external factors and asymmetric information within a supply chain.

  8. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models, by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a developed package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The studied area comprises the Federal District of Brazil, with ~6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the studied area show the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity data, SRTM data with 3 arc second of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 randomly spaced points where the geoid was computed by the geometrical levelling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
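
    In generic textbook form (the notation below is ours, not the record's), the RCR decomposition splits the geoid height into long-, medium- and short-wavelength contributions:

        N = N_{GGM} + N_{\Delta g_{res}} + N_{ind},
        \qquad
        \Delta g_{res} = \Delta g - \Delta g_{GGM} - \Delta g_{topo},

    where N_{GGM} comes from the global geopotential coefficients (here EIGEN-6C4), N_{\Delta g_{res}} from Stokes integration of the residual gravity anomalies, and N_{ind} is the indirect (terrain) effect restored from the DTM.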

  9. Molecular hyperpolarizabilities of push–pull chromophores: A comparison between theoretical and experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Capobianco, A. [Dipartimento di Fisica E.R. Caianiello, Università di Salerno, via ponte don Melillo, I-84084 Fisciano (Italy); Centore, R. [Dipartimento di Chimica P. Corradini, Università di Napoli, via Cintia, I-80126 Napoli (Italy); Noce, C. [Dipartimento di Fisica E.R. Caianiello, Università di Salerno, via ponte don Melillo, I-84084 Fisciano (Italy); Peluso, A., E-mail: apeluso@unisa.it [Dipartimento di Chimica e Biologia, Università di Salerno, via ponte don Melillo, I-84084 Fisciano (Italy)

    2013-01-16

    Highlights: ► Electro-optically determined and MP2/DFT computed NLO properties have been compared. ► A significant dependence of the dipole moments of elongated NLO chromophores on conformation has been found. ► A thorough comparison between MP2 and DFT/TD-DFT computational approaches has been carried out. ► The two-state model overestimates hyperpolarizability. - Abstract: Electric dipole moments and static first order hyperpolarizabilities of two push–pull molecules with extended π-electron systems have been evaluated at different computational levels and compared with the results of electro-optical absorption measurements, based on the two-state model. Calculations show that: (i) the dipole moments of such elongated systems depend significantly on conformation; a thorough conformational search is necessary for a meaningful comparison between theoretical and experimental results; (ii) DFT methods, in particular CAM-B3LYP and M05-2X, yield dipole moments which compare well with those obtained by post-Hartree–Fock methods (MP2) and by EOA measurements; (iii) theoretical first order hyperpolarizabilities are largely underestimated, both by MP2 and DFT methods, possibly because of the failure of the two-state model used in electro-optical measurements.
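
    For reference, the static two-state (two-level) expression alluded to above has, in one common convention (the exact prefactor depends on the convention used),

        \beta_0 \propto \frac{3\,\Delta\mu\,\mu_{ge}^{2}}{E_{ge}^{2}},

    where \Delta\mu is the change of dipole moment upon excitation, \mu_{ge} the transition dipole moment and E_{ge} the transition energy.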

  10. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    Full Text Available The purpose of this paper is to give a comprehensive theoretical review devoted to asset pricing models by emphasizing static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as the way to evaluate the present value of the payoffs or cash flows discounted for risk and time lags. The difficulty coming from the discounting process is that the relevant factors that affect the payoffs vary through time, whereas the theoretical framework is still useful to incorporate the changing factors into an asset pricing model. This paper fills the gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  11. Experimental and theoretical studies of near-ground acoustic radiation propagation in the atmosphere

    Science.gov (United States)

    Belov, Vladimir V.; Burkatovskaya, Yuliya B.; Krasnenko, Nikolai P.; Rakov, Aleksandr S.; Rakov, Denis S.; Shamanaeva, Liudmila G.

    2017-11-01

    Results of experimental and theoretical studies of the process of near-ground propagation of monochromatic acoustic radiation on atmospheric paths from a source to a receiver are presented, taking into account the contribution of multiple scattering from fluctuations of atmospheric temperature and wind velocity, the refraction of sound on the wind velocity and temperature gradients, and its reflection by the underlying surface, for different models of the atmosphere and depending on the sound frequency, the coefficient of reflection from the underlying surface, the propagation distance, and the source and receiver altitudes. Calculations were performed by the Monte Carlo method using the local estimation algorithm in a computer program developed by the authors. Results of experimental investigations under controllable conditions are compared with theoretical estimates and with results of analytical calculations for the Delany-Bazley impedance model. Satisfactory agreement of the data obtained confirms the correctness of the suggested computer program.

  12. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance stages. The...

  13. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  14. Computational models to determine fluid dynamical transients due to condensation induced water hammer (CIWH)

    International Nuclear Information System (INIS)

    Swidersky, H.; Schaffrath, A.; Dudlik, A.

    2011-01-01

    Condensation induced water hammer (CIWH) represents a dangerous phenomenon in piping systems, which can endanger pipe integrity. If it cannot be excluded, it has to be taken into account in the integrity proof of components and pipe structures. Up to now, there exists no substantiated model which sufficiently determines loads due to CIWH. Within the framework of the research alliance CIWA, a tool for estimating the potential and the magnitude of pressure loads will be developed, based on theoretical work and supported by experimental results. This first study discusses the computational models used, compares their results against experimental observations and gives an outlook on future techniques. (author)

  15. Tesla coil theoretical model and experimental verification

    OpenAIRE

    Voitkans, Janis; Voitkans, Arnis

    2014-01-01

    Abstract – In this paper a theoretical model of Tesla coil operation is proposed. The Tesla coil is described as a long line with distributed parameters in a single-wired format, where the line voltage is measured against electrically neutral space. It is shown that an equivalent two-wired scheme can be found for a single-wired scheme and that the already known long-line theory can be applied to a Tesla coil. Formulas for the calculation of voltage in a Tesla coil by coordinate and calculation of resonance fre...

  16. Theoretical Atomic Physics code development II: ACE: Another collisional excitation code

    International Nuclear Information System (INIS)

    Clark, R.E.H.; Abdallah, J. Jr.; Csanak, G.; Mann, J.B.; Cowan, R.D.

    1988-12-01

    A new computer code for calculating collisional excitation data (collision strengths or cross sections) using a variety of models is described. The code uses data generated by the Cowan Atomic Structure code or CATS for the atomic structure. Collisional data are placed on a random access file and can be displayed in a variety of formats using the Theoretical Atomic Physics Code or TAPS. All of these codes are part of the Theoretical Atomic Physics code development effort at Los Alamos. 15 refs., 10 figs., 1 tab

  17. Experimental control of Stewart's theoretical model of large amplitude moving striations

    International Nuclear Information System (INIS)

    Berge, G. van den; Vanmarcke, M.

    1977-01-01

    The longitudinal variation of the electron concentration in large amplitude moving striations, computed theoretically by Stewart, has been tested experimentally. The measurements are carried out by means of a sampling probe technique in the glow discharge of neon (I = 105 mA, 2R = 5.6 cm, p_0 = 0.79 torr) and of argon (I = 75 mA, 2R = 5.7 cm, p_0 = 0.46 torr). It is found that the measured dependence of the concentration is not consistent with the theory. (Auth.)

  18. Theoretical Calculations of Atomic Data for Spectroscopy

    Science.gov (United States)

    Bautista, Manuel A.

    2000-01-01

    Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and to assess how reliable spectral models based on those data are.

  19. Computational study of nonlinear plasma waves. I. Simulation model and monochromatic wave propagation

    International Nuclear Information System (INIS)

    Matsuda, Y.; Crawford, F.W.

    1975-01-01

    An economical low-noise plasma simulation model originated by Denavit is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of magnetic field. The model is described and tested, first in the absence of an applied signal, and then with a small amplitude perturbation. These tests serve to establish the low-noise features of the model, and to verify the theoretical linear dispersion relation at wave energy levels as low as 10⁻⁶ of the plasma thermal energy. Better quantitative results are obtained, for comparable computing time, than can be obtained by conventional particle simulation models, or direct solution of the Vlasov equation. The method is then used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories.
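
    As background (the record does not state which form was used), the standard linear dispersion relation for electrostatic Langmuir waves in a Maxwellian plasma is, to lowest order in k²λ_D², the Bohm-Gross relation

        \omega^{2} \simeq \omega_{pe}^{2}\left(1 + 3 k^{2} \lambda_{D}^{2}\right),

    where \omega_{pe} is the electron plasma frequency and \lambda_{D} the Debye length.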

  20. Theoretical Model for the Performance of Liquid Ring Pump Based on the Actual Operating Cycle

    Directory of Open Access Journals (Sweden)

    Si Huang

    2017-01-01

    Full Text Available The liquid ring pump is widely applied in many industrial fields due to the advantages of an isothermal compression process, simple structure, and liquid sealing. Based on the actual operating cycle of “suction-compression-discharge-expansion,” a universal theoretical model for the performance of liquid ring pumps was established in this study, to address the problem that previous theoretical models deviate from the actual performance over the operating cycle. With the major geometric parameters and operating conditions of a liquid ring pump, performance parameters such as the actual capacity for suction and discharge, shaft power, and global efficiency can be conveniently predicted by the proposed theoretical model, without the limitation of an empirical range, performance data, or the detailed 3D geometry of pumps. The proposed theoretical model was verified against experimental performances of liquid ring pumps and could provide a feasible tool for the application of the liquid ring pump.

  1. Research on application of intelligent computation based LUCC model in urbanization process

    Science.gov (United States)

    Chen, Zemin

    2007-06-01

    Global change study is an interdisciplinary and comprehensive research activity with international cooperation that arose in the 1980s and has the largest scope. The interaction between land use and cover change (LUCC), as a research field at the crossing of natural science and social science, has become one of the core subjects of global change study as well as its frontier and focus. It is necessary to develop research on land use and cover change in the urbanization process and to build an analogue model of urbanization in order to describe, simulate and analyse the dynamic behaviours of urban development change as well as to understand the basic characteristics and rules of the urbanization process. This has positive practical and theoretical significance for formulating urban and regional sustainable development strategies. The effect of urbanization on land use and cover change is mainly embodied in the change of the quantity structure and space structure of urban space, and the LUCC model of the urbanization process has been an important research subject of urban geography and urban planning. In this paper, based upon previous research achievements, the writer systematically analyzes the research on land use/cover change in the urbanization process with the theories of complexity science and intelligent computation; builds a model for simulating and forecasting the dynamic evolution of urban land use and cover change, on the basis of the cellular automaton model of complexity science and multi-agent theory; and expands the Markov model, the traditional CA model and the Agent model, introducing complexity science theory and intelligent computation theory into LUCC research to build an intelligent-computation-based LUCC model for analogue research on land use and cover change in urbanization research, and performs case research. The concrete contents are as follows: 1. Complexity of LUCC research in the urbanization process. Analyze the urbanization process in combination with the contents
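
    A toy sketch of the CA-style update at the heart of such LUCC models (a simple neighbourhood rule with assumed states and threshold, not the paper's calibrated multi-agent model):

        import numpy as np

        URBAN, NONURBAN = 1, 0

        def step(grid, threshold=3):
            """One CA update: a non-urban cell urbanises when at least
            `threshold` of its 8 neighbours are already urban."""
            padded = np.pad(grid, 1)                       # zero ring avoids wrap-around
            neigh = sum(np.roll(np.roll(padded, i, axis=0), j, axis=1)
                        for i in (-1, 0, 1) for j in (-1, 0, 1)
                        if (i, j) != (0, 0))[1:-1, 1:-1]   # 8-neighbour counts
            out = grid.copy()
            out[(grid == NONURBAN) & (neigh >= threshold)] = URBAN
            return out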

  2. Decision support models for solid waste management: Review and game-theoretic approaches

    International Nuclear Information System (INIS)

    Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

    2013-01-01

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed

  3. Decision support models for solid waste management: Review and game-theoretic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Karmperis, Athanasios C., E-mail: athkarmp@mail.ntua.gr [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece); Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence (Greece); Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece)

    2013-05-15

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.

  4. Toward a Theoretical Model of Employee Turnover: A Human Resource Development Perspective

    Science.gov (United States)

    Peterson, Shari L.

    2004-01-01

    This article sets forth the Organizational Model of Employee Persistence, influenced by traditional turnover models and a student attrition model. The model was developed to clarify the impact of organizational practices on employee turnover from a human resource development (HRD) perspective and provide a theoretical foundation for research on…

  5. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by the recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, the STICKY and SF-GD models have better performance in characterizing the structural human brain network.
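
    A minimal sketch of the comparison logic (assumed metrics and stand-in graph generators; the DICCCOL/DTI processing is omitted):

        import networkx as nx

        def graph_profile(G):
            """A few of the global graph properties typically compared."""
            return {
                "clustering": nx.average_clustering(G),
                "path_length": nx.average_shortest_path_length(G),
                "assortativity": nx.degree_assortativity_coefficient(G),
            }

        # 358 nodes, matching the DICCCOL landmark count; both graphs are stand-ins.
        brain = nx.connected_watts_strogatz_graph(358, 6, 0.2, seed=1)
        model = nx.barabasi_albert_graph(358, 3, seed=1)
        print(graph_profile(brain))
        print(graph_profile(model))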

  6. Measuring and Managing Value Co-Creation Process: Overview of Existing Theoretical Models

    Directory of Open Access Journals (Sweden)

    Monika Skaržauskaitė

    2013-08-01

    Full Text Available Purpose — to provide a holistic view of the concept of value co-creation and of the existing models for measuring and managing it, by conducting a theoretical analysis of scientific literature sources targeting the integration of various approaches. The most important and relevant results of the literature study are presented with a focus on the changed roles of organizations and consumers. This article aims at contributing theoretically to the research stream of measuring co-creation of value, in order to gain knowledge for the improvement of organizational performance and to enable new and innovative means of value creation. Design/methodology/approach. The nature of this research is exploratory — a theoretical analysis and synthesis of scientific literature sources targeting the integration of various approaches was performed. This approach was chosen due to the absence of an established theory on models of co-creation, their possible uses in organizations, and a systematic overview of tools measuring (or suggesting how to measure) co-creation. Findings. While the principles of managing and measuring co-creation with regard to consumer motivation and involvement are widely researched, little attempt has been made to identify critical factors and create models dealing with the organizational capabilities and managerial implications of value co-creation. A systematic analysis of the literature revealed a gap not only in empirical research concerning the organization's role in the co-creation process, but at the theoretical and conceptual levels, too. Research limitations/implications. The limitations of this work as a literature review lie in its nature — the complete reliance on previously published research papers and the availability of these studies. For a deeper understanding of co-creation management and for developing models that can be used in real-life organizations, broader theoretical as well as empirical research is necessary. Practical implications. Analysis of the

  7. Bridging computational approaches to speech production: The semantic–lexical–auditory–motor model (SLAM)

    Science.gov (United States)

    Hickok, Gregory

    2017-01-01

    Speech production is studied from both psycholinguistic and motor-control perspectives, with little interaction between the approaches. We assessed the explanatory value of integrating psycholinguistic and motor-control concepts for theories of speech production. By augmenting a popular psycholinguistic model of lexical retrieval with a motor-control-inspired architecture, we created a new computational model to explain speech errors in the context of aphasia. Comparing the model fits to picture-naming data from 255 aphasic patients, we found that our new model improves fits for a theoretically predictable subtype of aphasia: conduction. We discovered that the improved fits for this group were a result of strong auditory-lexical feedback activation, combined with weaker auditory-motor feedforward activation, leading to increased competition from phonologically related neighbors during lexical selection. We discuss the implications of our findings with respect to other extant models of lexical retrieval. PMID:26223468

  8. Models of parallel computation: a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state-of-the-art parallel computational model research is reviewed. We introduce various models that were developed during the past decades. According to their targeted architecture features, especially memory organization, we classify these parallel computational models into three generations. These models and their characteristics are discussed based on the three-generation classification. We believe that, with the ever increasing speed gap between the CPU and memory systems, incorporating non-uniform memory hierarchies into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated. Describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers could reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  9. Theoretical and experimental studies on critical heat flux in subcooled boiling and vertical flow geometry

    International Nuclear Information System (INIS)

    Staron, E.

    1996-01-01

    Critical heat flux is a very important subject of interest for the design, operation and safety analysis of nuclear power plants. Every new design of the core must be thoroughly checked. Experimental studies have been performed using freon as a working fluid. The possibility of transferring the results into water equivalents has been proved. The experimental study covers vertical flow and annular geometry over a wide range of pressure, mass flow and temperature at the inlet of the test section. Theoretical models of critical heat flux are presented, but only those which cover DNB. Computer programs allowing for numerical calculations using the theoretical models have been developed. A validation of the theoretical models has been performed against the experimental results. (author). 83 refs, 32 figs, 4 tabs

  10. The demand-induced strain compensation model : renewed theoretical considerations and empirical evidence

    NARCIS (Netherlands)

    de Jonge, J.; Dormann, C.; van den Tooren, M.; Näswall, K.; Hellgren, J.; Sverke, M.

    2008-01-01

    This chapter presents a recently developed theoretical model of job-related stress and performance, the so-called Demand-Induced Strain Compensation (DISC) model. The DISC model predicts in general that the adverse health effects of high job demands can best be compensated for by matching job resources

  11. Nursing management of sensory overload in psychiatry – Theoretical densification and modification of the framework model

    Science.gov (United States)

    Scheydt, Stefan; Needham, Ian; Behrens, Johann

    2017-01-01

    Background: Within the scope of a research project on the subjects of sensory overload and stimulus regulation, a theoretical framework model of the nursing care of patients with sensory overload in psychiatry was developed. In a second step, this theoretical model should now be theoretically densified and, if necessary, modified. Aim: Empirical verification as well as modification, enhancement and theoretical densification of the framework model of nursing care of patients with sensory overload in psychiatry. Method: Analysis of 8 expert interviews by summarizing and structuring content analysis methods based on Meuser and Nagel (2009) as well as Mayring (2010). Results: The developed framework model (Scheydt et al., 2016b) could be empirically verified, theoretically densified and extended by one category (perception modulation). Thus, four categories of the nursing care of patients with sensory overload can be described in inpatient psychiatry: removal from stimuli, modulation of environmental factors, perceptual modulation, and helping patients to help themselves / coping support. Conclusions: Based on the methodological approach, a relatively well-saturated, credible conceptualization of a theoretical model for describing the nursing care of patients with sensory overload in inpatient psychiatry could be worked out. In further steps, these measures have to be further developed, implemented and evaluated with regard to their efficacy.

  12. Computer model for ductile fracture

    International Nuclear Information System (INIS)

    Moran, B.; Reaugh, J. E.

    1979-01-01

    A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the results are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J_Ic. A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy; the results were compared with the empirical correlation of Rolfe and Novak

  13. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for fulfilling intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospital is realized; experimental results show that it is efficient in handling simple emotions.

  14. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  15. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  16. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics, using the identified objectives of computing, which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  17. A physicist's model of computation

    International Nuclear Information System (INIS)

    Fredkin, E.

    1991-01-01

    An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs
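
    Fredkin's observation that computation can be a reversible process is easy to demonstrate concretely. The following minimal Python sketch (our illustration, not Fredkin's own formalism) implements the Toffoli gate, a universal reversible logic gate: applying it twice restores the input, so no information, and in principle no heat, need be dissipated.

    def toffoli(a, b, c):
        # Controlled-controlled-NOT: flips the target bit c iff both controls are 1.
        return a, b, c ^ (a & b)

    # With c = 1 the gate computes NAND of a and b, so it is universal for Boolean
    # logic, and it is its own inverse, making the computation reversible.
    for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
        assert toffoli(*toffoli(*bits)) == bits   # two applications restore the input
        print(bits, "->", toffoli(*bits))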

  18. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science, and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Anticipatory Cognitive Systems: a Theoretical Model

    Science.gov (United States)

    Terenzi, Graziano

    This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.

  20. The theoretical aspects of UrQMD & AMPT models

    Energy Technology Data Exchange (ETDEWEB)

    Saini, Abhilasha, E-mail: kashvini.abhi@gmail.com [Research Scholar, Department of Physics, Suresh Gyan vihar University, Jaipur (India); Bhardwaj, Sudhir, E-mail: sudhir.hep@gmail.com [Assistant professor, Govt. College of Engineering & Technology, Bikaner (India)

    2016-05-06

    The field of high energy physics faces great challenges in devising theories and experiments to unlock the secrets of heavy ion collisions, and these are still far from completely solved. Many theoretical questions remain open; some arise from inherent causes such as the non-perturbative nature of QCD in the strong coupling limit, others from the multi-particle production and evolution during heavy ion collisions, which increase the complexity of the phenomena. To understand these phenomena, a variety of theories and ideas have been developed, usually implemented in the form of Monte Carlo codes. The UrQMD model and the AMPT model are discussed here in detail. These models are useful for describing nuclear collisions.

  1. Modeling goals and functions of control and safety systems - theoretical foundations and extensions of MFM

    International Nuclear Information System (INIS)

    Lind, M.

    2005-10-01

    Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research focussed on representing the goals and functions of process plants which generate, transform and distribute mass and energy. However, only limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant, but modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind, and tentative solutions were proposed but not investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling the goals and functions of control systems can be built from the concepts and theories of action developed by Von Wright, and to show how this foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations have been presented in detail elsewhere by the present author, without the particular focus on modeling control actions and MFM adopted here. (au)

  2. Modeling goals and functions of control and safety systems - theoretical foundations and extensions of MFM

    Energy Technology Data Exchange (ETDEWEB)

    Lind, M. [Oersted - DTU, Kgs. Lyngby (Denmark)]

    2005-10-01

    Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research focussed on representing the goals and functions of process plants which generate, transform and distribute mass and energy. However, only limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant, but modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind, and tentative solutions were proposed but not investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling the goals and functions of control systems can be built from the concepts and theories of action developed by Von Wright, and to show how this foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations have been presented in detail elsewhere by the present author, without the particular focus on modeling control actions and MFM adopted here. (au)

  3. A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems

    Science.gov (United States)

    2005-05-01

    A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems. Dissertation by Gary W. Kinney Jr., B.G.S., M.S., presented to The University of Texas at Austin, May 2005. Distribution Statement A: approved for public release, distribution unlimited.

  4. Modeling Organizational Design - Applying A Formalism Model From Theoretical Physics

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2008-06-01

    Modern organizations are exposed to diverse external environmental influences. Currently accepted concepts of organizational design take into account structure, its interaction with strategy, processes, people, etc. Organization design and planning aims to align these key organizational design variables. At the higher conceptual level, however, a completely satisfactory formulation for this alignment does not exist. We develop an approach originating from the application of concepts of theoretical physics to social systems. Under this approach, the allocation of organizational resources is analyzed in terms of social entropy, social free energy and social temperature. This allows us to formalize the dynamic relationship between organizational design variables. In this paper we relate this model to Galbraith's Star Model and we also suggest improvements in the procedure of the complex analytical method in organizational design.

  5. Measurements and computer modeling of fast ion emission from plasma accelerators of the rod plasma injector type

    International Nuclear Information System (INIS)

    Malinowski, Karol; Sadowski, Marek J; Skladnik-Sadowska, Elzbieta

    2014-01-01

    This paper reports on the results of experimental studies and computer simulations of the emission of fast ion streams from so-called rod plasma injectors (RPI). Various RPI facilities have been used at the National Centre for Nuclear Research (NCBJ) for basic plasma studies as well as for material engineering. In fact, the RPI facilities have been studied experimentally for many years, particularly at the Institute for Nuclear Sciences (now the NCBJ), and numerous experimental data have been collected. Unfortunately, the ion emission characteristics have so far not been explained theoretically in a satisfactory way. In this paper, in order to explain these characteristics, use was made of a single-particle model. Taking into account the stochastic character of the ion emission, we applied a Monte Carlo method. The performed computer simulations of a pinhole image and energy spectrum of deuterons emitted from RPI-IBIS, which were computed on the basis of the applied model, appeared to be in reasonable agreement with the experimental data. (paper)

  6. The neural mediators of kindness-based meditation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Jennifer Streiffer Mascaro

    2015-02-01

    Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.

  7. A Theoretical Model for the Prediction of Siphon Breaking Phenomenon

    International Nuclear Information System (INIS)

    Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon

    2014-01-01

    A siphon phenomenon, or siphoning, refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity; it has been used in a variety of real-life applications such as the toilet bowl and the greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing gas entrainment into the siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the model predicts well the fundamental features of the siphon breaking phenomenon and the undershooting height

  8. A Theoretical Model for the Prediction of Siphon Breaking Phenomenon

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    A siphon phenomenon, or siphoning, refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity; it has been used in a variety of real-life applications such as the toilet bowl and the greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing gas entrainment into the siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the model predicts well the fundamental features of the siphon breaking phenomenon and the undershooting height.
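
    The records above do not reproduce the model's equations, but the driving physics can be sketched from Bernoulli's equation. The Python fragment below (a frictionless idealization with made-up numbers, not the authors' model) estimates the discharge velocity and flow rate that a siphon breaker must interrupt:

    import math

    g = 9.81                     # gravitational acceleration [m/s^2]

    def siphon_velocity(delta_h):
        # Ideal Bernoulli estimate for an elevation drop delta_h [m]
        # between the pool free surface and the siphon outlet.
        return math.sqrt(2.0 * g * delta_h)

    delta_h = 4.0                # hypothetical pool-to-outlet drop [m]
    d_pipe = 0.1                 # hypothetical siphon line diameter [m]
    v = siphon_velocity(delta_h)
    q = v * math.pi * d_pipe ** 2 / 4.0
    print(f"v = {v:.2f} m/s, Q = {q * 1000:.1f} L/s")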

  9. Computer-aided modeling framework for efficient model development, analysis and identification

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio

    2011-01-01

    Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided...... methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task....... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow with the goal to increase the efficiency of the modeling process. The framework has two main...

  10. Transport at basin scales: 1. Theoretical framework

    Directory of Open Access Journals (Sweden)

    A. Rinaldo

    2006-01-01

    The paper describes the theoretical framework for a class of general continuous models of the hydrologic response, including both flow and transport of reactive solutes. The approach orders theoretical results that appeared in disparate fields into a coherent framework for both hydrologic flow and transport. In this paper we focus on the Lagrangian description of the carrier hydrologic runoff and of the processes embedding catchment-scale generation and transport of matter carried by runoff. The former defines travel time distributions, while the latter defines lifetime distributions, here thought of as contact times between mobile and immobile phases. Contact times are assumed to control mass transfer in a well-mixed approximation, appropriate in cases, as in basin-scale transport phenomena, where the characteristic size of the injection areas is much larger than that of heterogeneous features. As a result, we define general mass-response functions of catchments which extend geomorphologic theories of the hydrologic response to the transport of matter. A set of examples is provided to clarify the theoretical results towards a computational framework for generalized applications, described in a companion paper.
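
    As a schematic illustration of the travel time/lifetime formalism (our own toy example with assumed exponential distributions, not the paper's general forms), a catchment mass-response function can be built as the travel-time density damped by first-order mass exchange over the contact time:

    import numpy as np

    t = np.linspace(0.0, 200.0, 2001)       # time since injection [h]
    dt = t[1] - t[0]
    mean_tt = 20.0                          # assumed mean travel time [h]
    k = 0.02                                # assumed first-order exchange/decay rate [1/h]

    f_T = np.exp(-t / mean_tt) / mean_tt    # exponential travel-time pdf
    survival = np.exp(-k * t)               # mass fraction surviving a contact time t
    mrf = f_T * survival                    # mass-response function per unit input

    delivered = mrf.sum() * dt              # fraction of injected mass reaching the outlet
    print(f"delivered mass fraction ~ {delivered:.3f}")   # analytically 1/(1 + k*mean_tt)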

  11. Aperture Array Photonic Metamaterials: Theoretical approaches, numerical techniques and a novel application

    Science.gov (United States)

    Lansey, Eli

    Optical or photonic metamaterials that operate in the infrared and visible frequency regimes show tremendous promise for solving problems in renewable energy, infrared imaging, and telecommunications. However, many of the theoretical and simulation techniques used at lower frequencies are not applicable to this higher-frequency regime. Furthermore, the technological and financial limitations of photonic metamaterial fabrication increase the importance of reliable theoretical models and computational techniques for predicting the optical response of photonic metamaterials. This thesis focuses on aperture array metamaterials, that is, rectangular, circular, or other shaped cavities or holes embedded in, or penetrating through, a metal film. The research in the first portion of this dissertation reflects our interest in developing a fundamental, theoretical understanding of the behavior of light's interaction with these aperture arrays, specifically regarding enhanced optical transmission. We develop an approximate boundary condition for metals at optical frequencies, and a comprehensive, analytical explanation of the physics underlying this effect. These theoretical analyses are augmented by computational techniques in the second portion of this thesis, used both for verification of the theoretical work and for solving more complicated structures. Finally, the last portion of this thesis discusses the results from designing, fabricating and characterizing a light-splitting metamaterial.

  12. Elements of matrix modeling and computing with Matlab

    CERN Document Server

    White, Robert E

    2006-01-01

    As discrete models and computing have become more common, there is a need to study matrix computation and numerical linear algebra. Encompassing a diverse mathematical core, Elements of Matrix Modeling and Computing with MATLAB examines a variety of applications and their modeling processes, showing you how to develop matrix models and solve algebraic systems. Emphasizing practical skills, it creates a bridge from problems with two and three variables to more realistic problems that have additional variables. Elements of Matrix Modeling and Computing with MATLAB focuses on seven basic applicat

  13. Vehicle - Bridge interaction, comparison of two computing models

    Science.gov (United States)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants: one represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents the bridge as a lumped mass model with 1 degree of freedom. The mid-span bridge dynamic deflections are calculated for both computing models. The results are mutually compared and quantitatively evaluated.
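
    A minimal sketch of the second (lumped mass, single-degree-of-freedom) variant is given below, using the standard first-mode approximation of a simply supported span under a constant moving force; all parameters are illustrative, not the paper's:

    import numpy as np
    from scipy.integrate import solve_ivp

    L, m, f1, zeta = 30.0, 1.5e4, 3.0, 0.02   # span [m], modal mass [kg], 1st frequency [Hz], damping ratio
    P, v = 1.0e5, 20.0                         # axle force [N], vehicle speed [m/s]
    w1 = 2.0 * np.pi * f1
    T = L / v                                  # time the force spends on the span

    def rhs(t, y):
        q, qd = y
        # Modal force of a point load at position v*t on a sine mode shape.
        force = P * np.sin(np.pi * v * t / L) if t <= T else 0.0
        return [qd, force / m - 2.0 * zeta * w1 * qd - w1 ** 2 * q]

    sol = solve_ivp(rhs, [0.0, 2.0 * T], [0.0, 0.0], max_step=1e-3)
    print(f"peak mid-span modal deflection ~ {1e3 * np.max(np.abs(sol.y[0])):.2f} mm")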

  14. Formulation and computational aspects of plasticity and damage models with application to quasi-brittle materials

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.; Schreyer, H.L. [New Mexico Engineering Research Institute, Albuquerque, NM (United States)

    1995-09-01

    The response of underground structures and transportation facilities under various external loadings and environments is critical for human safety as well as environmental protection. Since quasi-brittle materials such as concrete and rock are commonly used for underground construction, the constitutive modeling of these engineering materials, including post-limit behaviors, is one of the most important aspects in safety assessment. From experimental, theoretical, and computational points of view, this report considers the constitutive modeling of quasi-brittle materials in general and concentrates on concrete in particular. Based on the internal variable theory of thermodynamics, the general formulations of plasticity and damage models are given to simulate two distinct modes of microstructural changes, inelastic flow and degradation of material strength and stiffness, that identify the phenomenological nonlinear behaviors of quasi-brittle materials. The computational aspects of plasticity and damage models are explored with respect to their effects on structural analyses. Specific constitutive models are then developed in a systematic manner according to the degree of completeness. A comprehensive literature survey is made to provide the up-to-date information on prediction of structural failures, which can serve as a reference for future research.

  15. Large scale computing in theoretical physics: Example QCD

    International Nuclear Information System (INIS)

    Schilling, K.

    1986-01-01

    The limitations of the classical mathematical analysis of Newton and Leibniz appear to be more and more overcome by the power of modern computers. Large scale computing techniques - which resemble closely the methods used in simulations within statistical mechanics - allow to treat nonlinear systems with many degrees of freedom such as field theories in nonperturbative situations, where analytical methods do fail. The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean world. The talk will attempt to describe both the computer aspects and the present state of the art of spectrum calculations within lattice QCD. (orig.)
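
    The statistical-mechanics flavour of these lattice simulations can be conveyed with a much smaller cousin of lattice QCD: a Metropolis Monte Carlo sweep over a 2D Ising model (a standard textbook illustration, not a QCD code):

    import numpy as np

    rng = np.random.default_rng(0)
    N, beta, sweeps = 32, 0.44, 200                   # lattice size, inverse temperature, sweeps
    spins = rng.choice([-1, 1], size=(N, N))

    for _ in range(sweeps):
        for _ in range(N * N):
            i, j = rng.integers(N, size=2)
            nb = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j]
                  + spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
            dE = 2.0 * spins[i, j] * nb               # energy cost of flipping spin (i, j)
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1                     # Metropolis accept/reject
    print(f"|magnetization| per site = {abs(spins.mean()):.3f}")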

  16. Theoretical and computational studies of disorder-induced scattering and nonlinear optical interactions in slow-light photonic crystal waveguides

    Science.gov (United States)

    Mann, Nishan Singh

    Photonic crystal waveguides (PCWs) are nano-scale devices offering an exciting platform for exploring and exploiting enhanced linear and nonlinear light-matter interactions, aided in part by slowing down the group velocity (vg) of on-chip photons. However, with potential applications in telecommunications, bio-sensing and quantum computing, the road to commercialization and practical devices is hindered by our limited understanding of the influence of structural disorder on linear and nonlinear light propagation. This thesis refines and develops state-of-the-art mathematical and numerical models for understanding the important role of disorder-related optical phenomena in PCWs in the linear and nonlinear optical regimes. The importance of Bloch modes is demonstrated by computing the power loss caused by disorder-induced scattering for various dispersion-engineered PCWs. The theoretical results are found to be in very good agreement with related experiments, and it is shown how dispersion-engineered designs can minimize the Bloch fields around spatial imperfections, resulting in a radical departure from the usually assumed vg^-2 scaling of backscattering losses. We also conduct a systematic investigation of the influence of the intra-hole correlation length, a parameter characterizing disorder, on backscattering losses, and find the loss behaviour to be qualitatively dependent on waveguide design and frequency. We then model disorder-induced resonance shifts to compute the ensemble-averaged disordered density of states, accounting for important local field effects which are crucial in achieving good qualitative agreement with experiments. Lastly, motivated by emerging experiments examining enhanced nonlinear interactions, we develop an intuitive time-dependent coupled mode formalism to derive propagation equations describing nonlinear pulse propagation in the presence of disorder-induced multiple scattering. The framework establishes a natural length scale for each physical

  17. Theoretical modeling of yields for proton-induced reactions on natural and enriched molybdenum targets

    Energy Technology Data Exchange (ETDEWEB)

    Celler, A; Hou, X [University of British Columbia, Vancouver, BC (Canada)]; Benard, F; Ruth, T [BC Cancer Agency, Vancouver, BC (Canada)], E-mail: aceller@physics.ubc.ca, E-mail: xinchi@phas.ubc.ca, E-mail: fbenard@bccrc.ca, E-mail: truth@triumf.ca

    2011-09-07

    Recent acute shortage of medical radioisotopes prompted investigations into alternative methods of production and the use of a cyclotron and the ¹⁰⁰Mo(p,2n)(99m)Tc reaction has been considered. In this context, the production yields of (99m)Tc and various other radioactive and stable isotopes which will be created in the process have to be investigated, as these may affect the diagnostic outcome and radiation dosimetry in human studies. Reaction conditions (beam and target characteristics, and irradiation and cooling times) need to be optimized in order to maximize the amount of (99m)Tc and minimize impurities. Although ultimately careful experimental verification of these conditions must be performed, theoretical calculations can provide the initial guidance allowing for extensive investigations at little cost. We report the results of theoretically determined reaction yields for (99m)Tc and other radioactive isotopes created when natural and enriched molybdenum targets are irradiated by protons. The cross-section calculations were performed using a computer program EMPIRE for the proton energy range 6-30 MeV. A computer graphical user interface for automatic calculation of production yields taking into account various reaction channels leading to the same final product has been created. The proposed approach allows us to theoretically estimate the amount of (99m)Tc and its ratio relative to (99g)Tc and other radioisotopes which must be considered reaction contaminants, potentially contributing to additional patient dose in diagnostic studies.

  18. Theoretical modeling of yields for proton-induced reactions on natural and enriched molybdenum targets.

    Science.gov (United States)

    Celler, A; Hou, X; Bénard, F; Ruth, T

    2011-09-07

    Recent acute shortage of medical radioisotopes prompted investigations into alternative methods of production and the use of a cyclotron and ¹⁰⁰Mo(p,2n)(99m)Tc reaction has been considered. In this context, the production yields of (99m)Tc and various other radioactive and stable isotopes which will be created in the process have to be investigated, as these may affect the diagnostic outcome and radiation dosimetry in human studies. Reaction conditions (beam and target characteristics, and irradiation and cooling times) need to be optimized in order to maximize the amount of (99m)Tc and minimize impurities. Although ultimately careful experimental verification of these conditions must be performed, theoretical calculations can provide the initial guidance allowing for extensive investigations at little cost. We report the results of theoretically determined reaction yields for (99m)Tc and other radioactive isotopes created when natural and enriched molybdenum targets are irradiated by protons. The cross-section calculations were performed using a computer program EMPIRE for the proton energy range 6-30 MeV. A computer graphical user interface for automatic calculation of production yields taking into account various reaction channels leading to the same final product has been created. The proposed approach allows us to theoretically estimate the amount of (99m)Tc and its ratio relative to (99g)Tc and other radioisotopes which must be considered reaction contaminants, potentially contributing to additional patient dose in diagnostic studies.
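
    The quantity behind such yield estimates is the thick-target production integral. The sketch below (our schematic; the authors used EMPIRE cross sections and a dedicated GUI, and every number here is a placeholder) integrates an assumed excitation function over the proton energy loss in the target:

    import numpy as np

    NA = 6.022e23                        # Avogadro's number [1/mol]
    A = 100.0                            # approximate 100Mo molar mass [g/mol]

    E = np.linspace(10.0, 24.0, 500)     # proton energy inside the target [MeV]
    dE = E[1] - E[0]
    sigma = 0.3e-24 * np.exp(-((E - 16.0) / 4.0) ** 2)   # placeholder sigma(E) [cm^2]
    stopping = 30.0 * (E / 15.0) ** -0.8                 # placeholder mass stopping power [MeV cm^2/g]

    # Atoms produced per incident proton slowing from 24 to 10 MeV:
    atoms_per_proton = (NA / A) * np.sum(sigma / stopping) * dE
    protons_per_s = 100e-6 / 1.602e-19   # a hypothetical 100 uA beam
    print(f"production rate ~ {atoms_per_proton * protons_per_s:.2e} atoms/s")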

  19. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  20. Computational and Simulation Modeling of Political Attitudes: The 'Tiger' Area of Political Culture Research

    Directory of Open Access Journals (Sweden)

    Voinea, Camelia Florela

    2016-01-01

    In its almost century-long history, political attitudes modeling research has accumulated a critical mass of theory and method. Its characteristics and particularities have often suggested that the political attitude approach to political persuasion modeling reveals a strong theoretical autonomy of concept which entitles it to become a separate discipline of research. Though this has not actually happened, political attitudes modeling research has remained the most challenging area - the "tiger" - of political culture modeling research. This paper reviews the research literature on the conceptual, computational and simulation modeling of political attitudes developed from the beginning of the 20th century until the present. Several computational and simulation modeling paradigms have provided support to political attitudes modeling research. These paradigms, and the shifts from one to another, are briefly presented for a period of almost one century. The dominant paradigmatic views are those inspired by Newtonian mechanics, and those based on the principle of methodological individualism and the emergence of macro phenomena from individual interactions at the micro level of a society. This period is divided into eight ages covering the history of ideas in a wide range of political domains, going from political attitudes to polity modeling. Internal and external pressures for paradigmatic change are briefly explained.

  1. Imitative Modeling as a Theoretical Base for Instructing Language-Disordered Children

    Science.gov (United States)

    Courtright, John A.; Courtright, Illene C.

    1976-01-01

    A modification of A. Bandura's social learning theory (imitative modeling) was employed as a theoretical base for language instruction with eight language disordered children (5 to 10 years old). (Author/SBH)

  2. Creation of 'Ukrytie' objects computer model

    International Nuclear Information System (INIS)

    Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.

    1999-01-01

    A partial computer model of the 'Ukrytie' object was created with the use of geoinformation technologies. The computer model makes it possible to provide information support for the works related to stabilization of the 'Ukrytie' object and its conversion into an ecologically safe system, and to analyze, forecast and control the processes occurring in the 'Ukrytie' object. Elements and structures of the 'Ukrytie' object were designed and input into the model

  3. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized, both by questioning the correctness of the underlying mathematical model with respect to the process it attempts to model and by verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics that have a direct bearing on the model input process are reviewed in this paper, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present
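
    One standard way to realize such a multivariate input structure (a generic sketch in the spirit of the topic, not the paper's specific method) is to induce the desired dependence through a Gaussian copula: draw correlated normals, push them through the normal CDF to get correlated uniforms, then through each input's own inverse CDF:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    corr = np.array([[1.0, 0.7],
                     [0.7, 1.0]])                          # assumed dependence between two inputs

    z = rng.multivariate_normal([0.0, 0.0], corr, size=10_000)
    u = stats.norm.cdf(z)                                  # correlated uniforms on (0, 1)
    flow = stats.lognorm(s=0.5, scale=100.0).ppf(u[:, 0])  # hypothetical input 1: flow rate
    temp = stats.uniform(20.0, 15.0).ppf(u[:, 1])          # hypothetical input 2: temperature
    print("sample Spearman rho:", round(stats.spearmanr(flow, temp)[0], 3))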

  4. Quantitative Myocardial Perfusion with Dynamic Contrast-Enhanced Imaging in MRI and CT: Theoretical Models and Current Implementation

    Directory of Open Access Journals (Sweden)

    G. J. Pelgrim

    2016-01-01

    Technological advances in magnetic resonance imaging (MRI) and computed tomography (CT), including higher spatial and temporal resolution, have made absolute myocardial perfusion quantification, previously only achievable with positron emission tomography (PET), a realistic prospect. This could facilitate integration of myocardial perfusion biomarkers into the current workup for coronary artery disease (CAD), as MRI and CT systems are more widely available than PET scanners. Cardiac PET scanning remains expensive and is restricted by the requirement of a nearby cyclotron. Clinical evidence is needed to demonstrate that MRI and CT have similar accuracy for myocardial perfusion quantification as PET. However, lack of standardization of acquisition protocols and tracer kinetic model selection complicates comparison between different studies and modalities. The aim of this overview is to provide insight into the different tracer kinetic models for quantitative myocardial perfusion analysis and to address typical implementation issues in MRI and CT. We compare different models based on their theoretical derivations and present the respective consequences for MRI and CT acquisition parameters, highlighting the interplay between tracer kinetic modeling and acquisition settings.
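
    The simplest member of the model family compared in such reviews is the one-tissue-compartment model, dCt/dt = K1*Ca(t) - k2*Ct(t). A minimal forward-Euler sketch (hypothetical arterial input function and rate constants, purely illustrative):

    import numpy as np

    dt = 0.1
    t = np.arange(0.0, 60.0, dt)                       # time [s]
    ca = 5.0 * (t / 8.0) ** 3 * np.exp(-t / 8.0)       # gamma-variate arterial input [mM]
    K1, k2 = 0.9 / 60.0, 0.4 / 60.0                    # uptake and washout rate constants [1/s]

    ct = np.zeros_like(t)
    for i in range(1, t.size):                         # forward Euler: dCt/dt = K1*Ca - k2*Ct
        ct[i] = ct[i - 1] + dt * (K1 * ca[i - 1] - k2 * ct[i - 1])
    print(f"peak tissue concentration ~ {ct.max():.3f} mM at t = {t[ct.argmax()]:.0f} s")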

  5. Recent evolution of theoretical models in inner shell photoionization

    International Nuclear Information System (INIS)

    Combet Farnoux, F.

    1978-01-01

    This paper is a brief review of various atomic theoretical models recently developed to calculate photoionization cross sections in the low energy range (from the far ultraviolet to the soft X-ray region). For both the inner and outer shells concerned, we emphasize the necessity of going beyond independent particle models by introducing correlation effects in both initial and final states. The basic physical ideas of such elaborate models as the Random Phase Approximation with exchange, Many Body Perturbation Theory and R-matrix theory are outlined and summarized. As examples, the results of some calculations are shown and compared with experiment

  6. Computational modelling in fluid mechanics

    International Nuclear Information System (INIS)

    Hauguel, A.

    1985-01-01

    The modelling of most environmental and industrial flow problems gives rise to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented is now a tool complementary to experimental facilities for carrying out studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others

  7. Theoretical Basis for the CE-QUAL-W2 River Basin Model

    National Research Council Canada - National Science Library

    Wells, Scott

    2000-01-01

    This report describes the theoretical development for CE-QUAL-W2, Version 3, that will allow the application of the model to entire water basins including multiple reservoirs, steeply sloping rivers, and estuaries...

  8. A theoretical and experimental EPFM study of cracks emanating from a hole

    International Nuclear Information System (INIS)

    Broekhoven, M.J.G.

    1978-01-01

    Results are presented of a combined theoretical and experimental study on the onset of crack extension in the EPFM regime for through cracks emanating from a circular hole in a plate under tensile load, with emphasis on the applicability of the J-concept for predicting such extensions. This configuration was selected both because of its general importance and as a first approximation of a nozzle-to-vessel geometry. The theoretical investigations consisted of elastic-plastic finite element computations both for 3-point bend specimens and for the plate geometry. J values were calculated using the contour-integral definition of J and by the method of virtual crack extension. The applicability of simplified analytical approximations for J was also investigated. COD data were derived from the finite element computed displacements. The experimental investigations included J_Ic tests on a series of bend specimens and crack extension tests on a series of cracked perforated plate models. For practical reasons aluminium 2024-T 351 was selected as a suitable model material within the aims of the study. The onset of crack extension was determined by the heat-tinting procedure throughout the experiments, in some cases supplemented by fractographic investigations. The various theoretical solutions and experimental observations were compared and a number of conclusions were drawn. (author)

  9. Optimization of filtration for reduction of lung dose from Rn decay products: Part I--Theoretical

    International Nuclear Information System (INIS)

    Curling, C.A.; Rudnick, S.N.; Ryan, P.B.; Moeller, D.W.

    1990-01-01

    A theoretical model was developed for the optimization of filter characteristics to minimize the dose from the inhalation of Rn decay products. Modified forms of the Jacobi-Porstendorfer room model and the Jacobi-Eisfeld lung dose model were chosen for use in the mathematical simulation. The optimized filter parameters were the thickness, solidity, and fiber diameter. For purposes of the calculations, the room dimensions, air exchange rate, particle-size distribution and concentration, and the Rn concentration were specified. The resulting computer-aided optimal design was a thin filter (the minimum thickness used in the computer model was 0.1 mm) having low solidity (the minimum solidity used was 0.5%) and large-diameter fibers (the maximum diameter used was 100 microns). The simulation implies that a significant reduction in the dose rate can be achieved using a well-designed recirculating filter system. The theoretical model, using the assumption of ideal mixing, predicts an 80% reduction in the dose rate, although inherent in this assumption is the movement of 230 room volumes per hour through the fan.

  10. A utility-theoretic model for QALYs and willingness to pay.

    Science.gov (United States)

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALYs) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY model, an additively separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY weights depend solely on aspects of health-related quality of life, wealth-standardized QALY weights might vary with the wealth level in the presented extension of the original QALY model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY gain as well as on the disease burden when health influences the marginal utility of wealth. Further research seems indicated on this structural aspect of preferences over health and wealth and to quantify its impact. Copyright 2002 John Wiley & Sons, Ltd.
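
    In our notation (not necessarily the paper's), the additively separable representation mentioned above can be written as

    \[
      U(h_1, \dots, h_T;\, c_1, \dots, c_T) \;=\; \sum_{t=1}^{T} \delta^{t}\, q(h_t)\, u(c_t), \qquad 0 < \delta \le 1,
    \]

    where h_t is the health state and c_t the wealth in period t, q(.) the QALY weight, and u(.) the utility of wealth. If preferences over health and wealth do not factor multiplicatively in this way, the weight standing in for q(h_t) inherits a dependence on c_t, which is essentially the inconsistency described above.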

  11. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increased time needed to analyze them are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
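
    The embarrassingly parallel character of such model building is easy to sketch. The fragment below (a generic stand-in using scikit-learn and joblib on synthetic data, not the authors' ligand-based pipeline) fans candidate models out over all available cores, exactly as one would over the virtual cores of a cloud instance:

    from joblib import Parallel, delayed
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=5000, n_features=50, random_state=0)

    def fit_and_score(n_trees):
        # Train and cross-validate one candidate model.
        model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        return cross_val_score(model, X, y, cv=3).mean()

    sizes = (50, 100, 200)
    scores = Parallel(n_jobs=-1)(delayed(fit_and_score)(n) for n in sizes)
    print({n: round(s, 3) for n, s in zip(sizes, scores)})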

  12. Mental illness from the perspective of theoretical neuroscience.

    Science.gov (United States)

    Thagard, Paul

    2008-01-01

    Theoretical neuroscience, which characterizes neural mechanisms using mathematical and computational models, is highly relevant to central problems in the philosophy of psychiatry. These models can help to solve the explanation problem of causally connecting neural processes with the behaviors and experiences found in mental illnesses. Such explanations will also be useful for generating better classifications and treatments of psychiatric disorders. The result should help to eliminate concerns that mental illnesses such as depression and schizophrenia are not objectively real. A philosophical approach to mental illness based on neuroscience need not neglect the inherently social and historical nature of mental phenomena.

  13. Application of a two fluid theoretical plasma transport model on current tokamak reactor designs

    International Nuclear Information System (INIS)

    Ibrahim, E.; Fowler, T.K.

    1987-06-01

    In this work, the application of new theoretical transport models to TIBER II design calculations is described, and the results are compared with recent experimental data from large tokamaks (TFTR, JET). Tang's method is extended to a two-fluid model treating ions and electrons separately. This allows for different ion and electron temperatures, as in recent low-density experiments in TFTR and in the TIBER II design itself. The discussion is divided into two parts: (1) development of the theoretical transport model, and (2) calibration against experiments and application to TIBER II

  14. Effects of increased small-scale biomass combustion on local air quality - A theoretical dispersion modelling study

    International Nuclear Information System (INIS)

    Boman, C.

    1997-01-01

    The decided phasing out of nuclear power and the goal of reducing CO2 emissions from fossil fuels imply a substantial estimated increase in the use of biomass fuels for energy production. A significant shift from small-scale heating generated by electricity or fuel oil to biomass fuels is therefore desirable. If a drastic deterioration of the local air quality is to be avoided, a reduction of today's emission limits is necessary. The objectives of this report were therefore to describe the use of biomass fuels and small-scale pellet fuel combustion, to make a theoretical study of the effects of increased pellet heating on the air quality in a residential area, and to discuss necessary emission limits for small biomass fuel plants. The general description is based on literature studies. In the theoretical study, several dispersion model calculations were performed using the computer program Dispersion 1.1.0. The contents of tar and total hydrocarbons (THC) in the air were calculated for different scenarios of conversion from electricity to pellets and different pellet plant performance. A sensitivity analysis was performed with additional variables and dispersion calculations according to an underlying statistical experimental design. The modeling and design computer program MODDE was used to facilitate the design, evaluation and illustration of the calculated results. The results show that a substantial increase in the use of small-scale pellet heating with the worst calculated plant performance would lead to a drastic increase in the content of hydrocarbons in the air, whereas with the best available performance the content increases only marginally. Conversion from electricity to pellets, plant performance and time of year were the most influential variables. Conversion from wood to pellets also showed a significant effect, despite the small number of wood-heated houses within the studied area. If a significant deterioration of the air quality is to be avoided
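
    The report's calculations were done with Dispersion 1.1.0; as a generic illustration of what such a dispersion calculation computes, the standard Gaussian plume formula gives the ground-level centreline concentration downwind of a single chimney (all numbers hypothetical):

    import numpy as np

    Q = 0.5e-3                    # assumed THC emission rate of one boiler [g/s]
    u = 2.0                       # wind speed [m/s]
    H = 8.0                       # effective chimney height [m]
    x = 100.0                     # downwind distance [m]
    sy, sz = 0.08 * x, 0.06 * x   # rough neutral-stability dispersion parameters [m]

    # Ground-level, plume-centreline concentration with total ground reflection.
    c = Q / (np.pi * u * sy * sz) * np.exp(-H ** 2 / (2.0 * sz ** 2))
    print(f"concentration ~ {c * 1e6:.2f} ug/m^3")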

  15. Redesigning Orientation in an Intensive Care Unit Using 2 Theoretical Models.

    Science.gov (United States)

    Kozub, Elizabeth; Hibanada-Laserna, Maribel; Harget, Gwen; Ecoff, Laurie

    2015-01-01

    To accommodate a higher demand for critical care nurses, an orientation program in a surgical intensive care unit was revised and streamlined. Two theoretical models served as a foundation for the revision and resulted in clear clinical benchmarks for orientation progress evaluation. The purpose of the project was to integrate theoretical frameworks into practice to improve the unit orientation program. Performance improvement methods served as a framework for the revision, and outcomes were measured before and after implementation. The revised orientation program increased 1- and 2-year nurse retention and decreased turnover. Critical care knowledge increased after orientation for both the preintervention and postintervention groups. Incorporating a theoretical basis for orientation has been shown to be successful in increasing the number of nurses completing orientation and improving retention, turnover rates, and knowledge gained.

  16. Computational cognitive modeling of the temporal dynamics of fatigue from sleep loss.

    Science.gov (United States)

    Walsh, Matthew M; Gunzelmann, Glenn; Van Dongen, Hans P A

    2017-12-01

    Computational models have become common tools in psychology. They provide quantitative instantiations of theories that seek to explain the functioning of the human mind. In this paper, we focus on identifying deep theoretical similarities between two very different models. Both models are concerned with how fatigue from sleep loss impacts cognitive processing. The first is based on the diffusion model and posits that fatigue decreases the drift rate of the diffusion process. The second is based on the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture and posits that fatigue decreases the utility of candidate actions leading to microlapses in cognitive processing. A biomathematical model of fatigue is used to control drift rate in the first account and utility in the second. We investigated the predicted response time distributions of these two integrated computational cognitive models for performance on a psychomotor vigilance test under conditions of total sleep deprivation, simulated shift work, and sustained sleep restriction. The models generated equivalent predictions of response time distributions with excellent goodness-of-fit to the human data. More importantly, although the accounts involve different modeling approaches and levels of abstraction, they represent the effects of fatigue in a functionally equivalent way: in both, fatigue decreases the signal-to-noise ratio in decision processes and decreases response inhibition. This convergence suggests that sleep loss impairs psychomotor vigilance performance through degradation of the quality of cognitive processing, which provides a foundation for systematic investigation of the effects of sleep loss on other aspects of cognition. Our findings illustrate the value of treating different modeling formalisms as vehicles for discovery.
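
    The first account is straightforward to simulate. The sketch below (illustrative parameters only, not those fitted in the paper) draws response times from a two-boundary diffusion process and shows how lowering the drift rate, the paper's fatigue mechanism, slows responses and fattens the right tail:

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_rt(v, a=1.0, z=0.5, s=1.0, dt=1e-3, t_nd=0.3, n=500):
        # Euler-Maruyama walk between boundaries 0 and a, starting at z*a;
        # v is the drift rate, s the diffusion coefficient, t_nd non-decision time.
        rts = np.empty(n)
        for k in range(n):
            x, t = z * a, 0.0
            while 0.0 < x < a:
                x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts[k] = t + t_nd
        return rts

    for name, v in (("rested", 2.0), ("fatigued", 0.8)):   # fatigue lowers drift
        rt = simulate_rt(v)
        print(f"{name}: median RT = {np.median(rt):.3f} s, slow responses (>0.5 s) = {(rt > 0.5).mean():.0%}")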

  17. Real-life applications with membrane computing

    CERN Document Server

    Zhang, Gexiang; Gheorghe, Marian

    2017-01-01

    This book thoroughly investigates the underlying theoretical basis of membrane computing models and reveals their latest applications. To date there have been no illustrative case studies or complex real-life applications that capitalize on the full potential of the sophisticated membrane systems computational apparatus, a gap that this book remedies. By studying various complex applications – including engineering optimization, power systems fault diagnosis, mobile robot controller design, and complex biological systems involving data modeling and process interactions – the book also extends the capabilities of membrane systems models with features such as formal verification techniques, evolutionary approaches, and fuzzy reasoning methods. As such, the book offers a comprehensive and up-to-date guide for all researchers, PhD and undergraduate students in the fields of computer science, engineering and the bio-sciences who are interested in the applications of natural computing models.
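
    To give a flavour of the membrane systems computational apparatus the book builds on, here is a toy single-membrane P system step in Python: a multiset of objects rewritten by rules applied greedily within each step (a drastic simplification; real P systems are nested, nondeterministic and maximally parallel):

    from collections import Counter

    rules = [(Counter("aa"), Counter("b")),    # a a -> b
             (Counter("b"), Counter("c"))]     # b   -> c

    def step(state):
        produced = Counter()
        for lhs, rhs in rules:                 # apply each rule as often as possible
            while all(state[s] >= n for s, n in lhs.items()):
                state -= lhs
                produced += rhs                # products only become available next step
        return state + produced

    state = Counter("aaaaab")
    for i in range(4):
        state = step(state)
        print(f"step {i + 1}:", dict(state))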

  18. An Emerging Theoretical Model of Music Therapy Student Development.

    Science.gov (United States)

    Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E

    2017-07-01

    Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  19. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation ... personal computers (PC) and low cost software packages and tools. They can serve as useful learning experiences through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  20. Is BAMM Flawed? Theoretical and Practical Concerns in the Analysis of Multi-Rate Diversification Models.

    Science.gov (United States)

    Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan

    2017-07-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for

  1. Theoretical modelling of semiconductor surfaces microscopic studies of electrons and photons

    CERN Document Server

    Srivastava, G P

    1999-01-01

    The state-of-the-art theoretical studies of ground state properties, electronic states and atomic vibrations for bulk semiconductors and their surfaces by the application of the pseudopotential method are discussed. Studies of bulk and surface phonon modes have been extended by the application of the phenomenological bond charge model. The coverage of the material, especially of the rapidly growing and technologically important topics of surface reconstruction and chemisorption, is up-to-date and beyond what is currently available in book form. Although theoretical in nature, the book provides

  2. Determination of cognitive development: postnonclassical theoretical model

    Directory of Open Access Journals (Sweden)

    Irina N. Pogozhina

    2015-09-01

    Full Text Available The aim of this research is to develop a postnonclassical cognitive processes content determination model in which mental processes are considered as open self-developing, self-organizing systems. Three types of systems (dynamic, statistical, developing) were analysed and compared on the basis of the description of the external and internal characteristics of causation, the types of causal chains (dependent, independent) and their interactions, as well as the nature of the relationship between the elements of the system (hard, probabilistic, mixed). Mechanisms of open non-equilibrium nonlinear (dissipative) systems and four conditions for the emergence of dissipative structures are described. Determination models of mental and behaviour formation and development that were developed under various theoretical approaches (associationism, behaviorism, gestaltism, Piaget's psychology of intelligence, Vygotsky's cultural-historical approach, the activity approach and others) are mapped onto each other as models that describe the behaviour of the three system types mentioned above. The development models of the mental sphere are shown to differ by the following criteria: (1) the number of determinants allocated; (2) the presence or absence of the system's own activity, which determines whether the model selects not only external but also internal determinants; (3) the types of causal chains (dependent, independent, blended); (4) the types of relationships between the causal chains, which ultimately determine whether the subsequent system determination is decisive (a hard dynamic pattern) or stochastic (a statistical regularity). The continuity of postnonclassical, classical and non-classical models of mental development determination is described, characterizing a process of gradual refinement, increasing complexity, and «absorption» of earlier models of mental determination by the later ones. The human mind can be regarded as the functioning of an open, developing, non-equilibrium nonlinear (dissipative) system. The mental sphere is

  3. Theoretical calculations of physico-chemical and spectroscopic properties of bioinorganic systems: current limits and perspectives.

    Science.gov (United States)

    Rokob, Tibor András; Srnec, Martin; Rulíšek, Lubomír

    2012-05-21

    In the last decade, we have witnessed substantial progress in the development of quantum chemical methodologies. Simultaneously, robust solvation models and various combined quantum and molecular mechanical (QM/MM) approaches have become an integral part of quantum chemical programs. Along with the steady growth of computer power and, more importantly, the dramatic increase of the computer performance to price ratio, this has led to a situation where computational chemistry, when exercised with the proper amount of diligence and expertise, reproduces, predicts, and complements the experimental data. In this perspective, we review some of the latest achievements in the field of theoretical (quantum) bioinorganic chemistry, concentrating mostly on accurate calculations of the spectroscopic and physico-chemical properties of open-shell bioinorganic systems by wave-function (ab initio) and DFT methods. In our opinion, the one-to-one mapping between the calculated properties and individual molecular structures represents a major advantage of quantum chemical modelling since this type of information is very difficult to obtain experimentally. Once (and only once) the physico-chemical, thermodynamic and spectroscopic properties of complex bioinorganic systems are quantitatively reproduced by theoretical calculations may we consider the outcome of theoretical modelling, such as reaction profiles and the various decompositions of the calculated parameters into individual spatial or physical contributions, to be reliable. In an ideal situation, agreement between theory and experiment may imply that the practical problem at hand, such as the reaction mechanism of the studied metalloprotein, can be considered as essentially solved.

  4. Biomolecular electrostatics—I want your solvation (model)

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P

    2012-01-01

    We review the mathematical and computational foundations for implicit-solvent models in theoretical chemistry and molecular biophysics. These models are valuable theoretical tools for studying the influence of a solvent, often water or an aqueous electrolyte, on a molecular solute such as a protein. Detailed chemical and physical aspects of implicit-solvent models have been addressed in numerous exhaustive reviews, as have numerical algorithms for simulating the most popular models. This work highlights several important conceptual developments, focusing on selected works that spotlight the need for research at the intersections between chemical, biological, mathematical, and computational physics. To introduce the field to computational scientists, we begin by describing the basic theoretical ideas of implicit-solvent models and numerical implementations. We then address practical and philosophical challenges in parameterization, and major advances that speed up calculations (covering continuum theories based on Poisson as well as faster approximate theories such as generalized Born). We briefly describe the main shortcomings of existing models, and survey promising developments that deliver improved realism in a computationally tractable way, i.e. without increasing simulation time significantly. The review concludes with a discussion of ongoing modeling challenges and relevant trends in high-performance computing and computational science. (topical review)

  5. Computational multiscale modeling of intergranular cracking

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2011-01-01

    A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach is demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.

  6. Quantum Vertex Model for Reversible Classical Computing

    Science.gov (United States)

    Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng

    We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic with one direction corresponding to computational time, and with transverse boundaries storing the computation's input and output. The model displays no finite temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.
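
    A toy illustration of the relaxation idea described above: Metropolis-style thermal annealing toward the ground state of a small spin system. The Ising-like energy function and couplings below are generic placeholders, not the gate-constraint energies of the paper's vertex model:

        # Toy thermal annealing toward a ground state. The energy is a generic
        # Ising-like form; the paper's vertex model instead encodes logic-gate
        # constraints in its interactions.
        import math, random

        def energy(spins, couplings):
            return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

        def anneal(n, couplings, steps=20000, t0=2.0, t1=0.01):
            spins = [random.choice([-1, 1]) for _ in range(n)]
            e = energy(spins, couplings)
            for s in range(steps):
                t = t0 * (t1 / t0) ** (s / steps)   # geometric cooling schedule
                i = random.randrange(n)
                spins[i] *= -1                      # propose a single spin flip
                de = energy(spins, couplings) - e
                if de <= 0 or random.random() < math.exp(-de / t):
                    e += de                         # accept
                else:
                    spins[i] *= -1                  # reject, restore spin
            return spins, e

        couplings = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0}  # assumed toy instance
        print(anneal(4, couplings))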

  7. Fluidelastic instability in a flexible Weir: A theoretical model

    International Nuclear Information System (INIS)

    Aita, S.; Gibert, R.J.

    1986-01-01

    A new type of fluidelastic instability was discovered during the hot tests of the Superphenix LMFBR. This instability is due to the fluid discharge over a flexible weir shell which separates two fluid sheets (the feeding and restitution collectors). An analytical nonlinear model was realised. The flow and force sources at the top of the collectors are described and projected on the modal basis of the system formed by the collectors and the weir shell. Simplified formulas were extracted, allowing a practical prediction of the stability. More generally, the complete model can be used to estimate the vibratory level when a steady state is reached by the effect of nonlinearities. Computer calculations for such a model are made with the OSCAR code, part of the CASTEM 2000 finite element computer system. (author)

  8. A Game-Theoretic Model of Grounding for Referential Communication Tasks

    Science.gov (United States)

    Thompson, William

    2009-01-01

    Conversational grounding theory proposes that language use is a form of rational joint action, by which dialog participants systematically and collaboratively add to their common ground of shared knowledge and beliefs. Following recent work applying "game theory" to pragmatics, this thesis develops a game-theoretic model of grounding that…

  9. Multiple condensation induced water hammer events, experiments and theoretical investigations

    International Nuclear Information System (INIS)

    Barna, Imre Ferenc; Ezsoel, Gyoergy

    2011-01-01

    We investigate steam condensation induced water hammer (CIWH) phenomena and present experimental and theoretical results. Some of the experiments were performed in the PMK-2 facility, which is a full-pressure thermal-hydraulic model of a nuclear power plant of the VVER-440/312 type, located at the Atomic Energy Research Institute, Budapest, Hungary. Other experiments were done in the ROSA facility in Japan. On the theoretical side, CIWH is studied and analyzed with the WAHA3 model, based on six first-order two-phase flow partial differential equations that represent one-dimensional, surface-averaged mass, momentum and energy balances. A second-order accurate high-resolution shock-capturing numerical scheme was applied with different kinds of limiters in the numerical calculations. The applied two-fluid model shows some similarities to RELAP5, which is widely used in the nuclear industry to simulate nuclear power plant accidents. A new feature is the existence of multiple, independent CIWH pressure peaks both in experiments and in simulations. Experimentally measured and theoretically calculated CIWH pressure peaks are in qualitative agreement. However, the computational results are very sensitive to the flow velocity. (orig.)

  10. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    Energy Technology Data Exchange (ETDEWEB)

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve multiple coupled Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  11. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.
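
    The "fill in the body of pre-defined model routines" pattern described above can be mimicked in a few lines. Biocellion's actual interface is C++ and differs in detail; all names below (Framework, update_agent) are invented for illustration:

        # Hypothetical sketch of a callback-style agent-based framework: the user
        # supplies only the per-agent update routine; the framework owns the loop.
        import random

        class Framework:
            def __init__(self, n_agents, update_agent):
                self.agents = [{"x": random.random(), "y": random.random(),
                                "type": random.choice("AB")} for _ in range(n_agents)]
                self.update_agent = update_agent    # user-supplied model routine

            def step(self):
                for agent in self.agents:           # a real framework parallelizes this
                    self.update_agent(agent, self.agents)

        def update_agent(agent, agents):
            """User-filled routine: crude cell sorting toward same-type agents."""
            same = [a for a in agents if a["type"] == agent["type"]]
            agent["x"] += 0.01 * (sum(a["x"] for a in same) / len(same) - agent["x"])
            agent["y"] += 0.01 * (sum(a["y"] for a in same) / len(same) - agent["y"])

        sim = Framework(100, update_agent)
        for _ in range(50):
            sim.step()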

  12. A theoretical model for prediction of deposition efficiency in cold spraying

    International Nuclear Information System (INIS)

    Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.

    2005-01-01

    The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. A theoretical relation was established between the deposition efficiency and the spray angle. Experiments were conducted by measuring deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. It was necessary for the majority of particles to achieve a velocity higher than the critical velocity in order to improve the deposition efficiency. The normal component of particle velocity contributed to the deposition of a particle under off-normal spray conditions. The deposition efficiency of sprayed particles decreased at off-normal spray angles owing to the decrease of the normal velocity component.
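
    The qualitative relation described above, that a particle deposits when its velocity component normal to the substrate exceeds the critical velocity, can be sketched numerically. The lognormal speed spread and all parameter values are assumptions for illustration, not the paper's fitted model:

        # Deposition efficiency as the fraction of particles whose normal velocity
        # component exceeds the critical velocity. All numbers are assumed.
        import numpy as np

        def deposition_efficiency(v_mean, spread, v_crit, angle_deg, n=100_000):
            rng = np.random.default_rng(0)
            v = rng.lognormal(np.log(v_mean), spread, n)   # particle speeds, m/s
            v_normal = v * np.sin(np.radians(angle_deg))   # component normal to substrate
            return np.mean(v_normal > v_crit)

        for angle in (90, 80, 70, 60):
            de = deposition_efficiency(v_mean=600.0, spread=0.15,
                                       v_crit=560.0, angle_deg=angle)
            print(f"spray angle {angle:2d} deg -> deposition efficiency ~ {de:.2f}")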

  13. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.

    Science.gov (United States)

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework titled the Health Services Workplace Environmental Resilience Model (HSWERM) is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes; its use has the potential to increase staff retention and quality of patient care.

  14. A THEORETICAL MODEL OF SUPPORTING OPEN SOURCE FRONT END INNOVATION THROUGH IDEA MANAGEMENT

    DEFF Research Database (Denmark)

    Aagaard, Annabeth

    2013-01-01

    To overcome these various challenges, companies are looking for new models to support FEI. This theoretical paper explores in what way idea management may be applied as a tool in facilitation of front end innovation and how this facilitation may be captured in a conceptual model. First, I show through a literature study how idea management and front end innovation are related and how they may support each other. Secondly, I present a theoretical model of how idea management may be applied in support of the open source front end of new product innovations. Thirdly, I present different venues of further exploration of active facilitation of open source front end innovation through idea management.

  15. Multi-party Quantum Computation

    OpenAIRE

    Smith, Adam

    2001-01-01

    We investigate definitions of and protocols for multi-party quantum computing in the scenario where the secret data are quantum systems. We work in the quantum information-theoretic model, where no assumptions are made on the computational power of the adversary. For the slightly weaker task of verifiable quantum secret sharing, we give a protocol which tolerates any t < n/4 cheating parties (out of n). This is shown to be optimal. We use this new tool to establish that any multi-party quantum...

  16. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing.

    Science.gov (United States)

    Sillin, Henry O; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V; Aono, Masakazu; Stieg, Adam Z; Gimzewski, James K

    2013-09-27

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.
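
    A minimal sketch of the reservoir computing framework mentioned above, in which only a linear readout is trained. A random tanh network stands in for the physical atomic switch network; the task and sizes are arbitrary:

        # Echo-state-style reservoir: fixed random recurrent dynamics, trained
        # linear readout (ridge regression). The simulated reservoir stands in
        # for the physical ASN.
        import numpy as np

        rng = np.random.default_rng(1)
        N, T = 100, 1000
        W_in = rng.uniform(-0.5, 0.5, N)
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius below 1

        u = rng.uniform(-1.0, 1.0, T)                  # input signal
        target = np.sin(5.0 * np.cumsum(u) / T)        # arbitrary nonlinear target

        x = np.zeros(N)
        states = np.empty((T, N))
        for t in range(T):
            x = np.tanh(W @ x + W_in * u[t])
            states[t] = x

        ridge = 1e-6                                   # regularization strength
        W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                                states.T @ target)
        print("training MSE:", np.mean((states @ W_out - target) ** 2))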

  17. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    Science.gov (United States)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-10-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of `theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a `semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported in this semantic family.

  18. A theoretical approach for energy saving in industrial steam boilers

    International Nuclear Information System (INIS)

    Sabry, T.I.; Mohamed, N.H.; Elghonimy, A.M.

    1993-01-01

    The performance characteristics of an industrial steam boiler have been analyzed theoretically. Suitable thermodynamic relations have been utilized to construct a computer model that computes the boiler performance characteristics at different operating parameters (e.g., amount of excess air, fuel type, rate of blowdown, preheating of combustion air and flue gas temperature). The results demonstrate that this computer model can be used successfully in selecting the different operating parameters of the steam boiler at various loads for the most economical operation. Besides, this model can be used to investigate the sensitivity of the performance characteristics to the deviation of the boiler operating parameters from their optimum values. It was also found that changing the operating parameters, besides the type of fuel in a boiler, affects its performance characteristics. 3 figs

  19. Theoretical model for plasma expansion generated by hypervelocity impact

    International Nuclear Information System (INIS)

    Ju, Yuanyuan; Zhang, Qingming; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei; Gong, Zizheng

    2014-01-01

    Hypervelocity impact experiments of a spherical LY12 aluminum projectile (6.4 mm in diameter) on an LY12 aluminum target (23 mm thick) have been conducted using a two-stage light gas gun. The impact velocities of the projectile were 5.2, 5.7, and 6.3 km/s. The experimental results show that the plasma phase transition appears under the current experimental conditions, and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and are proportional to the third power of the impact velocity, i.e., (T_e, n_e) ∝ v_p^3. Based on the experimental results, a theoretical model of the plasma expansion is developed, and the theoretical results are consistent with the experimental data.

  20. Theoretical model for plasma expansion generated by hypervelocity impact

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Yuanyuan; Zhang, Qingming, E-mail: qmzhang@bit.edu.cn; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei [State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, Beijing 100081 (China); Gong, Zizheng [National Key Laboratory of Science and Technology on Reliability and Environment Engineering, Beijing Institute of Spacecraft Environment Engineering, Beijing 100094 (China)

    2014-09-15

    Hypervelocity impact experiments of a spherical LY12 aluminum projectile (6.4 mm in diameter) on an LY12 aluminum target (23 mm thick) have been conducted using a two-stage light gas gun. The impact velocities of the projectile were 5.2, 5.7, and 6.3 km/s. The experimental results show that the plasma phase transition appears under the current experimental conditions, and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and are proportional to the third power of the impact velocity, i.e., (T_e, n_e) ∝ v_p^3. Based on the experimental results, a theoretical model of the plasma expansion is developed, and the theoretical results are consistent with the experimental data.
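
    As a worked illustration of the reported cubic scaling, (T_e, n_e) ∝ v_p^3, the sketch below extrapolates between the three tested velocities. The reference values are normalized placeholders, not the measured data from the paper:

        # Cubic scaling of plasma parameters with impact velocity, relative to
        # the slowest shot. Reference values are placeholders.
        T_REF, N_REF, V_REF = 1.0, 1.0, 5.2    # normalized T_e, n_e at 5.2 km/s

        def plasma_params(v_p):
            scale = (v_p / V_REF) ** 3
            return T_REF * scale, N_REF * scale

        for v in (5.2, 5.7, 6.3):
            te, ne = plasma_params(v)
            print(f"v_p = {v} km/s -> T_e x{te:.2f}, n_e x{ne:.2f} vs. 5.2 km/s")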

  1. STRUCTURAL AND METHODICAL MODEL OF INCREASING THE LEVEL OF THEORETICAL TRAINING OF CADETS USING INFORMATION AND COMMUNICATION TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Vladislav V. Bulgakov

    2018-03-01

    Full Text Available The specific features of training in higher educational institutions of the EMERCOM of Russia system demand the introduction of new educational techniques and technical means aimed at intensifying the educational process, giving cadets the opportunity to prepare independently at any time and improving the quality of their theoretical knowledge. The authors have developed a structural and methodological model for increasing the level of theoretical training of cadets using information and communication technologies. The proposed model, which includes elements that stimulate and enhance cognitive activity, makes it possible to generate a trajectory of theoretical training for cadets over the entire period of study at the university, and to organize systematic independent work as well as objective current and final control of theoretical knowledge. The model consists of three main elements: a base of theoretical questions and the functional modules "teacher" and "cadet". The base of theoretical questions, developed for all disciplines of specialty 20.05.01 (fire safety), forms the foundation of the model. The "teacher" module allows instructors to create theoretical questions of various kinds, to edit or delete them as necessary, and to create tests and monitor their completion. The "cadet" module provides ample opportunities for theoretical training through independent work, testing for current and final control, a game-like duel mode, and the presentation of cadets' results in the form of statistics and rankings. The structural and methodological model for increasing the level of theoretical training of cadets has been implemented in practice as a multi-level automated system

  2. Dynamics of Information as Natural Computation

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2011-08-01

    Full Text Available Processes considered to render information dynamics have been studied, among others, in: questions and answers, observations, communication, learning, belief revision, logical inference, game-theoretic interactions and computation. This article puts the computational approaches into the broader context of natural computation, where information dynamics is found not only in human communication and computational machinery but in the entire natural world. Information is understood as representing the world (reality) as an informational web for a cognizing agent, while information dynamics (information processing, computation) realizes physical laws through which all changes of informational structures unfold. Computation as it appears in the natural world is more general than the human process of calculation modeled by the Turing machine. Natural computing is epitomized by the interactions of concurrent, in general asynchronous, computational processes, which are adequately represented by what Abramsky names “the second generation models of computation” [1], which we argue to be the most general representation of information dynamics.

  3. Theoretical yield studies on the large-scaled tongue sole, Cynoglossus macrolepidotus (Bleeker), from the Arabian sea

    Digital Repository Service at National Institute of Oceanography (India)

    Kutty, M.K.; Qasim, S.Z.

    Theoretical yield values of Cynoglossus macrolepidotus were computed from a simple Beverton and Holt type model using information on growth and mortality rates. The effects of various fishing mortality rates (F) and ages of exploitation (Tp...
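
    For readers unfamiliar with the model type named above, the following is a generic Beverton-Holt yield-per-recruit computation. The growth and mortality parameters are invented for illustration, not the values estimated for C. macrolepidotus:

        # Standard Beverton-Holt yield per recruit with von Bertalanffy growth.
        # All parameter values below are invented.
        import math

        def yield_per_recruit(F, M, tc, tr, t0, K, Winf, tmax):
            """Y/R for fishing mortality F and age at first capture tc."""
            omega = [1.0, -3.0, 3.0, -1.0]     # expansion of (1 - e^(-K t))^3
            s = 0.0
            for n, w in enumerate(omega):
                z = F + M + n * K
                s += (w * math.exp(-n * K * (tc - t0)) / z
                      * (1.0 - math.exp(-z * (tmax - tc))))
            return F * math.exp(-M * (tc - tr)) * Winf * s

        for F in (0.2, 0.4, 0.6, 0.8):
            yr = yield_per_recruit(F, M=0.3, tc=2.0, tr=1.0,
                                   t0=-0.1, K=0.25, Winf=900.0, tmax=12.0)
            print(f"F = {F}: Y/R ~ {yr:.1f} g")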

  4. Supercomputer requirements for theoretical chemistry

    International Nuclear Information System (INIS)

    Walker, R.B.; Hay, P.J.; Galbraith, H.W.

    1980-01-01

    Many problems important to the theoretical chemist would, if implemented in their full complexity, strain the capabilities of today's most powerful computers. Several such problems are now being implemented on the CRAY-1 computer at Los Alamos. Examples of these problems are taken from the fields of molecular electronic structure calculations, quantum reactive scattering calculations, and quantum optics. 12 figures

  5. Control Theoretic Modeling and Generated Flow Patterns of a Fish-Tail Robot

    Science.gov (United States)

    Massey, Brian; Morgansen, Kristi; Dabiri, Dana

    2003-11-01

    Many real-world engineering problems involve understanding and manipulating fluid flows. One of the challenges to further progress in the area of active flow control is the lack of appropriate models that are amenable to control-theoretic studies and algorithm design and also incorporate reasonably realistic fluid dynamic effects. We focus here on modeling and model-verification of bio-inspired actuators (fish-fin type structures) used to control fluid dynamic artifacts that will affect speed, agility, and stealth of Underwater Autonomous Vehicles (UAVs). Vehicles using fish-tail type systems are more maneuverable, can turn in much shorter and more constrained spaces, have lower drag, are quieter and potentially more efficient than those using propellers. We will present control-theoretic models for a simple prototype coupled fluid and mechanical actuator where fluid effects are crudely modeled by assuming only lift, drag, and added mass, while neglecting boundary effects. These models will be tested with different control input parameters on an experimental fish-tail robot with the resulting flow captured with DPIV. Relations between the model, the control function choices, the obtained thrust and drag, and the corresponding flow patterns will be presented and discussed.

  6. Theoretical Characterization of Visual Signatures (Muzzle Flash)

    Science.gov (United States)

    Kashinski, D. O.; Scales, A. N.; Vanderley, D. L.; Chase, G. M.; di Nallo, O. E.; Byrd, E. F. C.

    2014-05-01

    We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet and infrared spectra of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. We are currently employing quantum chemistry methods at various levels of sophistication to optimize molecular geometries, compute vibrational frequencies, and determine the optical spectra of specific gas-phase molecules and radicals of interest. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). A comparison of computational results to experimental values found in the literature is used to assess the effect of basis set and functional choice on calculation accuracy. The current status of this work will be presented at the conference. Work supported by the ARL and USMA.

  7. Theoretical Modeling of Magnesium Ion Imprints in the Raman Scattering of Water

    Czech Academy of Sciences Publication Activity Database

    Kapitán, J.; Dračínský, Martin; Kaminský, Jakub; Benda, Ladislav; Bouř, Petr

    2010-01-01

    Roč. 114, č. 10 (2010), s. 3574-3582 ISSN 1520-6106 R&D Projects: GA ČR GA202/07/0732; GA AV ČR IAA400550702; GA AV ČR IAA400550701; GA ČR GPP208/10/P356 Grant - others:AV ČR(CZ) M200550902 Institutional research plan: CEZ:AV0Z40550506 Keywords : Raman spectroscopy * theoretical modelling * CPMD Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.603, year: 2010

  8. Tesla Coil Theoretical Model and its Experimental Verification

    OpenAIRE

    Voitkans Janis; Voitkans Arnis

    2014-01-01

    In this paper a theoretical model of Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space. By applying the principle of equivalence of single-wire and two-wire schemes an equivalent two-wire scheme can be found for a single-wire scheme and the already known long line theory can be applied to the Tesla coil. A new method of multiple reflections...

  9. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    Full Text Available In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution's physical laboratory. For a university without a computing lab to provide hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that received in a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which draws on all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning

  10. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, like automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute their expertise in both fundamentals and real-world applications, based upon their strong backgrounds in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  11. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
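
    As a flavor of the discrete-event approach named above, here is a minimal single-server queue model of one treatment bay, written as a Lindley recursion. The arrival and service rates are invented, not data from any emergency department:

        # M/M/1 waiting-time simulation via the Lindley recursion:
        # W_{k+1} = max(0, W_k + S_k - A_{k+1}). Rates are assumed.
        import random

        random.seed(0)
        lam, mu, n = 4.0, 5.0, 50_000      # arrivals/h, services/h (assumed)
        wait, total = 0.0, 0.0
        for _ in range(n):
            service = random.expovariate(mu)
            interarrival = random.expovariate(lam)
            wait = max(0.0, wait + service - interarrival)
            total += wait
        print(f"mean wait ~ {total / n:.2f} h "
              f"(M/M/1 theory: {lam / (mu * (mu - lam)):.2f} h)")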

  12. Theoretical analysis of the rotational barrier of ethane.

    Science.gov (United States)

    Mo, Yirong; Gao, Jiali

    2007-02-01

    The understanding of the ethane rotation barrier is fundamental for structural theory and the conformational analysis of organic molecules and requires a consistent theoretical model to differentiate the steric and hyperconjugation effects. Due to recently renewed controversies over the barrier's origin, we developed a computational approach to probe the rotation barriers of ethane and its congeners in terms of steric repulsion, hyperconjugative interaction, and electronic and geometric relaxations. Our study reinstated that the conventional steric repulsion overwhelmingly dominates the barriers.

  13. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    Science.gov (United States)

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins. However, the description is only qualitative under biologically relevant conditions because the mean-field models employed neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on the analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  14. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

  15. Tesla Coil Theoretical Model and its Experimental Verification

    Directory of Open Access Journals (Sweden)

    Voitkans Janis

    2014-12-01

    Full Text Available In this paper a theoretical model of Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space. By applying the principle of equivalence of single-wire and two-wire schemes an equivalent two-wire scheme can be found for a single-wire scheme and the already known long line theory can be applied to the Tesla coil. A new method of multiple reflections is developed to characterize a signal in a long line. Formulas for calculation of voltage in Tesla coil by coordinate and calculation of resonance frequencies are proposed. The theoretical calculations are verified experimentally. Resonance frequencies of Tesla coil are measured and voltage standing wave characteristics are obtained for different output capacities in the single-wire mode. Wave resistance and phase coefficient of Tesla coil is obtained. Experimental measurements show good compliance with the proposed theory. The formulas obtained in this paper are also usable for a regular two-wire long line with distributed parameters.
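
    As a first approximation to the resonance formulas mentioned above, a Tesla coil can be treated as a quarter-wave transmission line that is open at the far end. The sketch below uses that textbook approximation with an assumed wire length and velocity factor; the paper's model additionally accounts for the output capacitance, which is omitted here:

        # Odd quarter-wave resonances of an open-ended line: f_n = (2n-1) v / (4 L).
        # Wire length and velocity factor are assumed.
        wire_length = 300.0          # unwound wire length, meters (assumed)
        v = 3.0e8 * 0.8              # propagation speed with velocity factor 0.8

        for n in range(1, 4):
            f_n = (2 * n - 1) * v / (4.0 * wire_length)
            print(f"mode {n}: f ~ {f_n / 1e3:.0f} kHz")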

  16. Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test

    Science.gov (United States)

    Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.

    2006-01-01

    A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…

  17. Computation of infinite dilute activity coefficients of binary liquid alloys using complex formation model

    Energy Technology Data Exchange (ETDEWEB)

    Awe, O.E., E-mail: draweoe2004@yahoo.com; Oshakuade, O.M.

    2016-04-15

    A new method for calculating Infinite Dilute Activity Coefficients (γ∞s) of binary liquid alloys has been developed. This method basically computes γ∞s from experimental thermodynamic integral free energy of mixing data using the Complex formation model. The new method was first used to theoretically compute the γ∞s of 10 binary alloys whose γ∞s have been determined by experiments. The significant agreement between the computed values and the available experimental values served as impetus for applying the new method to 22 selected binary liquid alloys whose γ∞s are either nonexistent or incomplete. In order to verify the reliability of the computed γ∞s of the 22 selected alloys, we recomputed the γ∞s using three other existing methods of computing or estimating γ∞s and then used the γ∞s obtained from each of the four methods (the new method inclusive) to compute thermodynamic activities of the components of each of the binary systems. The computed activities were compared with available experimental activities. It is observed that the results from the method being proposed showed, in most of the selected alloys, better agreement with experimental activity data. Thus, the new method is an alternative and, in certain instances, a more reliable approach to computing γ∞s of binary liquid alloys.
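
    To illustrate how a γ∞ follows from an excess Gibbs energy model in the dilute limit, the sketch below uses a one-parameter regular-solution form, G^E = Ω x_A x_B, in place of the paper's Complex formation model. The interaction parameter and temperature are invented:

        # Regular solution: ln gamma_A = (Omega / RT) * (1 - x_A)^2, so the
        # infinite-dilution limit x_A -> 0 gives ln gamma_A_inf = Omega / RT.
        import math

        R = 8.314            # J/(mol K)
        T = 1200.0           # K, assumed melt temperature
        omega = -15_000.0    # J/mol, assumed interaction parameter

        def ln_gamma_A(x_A):
            return omega / (R * T) * (1.0 - x_A) ** 2

        gamma_inf = math.exp(ln_gamma_A(0.0))
        print(f"gamma_A at infinite dilution ~ {gamma_inf:.3f}")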

  18. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  19. Theoretical modeling and experimental analyses of laminated wood composite poles

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...

  20. Organizational Learning and Product Design Management: Towards a Theoretical Model.

    Science.gov (United States)

    Chiva-Gomez, Ricardo; Camison-Zornoza, Cesar; Lapiedra-Alcami, Rafael

    2003-01-01

    Case studies of four Spanish ceramics companies were used to construct a theoretical model of 14 factors essential to organizational learning. One set of factors is related to the conceptual-analytical phase of the product design process and the other to the creative-technical phase. All factors contributed to efficient product design management…

  1. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
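
    One common ingredient of such reduced order methods is a proper orthogonal decomposition (POD) basis obtained from the truncated SVD of a snapshot matrix; a self-contained sketch on synthetic snapshots:

        # POD/reduced-basis sketch: build a basis from snapshots, then project
        # a new field onto it. Snapshots here are synthetic sine modes.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 200)
        snapshots = np.column_stack(
            [np.sin((k + 1) * np.pi * x) + 0.01 * rng.normal(size=x.size)
             for k in range(20)])

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% energy
        basis = U[:, :r]
        print(f"kept {r} of {snapshots.shape[1]} modes")

        f = np.sin(3.0 * np.pi * x)                   # new field to reduce
        err = np.linalg.norm(f - basis @ (basis.T @ f)) / np.linalg.norm(f)
        print(f"relative projection error: {err:.2e}")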

  2. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  3. Theoretical Models of Deliberative Democracy: A Critical Analysis

    Directory of Open Access Journals (Sweden)

    Tutui Viorel

    2015-07-01

    Full Text Available My paper focuses on presenting and analyzing some of the most important theoretical models of deliberative democracy and on emphasizing their limits. Firstly, I will mention James Fishkin's account of deliberative democracy and its relations with other democratic models. He differentiates between four democratic theories: competitive democracy, elite deliberation, participatory democracy and deliberative democracy. Each of these theories makes an explicit commitment to two of the following four “principles”: political equality, participation, deliberation, non-tyranny. Deliberative democracy is committed to political equality and deliberation. Secondly, I will present Philip Pettit's view concerning the main constraints of deliberative democracy: the inclusion constraint, the judgmental constraint and the dialogical constraint. Thirdly, I will refer to Amy Gutmann and Dennis Thompson's conception regarding the “requirements” or characteristics of deliberative democracy: the reason-giving requirement, the accessibility of reasons, the binding character of the decisions and the dynamic nature of the deliberative process. Finally, I will discuss Joshua Cohen's “ideal deliberative procedure”, which has the following features: it is free, it is reasoned, the parties are substantively equal, and the procedure aims to arrive at a rationally motivated consensus. After presenting these models I will provide a critical analysis of each one of them with the purpose of revealing their virtues and limits. I will make some suggestions in order to combine the virtues of these models, to transcend their limitations and to offer a more systematic account of deliberative democracy. In the next four sections I will take into consideration four main strategies for combining political and epistemic values (“optimistic”, “deliberative”, “democratic” and “pragmatic”) and the main objections they have to face. In the concluding section

  4. Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters

    Science.gov (United States)

    Gilet, Estelle; Diard, Julien; Bessière, Pierre

    2011-01-01

    In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043
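
    The inference pattern behind task (i), letter recognition from sensory features, can be sketched as a toy Bayesian inversion. The letters, feature means, and Gaussian likelihoods below are made up, and are far simpler than the BAP model itself:

        # Toy posterior P(letter | features) with a uniform prior and Gaussian
        # feature likelihoods. Feature means are invented.
        import math

        letters = {"a": (0.2, 0.8), "c": (0.3, 0.7), "e": (0.5, 0.5)}

        def posterior(features, sigma=0.2):
            scores = {}
            for letter, mu in letters.items():
                loglik = sum(-(f - m) ** 2 / (2.0 * sigma ** 2)
                             for f, m in zip(features, mu))
                scores[letter] = math.exp(loglik)   # uniform prior cancels out
            z = sum(scores.values())
            return {k: v / z for k, v in scores.items()}

        print(posterior((0.25, 0.75)))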

  5. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks. I first describe how a neurocomputational...

  6. A paradigm for modeling and computation of gas dynamics

    Science.gov (United States)

    Xu, Kun; Liu, Chang

    2017-02-01

    In the continuum flow regime, the Navier-Stokes (NS) equations are usually used for the description of gas dynamics. On the other hand, the Boltzmann equation is applied for the rarefied flow. These two equations are based on distinguishable modeling scales for flow physics. Fortunately, due to the scale separation, i.e., the hydrodynamic and kinetic ones, both the Navier-Stokes equations and the Boltzmann equation are applicable in their respective domains. However, in real science and engineering applications, there may not be such a distinctive scale separation. For example, around a hypersonic flying vehicle, the flow physics in different regions may correspond to different regimes, where the local Knudsen number can change significantly over several orders of magnitude. With such a variation of flow physics, theoretically a continuous governing equation ranging from the kinetic Boltzmann modeling to the hydrodynamic Navier-Stokes dynamics should be used for an efficient description. However, due to the difficulty of directly modeling flow physics on scales between the kinetic and hydrodynamic ones, there is basically no reliable theory or valid governing equations to cover the whole transition regime, other than resolving flow physics all the way down to the mean free path scale, as in the direct Boltzmann solver and the Direct Simulation Monte Carlo (DSMC) method. In fact, the exact scale at which the NS equations remain valid is an unresolved problem, especially in small Reynolds number cases. Computational fluid dynamics (CFD) is usually based on the numerical solution of partial differential equations (PDEs), and it targets the recovery of the exact solution of the PDEs as mesh size and time step converge to zero. This methodology can hardly be applied to solve multiple scale problems efficiently because there is no such complete PDE for flow physics across a continuous variation of scales. For the non-equilibrium flow study, the direct
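
    The scale argument above is commonly made quantitative through the Knudsen number, Kn = λ/L. A small sketch using the conventional (approximate) regime breakpoints; the mean free path is the textbook value for air at sea level:

        # Knudsen-number regime classification with conventional breakpoints.
        def flow_regime(mean_free_path, length_scale):
            kn = mean_free_path / length_scale
            if kn < 0.001:
                return kn, "continuum (NS equations)"
            if kn < 0.1:
                return kn, "slip flow"
            if kn < 10.0:
                return kn, "transition regime"
            return kn, "free molecular flow"

        for L in (1.0, 1e-3, 1e-6, 1e-8):          # body length scales, meters
            kn, regime = flow_regime(68e-9, L)     # ~68 nm for sea-level air
            print(f"L = {L:8.0e} m -> Kn = {kn:8.2e}: {regime}")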

  7. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    Science.gov (United States)

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following the theoretical model, generalized linear models were estimated to explain differences in 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association to higher populated districts than GPs. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as population density. If regional disparities should be addressed by political actions, the focus should be to counteract those parameters representing physicians' preferences in over- and undersupplied regions.
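
    As a sketch of the method class used here, the snippet below fits a Poisson generalized linear model relating physician counts to log population density on synthetic district data, using Newton-Raphson iterations. The data and coefficients are simulated, not the paper's:

        # Poisson GLM (log link) fit by Newton-Raphson on synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 412                                    # number of districts, as above
        log_density = rng.normal(5.0, 1.0, n)      # synthetic covariate
        X = np.column_stack([np.ones(n), log_density])
        beta_true = np.array([-1.0, 0.9])
        y = rng.poisson(np.exp(X @ beta_true))     # synthetic physician counts

        beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]  # rough start
        for _ in range(10):                        # Newton-Raphson updates
            mu = np.exp(X @ beta)
            grad = X.T @ (y - mu)
            hess = X.T @ (X * mu[:, None])
            beta += np.linalg.solve(hess, grad)

        print("estimated coefficients:", np.round(beta, 3))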

  8. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    Energy Technology Data Exchange (ETDEWEB)

    Valach, M; Zymak, J; Svoboda, R [Nuclear Research Inst. Rez plc, Rez (Czech Republic)

    1997-08-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behaviour of WWER fuel elements under high-burnup conditions at the Nuclear Research Institute Rez. Emphasis is placed on the analysis of results from the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented in both codes. Examples of the calculations document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in conclusion. (author). 6 refs, 9 figs.

  9. Computer modelling of the WWER fuel elements under high burnup conditions by the computer codes PIN-W and RODQ2D

    International Nuclear Information System (INIS)

    Valach, M.; Zymak, J.; Svoboda, R.

    1997-01-01

    This paper presents the development status of the computer codes for modelling the thermomechanical behaviour of WWER fuel elements under high-burnup conditions at the Nuclear Research Institute Rez. Emphasis is placed on the analysis of results from the parametric calculations performed with the programmes PIN-W and RODQ2D, rather than on their detailed theoretical description. Several new optional correlations for UO2 thermal conductivity, with the degradation effect caused by burnup, were implemented in both codes. Examples of the calculations document the differences between the previous and new versions of both programmes. Some recommendations for further development of the codes are given in conclusion. (author). 6 refs, 9 figs

  10. 4. Valorizations of Theoretical Models of Giftedness and Talent in Defining of Artistic Talent

    OpenAIRE

    Anghel Ionica Ona

    2016-01-01

    Artistic talent has been defined in various contexts and carries a variety of meanings, more or less operational. From the perspective of pedagogical intervention, it is imperative to understand artistic talent through the theoretical models of giftedness and talent. The aim of the study is therefore to review the most popular theoretical models of giftedness and talent, identifying the place of artistic talent and the new meanings that artistic talent has in each on...

  11. Experimental-theoretical analysis of laminar internal forced convection with nanofluids

    Energy Technology Data Exchange (ETDEWEB)

    Cerqueira, Ivana G.; Cotta, Renato M. [Lab. of Transmission and Technology of Heat-LTTC. Mechanical Eng. Dept. - POLI and COPPE/UFRJ, Rio de Janeiro, RJ (Brazil)], E-mail: cotta@mecanica.coppe.ufrj.br; Mota, Carlos Alberto A. [Conselho Nacional de Pesquisas - CNPq, Brasilia, DF (Brazil)], e-mail: carlosal@cnpq.br; Nunes, Jeziel S. [INPI, Rio de Janeiro, RJ (Brazil)], e-mail: jeziel@inpi.gov.br

    2010-07-01

    This work reports fundamental experimental-theoretical research related to heat transfer enhancement in laminar channel flow with nanofluids, which are essentially modifications of the base fluid with the dispersion of metal oxide nanoparticles. The theoretical work was performed by making use of mixed symbolic-numerical computation (Mathematica 7.0 platform) and a hybrid numerical-analytical methodology (Generalized Integral Transform Technique - GITT) in accurately handling the governing partial differential equations for the heat and fluid flow problem formulation with temperature dependency in all the thermophysical properties. Experimental work was also undertaken based on a thermohydraulic circuit built for this purpose, and sample results are presented to verify the proposed model. The aim is to illustrate detailed modeling and robust simulation attempting to reach an explanation of the controversial heat transfer enhancement observed in laminar forced convection with nanofluids. (author)

  12. Recent development in computational actinide chemistry

    International Nuclear Information System (INIS)

    Li Jun

    2008-01-01

    Ever since the Manhattan project in World War II, actinide chemistry has been essential for nuclear science and technology. Yet scientists still seek the ability to interpret and predict chemical and physical properties of actinide compounds and materials using first-principle theory and computational modeling. Actinide compounds are challenging to computational chemistry because of their complicated electron correlation effects and relativistic effects, including spin-orbit coupling effects. There have been significant developments in theoretical studies on actinide compounds in the past several years. The theoretical capabilities coupled with new experimental characterization techniques now offer a powerful combination for unraveling the complexities of actinide chemistry. In this talk, we will provide an overview of our own research in this field, with particular emphasis on applications of relativistic density functional and ab initio quantum chemical methods to the geometries, electronic structures, spectroscopy and excited-state properties of small actinide molecules such as CUO and UO2 and some large actinide compounds relevant to separation and environment science. The performance of various density functional approaches and wavefunction theory-based electron correlation methods will be compared. The results of computational modeling on the vibrational, electronic, and NMR spectra of actinide compounds will be briefly discussed as well [1-4]. We will show that progress in relativistic quantum chemistry, computer hardware and computational chemistry software has enabled computational actinide chemistry to emerge as a powerful and predictive tool for research in actinide chemistry. (authors)

  13. A Primer on Theoretically Exploring the Field of Business Model Innovation

    OpenAIRE

    Gassmann, Oliver; Frankenberger, Karolin; Sauer, Roman

    2017-01-01

    Companies like Amazon, Uber, and Skype have become business strategy icons and the way they transformed industries can hardly be explained with classic strategy research. This article explores the topic of Business Model Innovation, which has become the cornerstone for the competitiveness of many successful firms, from a theoretical perspective. It gives an overview and introduction to the book "Exploring the Field of Business Model Innovation".

  14. Program POD; A computer code to calculate nuclear elastic scattering cross sections with the optical model and neutron inelastic scattering cross sections by the distorted-wave born approximation

    International Nuclear Information System (INIS)

    Ichihara, Akira; Kunieda, Satoshi; Chiba, Satoshi; Iwamoto, Osamu; Shibata, Keiichi; Nakagawa, Tsuneo; Fukahori, Tokio; Katakura, Jun-ichi

    2005-07-01

    The computer code POD was developed to calculate angle-differential cross sections and analyzing powers for shape-elastic scattering in collisions of neutrons or light ions with a target nucleus. The cross sections are computed with the optical model. Angle-differential cross sections for neutron inelastic scattering can also be calculated with the distorted-wave Born approximation. The optical model potential parameters are the most essential inputs for these model computations. In this program, the cross sections and analyzing powers are obtained using existing local or global parameters. The parameters can also be input by users. In this report, the theoretical formulas, the computational methods, and the input parameters are explained. Sample inputs and outputs are also presented. (author)

  15. Non-linear least squares curve fitting of a simple theoretical model to radioimmunoassay dose-response data using a mini-computer

    International Nuclear Information System (INIS)

    Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.

    1977-01-01

    Using the simple univalent antigen-univalent antibody equilibrium model, the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated on an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve-fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good, in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability. (orig.)
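
    The original BASIC program fits the equilibrium-model function directly; as a rough modern stand-in, the sketch below fits a four-parameter logistic dose-response curve with SciPy and inverts it for dose interpolation. The functional form and the data points are illustrative only, not taken from the paper:

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, a, b, c, d):
            # Four-parameter logistic: response as a function of dose x.
            return d + (a - d) / (1.0 + (x / c) ** b)

        dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])        # standards (made up)
        response = np.array([95.0, 88.0, 70.0, 48.0, 27.0, 14.0, 8.0])  # % bound tracer

        params, _ = curve_fit(four_pl, dose, response, p0=[100.0, 1.0, 3.0, 5.0])
        a, b, c, d = params
        print("fitted parameters:", np.round(params, 3))

        # Dose interpolation: invert the fitted curve for an unknown sample.
        y_unknown = 40.0
        x_unknown = c * ((a - d) / (y_unknown - d) - 1.0) ** (1.0 / b)
        print("interpolated dose:", round(x_unknown, 3))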

  16. Heterotic quantum and classical computing on convergence spaces

    Science.gov (United States)

    Patten, D. R.; Jakel, D. W.; Irwin, R. J.; Blair, H. A.

    2015-05-01

    Category-theoretic characterizations of heterotic models of computation, introduced by Stepney et al., combine computational models such as classical/quantum, digital/analog, synchronous/asynchronous, etc. to obtain increased computational power. A highly informative classical/quantum heterotic model of computation is represented by Abramsky's simple sequential imperative quantum programming language which extends the classical simple imperative programming language to encompass quantum computation. The mathematical (denotational) semantics of this classical language serves as a basic foundation upon which formal verification methods can be developed. We present a more comprehensive heterotic classical/quantum model of computation based on heterotic dynamical systems on convergence spaces. Convergence spaces subsume topological spaces but admit finer structure from which, in prior work, we obtained differential calculi in the cartesian closed category of convergence spaces allowing us to define heterotic dynamical systems, given by coupled systems of first order differential equations whose variables are functions from the reals to convergence spaces.

  17. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market is much bigger than the community of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), arguably the pivotal theory in the turn to computational quantum chemistry around 1990.

  18. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods.

  19. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter

    1999-01-01

    This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects, more stable over time, than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  20. Theoretical and Computational Analyses of Bernoulli Levitation Flows

    International Nuclear Information System (INIS)

    Nam, Jong Soon; Kim, Gyu Wan; Kim, Jin Hyeon; Kim, Heuy Dong

    2013-01-01

    Pneumatic levitation is based upon Bernoulli's principle. However, this method is known to require a large gas flow rate that can lead to an increase in the cost of products. In this case, the gas flow rate should be increased, and the compressible effects of the gas may be of practical importance. In the present study, a computational fluid dynamics method has been used to obtain insights into Bernoulli levitation flows. Three-dimensional compressible Navier-Stokes equations in combination with the SST k-ω turbulence model were solved using a fully implicit finite volume scheme. The gas flow rate, work piece diameter, and clearance gap between the work piece and the circular cylinder were varied to investigate the flow characteristics inside. It is known that there is an optimal clearance gap for the lifting force and that increasing the supply gas flow rate results in a larger lifting force.

  1. Theoretical and Computational Analyses of Bernoulli Levitation Flows

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Jong Soon; Kim, Gyu Wan; Kim, Jin Hyeon; Kim, Heuy Dong [Andong Nat'l Univ., Andong (Korea, Republic of)

    2013-07-15

    Pneumatic levitation is based upon Bernoulli's principle. However, this method is known to require a large gas flow rate that can lead to an increase in the cost of products. In this case, the gas flow rate should be increased, and the compressible effects of the gas may be of practical importance. In the present study, a computational fluid dynamics method has been used to obtain insights into Bernoulli levitation flows. Three-dimensional compressible Navier-Stokes equations in combination with the SST k-ω turbulence model were solved using a fully implicit finite volume scheme. The gas flow rate, work piece diameter, and clearance gap between the work piece and the circular cylinder were varied to investigate the flow characteristics inside. It is known that there is an optimal clearance gap for the lifting force and that increasing the supply gas flow rate results in a larger lifting force.

  2. Theoretical study of near-threshold electron-molecule scattering

    International Nuclear Information System (INIS)

    Morrison, M.A.

    1989-01-01

    We have been engaged in carrying out a foundation study on problems pertaining to near-threshold nuclear excitations in e-H2 scattering. The primary goals of this study are: to investigate the severity and nature of the anticipated breakdown of the adiabatic-nuclei (AN) approximation, first for rotation only (in the rigid-rotator approximation), and then for vibration; to determine a data base of accurate ab initio cross sections for this important system; to implement and test accurate, computationally tractable model potentials for exchange and polarization effects; and to begin the exploration of alternative scattering theories for near-threshold collisions. This study has provided a well-defined theoretical context for our future investigations. Second, it has enabled us to identify and quantify several serious problems in the theory of near-threshold electron-molecule scattering that demand attention. And finally, it has led to the development of some of the theoretical and computational apparatus that will form the foundation of future work. In this report, we shall review our progress to date, emphasizing work completed during the current contract year. 17 refs., 5 figs., 1 tab

  3. Adaptive information-theoretic bounded rational decision-making with parametric priors

    OpenAIRE

    Grau-Moya, Jordi; Braun, Daniel A.

    2015-01-01

    Deviations from rational decision-making due to limited computational resources have been studied in the field of bounded rationality, originally proposed by Herbert Simon. There have been a number of different approaches to model bounded rationality ranging from optimality principles to heuristics. Here we take an information-theoretic approach to bounded rationality, where information-processing costs are measured by the relative entropy between a posterior decision strategy and a given fix...

  4. Predicting Freshman Persistence and Voluntary Dropout Decisions from a Theoretical Model.

    Science.gov (United States)

    Pascarella, Ernest T.; Terenzini, Patrick T.

    1980-01-01

    A five-scale instrument developed from a theoretical model of college attrition correctly identified the persistence/voluntary withdrawal decisions of 78.5 percent of 773 freshmen in a large, residential university. Findings showed that student relationships with faculty were particularly important. (Author/PHR)

  5. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game-changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: development of use-case studies for science workflows; creation of a taxonomy and structure for describing science computing requirements; and characterization of agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernible requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science modeling community. We will describe our plans for expanding this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  6. Computational physics problem solving with Python

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2015-01-01

    The use of computation and simulation has become an essential part of the scientific process. Being able to transform a theory into an algorithm requires significant theoretical insight, detailed physical and mathematical understanding, and a working level of competency in programming. This upper-division text provides an unusually broad survey of the topics of modern computational physics from a multidisciplinary, computational science point of view. Its philosophy is rooted in learning by doing (assisted by many model programs), with new scientific materials as well as with the Python progr...

  7. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model

    OpenAIRE

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework titled the Health Services Workplace Environmental Resilience Model (HSWERM) is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of p...

  8. THEORETICAL AND PRACTICAL ASPECTS OF IDENTIFICATION AND EVALUATION OF SCHOOL EDUCATION QUALITY

    Directory of Open Access Journals (Sweden)

    N. Y. Yerganova

    2014-01-01

    Full Text Available The paper considers one of the main theoretical and practical pedagogical problems of education quality assessment. Quality measurement depends on successful identification of genuine (scientific) and false diagnostic methods; the process becomes more complicated in the case of latent variables. As a solution, the authors recommend the Rasch measurement model for identifying an integral indicator of education quality. The method in question involves the design, approbation and analysis of diagnostic materials, as well as mathematical and statistical data processing based on specialized computer software. The paper describes the advantages and theoretical potential of the Rasch method, and emphasizes its capacity for solving the key problem of quality modeling – i.e. the suitability and utility of the indicator variables for the given research.
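
    A minimal sketch of the dichotomous Rasch model on synthetic data may make the measurement idea concrete; the paper's actual instruments and software are not reproduced here:

        import numpy as np

        def rasch_prob(theta, b):
            # P(affirmative response) for person ability theta and item difficulty b.
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        rng = np.random.default_rng(1)
        n_persons, n_items = 200, 10
        true_theta = rng.normal(0.0, 1.0, n_persons)
        true_b = np.linspace(-2.0, 2.0, n_items)
        data = (rng.random((n_persons, n_items))
                < rasch_prob(true_theta[:, None], true_b[None, :])).astype(int)

        # Newton-Raphson ability estimates with difficulties treated as known;
        # clipping guards against divergence for perfect or zero scores.
        theta = np.zeros(n_persons)
        for _ in range(25):
            p = rasch_prob(theta[:, None], true_b[None, :])
            score = (data - p).sum(axis=1)      # gradient of the log-likelihood
            info = (p * (1.0 - p)).sum(axis=1)  # Fisher information
            theta = np.clip(theta + score / info, -4.0, 4.0)
        print("first five ability estimates:", np.round(theta[:5], 2))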

  9. Computational Ocean Acoustics

    CERN Document Server

    Jensen, Finn B; Porter, Michael B; Schmidt, Henrik

    2011-01-01

    Since the mid-1970s, the computer has played an increasingly pivotal role in the field of ocean acoustics. Faster and less expensive than actual ocean experiments, and capable of accommodating the full complexity of the acoustic problem, numerical models are now standard research tools in ocean laboratories. The progress made in computational ocean acoustics over the last thirty years is summed up in this authoritative and innovatively illustrated new text. Written by some of the field's pioneers, all Fellows of the Acoustical Society of America, Computational Ocean Acoustics presents the latest numerical techniques for solving the wave equation in heterogeneous fluid–solid media. The authors discuss various computational schemes in detail, emphasizing the importance of theoretical foundations that lead directly to numerical implementations for real ocean environments. To further clarify the presentation, the fundamental propagation features of the techniques are illustrated in color. Computational Ocean A...

  10. Developing the next generation of diverse computer scientists: the need for enhanced, intersectional computing identity theory

    Science.gov (United States)

    Rodriguez, Sarah L.; Lehman, Kathleen

    2017-10-01

    This theoretical paper explores the need for enhanced, intersectional computing identity theory for the purpose of developing a diverse group of computer scientists for the future. Greater theoretical understanding of the identity formation process specifically for computing is needed in order to understand how students come to understand themselves as computer scientists. To ensure that the next generation of computer scientists is diverse, this paper presents a case for examining identity development intersectionally, understanding the ways in which women and underrepresented students may have difficulty identifying as computer scientists and be systematically oppressed in their pursuit of computer science careers. Through a review of the available scholarship, this paper suggests that creating greater theoretical understanding of the computing identity development process will inform the way in which educational stakeholders consider computer science practices and policies.

  11. Review of computer models used for post closure safety assessment of nuclear waste repositories in the FRG

    International Nuclear Information System (INIS)

    Bogorinski, P.; Baltes, B.; Martens, K.H.

    1987-01-01

    In the FRG, disposal of nuclear wastes takes place in deep geologic formations. For the long-term safety assessment of such a repository, groundwater transport provides a release scenario for the radionuclides to the biosphere. GRS reviewed a methodology implemented by the research group of PSE to simulate the migration of radionuclides in the geosphere. The examination included the applicability of the theoretical models, numerical experiments, comparison of results with those of diverse computer codes, as well as experience from international intercomparison studies. The review concluded that the hydrological model may be applied to its full extent unless density effects have to be considered, whereas there are some restrictions on the use of the nuclide transport model.

  12. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  13. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.

  14. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information-theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force-matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  15. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...

  16. A Computer Program for Practical Semivariogram Modeling and Ordinary Kriging: A Case Study of Porosity Distribution in an Oil Field

    Science.gov (United States)

    Mert, Bayram Ali; Dag, Ahmet

    2017-12-01

    In this study, a practical and educational geostatistical program (JeoStat) was first developed, and an example analysis of porosity distribution using oilfield data is then presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers users seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix). These theoretical models can be easily and quickly fitted to experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and cross-validation tests for validation of the fitted theoretical model. All the results obtained in the analysis, as well as graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps. The numerical values of any point in a map can be monitored using the mouse and text boxes. The program is available free of charge to students, researchers, consultants and corporations of any size. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
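
    For orientation, fitting one of the seven theoretical models offered by JeoStat, the spherical variogram, to an experimental semivariogram can be sketched as follows; the lag/semivariance values are invented, and automated least squares stands in for JeoStat's interactive mouse fitting:

        import numpy as np
        from scipy.optimize import curve_fit

        def spherical(h, nugget, sill, a):
            # Spherical variogram: rises from the nugget and levels off at range a.
            g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
            return np.where(h < a, g, sill)

        # Hypothetical experimental semivariogram of porosity: (lag, gamma) pairs.
        lags = np.array([50.0, 100.0, 150.0, 200.0, 250.0, 300.0, 350.0])
        gamma = np.array([1.2, 2.1, 2.9, 3.4, 3.7, 3.8, 3.8])

        (nugget, sill, a), _ = curve_fit(spherical, lags, gamma,
                                         p0=[0.5, 4.0, 250.0], bounds=(0.0, np.inf))
        print(f"nugget={nugget:.2f}, sill={sill:.2f}, range={a:.0f} m")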

  17. Editorial: Modelling and computational challenges in granular materials

    OpenAIRE

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss the current progress and latest advancements in the field of advanced numerical methods and modelling of granular materials. The focus will be on computational methods, improved algorithms and the m...

  18. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  19. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  20. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  1. A Categorisation of Cloud Computing Business Models

    OpenAIRE

    Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David

    2010-01-01

    This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...

  2. Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty

    Science.gov (United States)

    Brown, C.; Lall, U.; Siegfried, T.

    2005-12-01

    Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game-theoretic approach to simulate and evaluate common water allocation paradigms. Game theory may be particularly appropriate for modeling water allocation decisions. First, a game-theoretic approach allows economic analysis in situations where price theory doesn't apply, which is typically the case in water resources, where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of
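
    In the simulation spirit the abstract describes, a toy two-user appropriation game solved by best-response iteration might look like the following; the benefit functions, shortage penalty and supply are entirely hypothetical:

        import numpy as np

        SUPPLY = 10.0  # total water available
        PENALTY = 0.5  # shortage cost per unit of joint over-appropriation

        def payoff(w, a, other):
            # Concave benefit a*w - w^2/2, less a penalty when joint use exceeds supply.
            return a * w - 0.5 * w ** 2 - PENALTY * np.maximum(0.0, w + other - SUPPLY)

        def best_response(a, other):
            grid = np.linspace(0.0, SUPPLY, 1001)
            return grid[np.argmax(payoff(grid, a, other))]

        w1, w2 = 0.0, 0.0
        for _ in range(50):  # iterate to an (approximate) Nash equilibrium
            w1 = best_response(6.0, w2)
            w2 = best_response(4.0, w1)
        print(f"equilibrium appropriations: w1={w1:.2f}, w2={w2:.2f}")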

  3. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    Science.gov (United States)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

    The present paper proposes an advanced approach to fault detection and isolation for Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique, considering quantitative residual deviations from normal conditions to achieve univocal fault isolation.

  4. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    Chaos Modelling with Computers: Unpredictable Behaviour of Deterministic Systems. Balakrishnan Ramasamy, T S K V Iyer. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp 29-39.

  5. A computer simulation model to compute the radiation transfer of mountainous regions

    Science.gov (United States)

    Li, Yuguang; Zhao, Feng; Song, Rui

    2011-11-01

    In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. The model implements Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative processes between sunlight and the objects within the mountainous region are realized using forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model RGM (Radiosity-Graphics combined Model), based on the same scenes and identical spectral parameters, which show good agreement between the two models' results. Using the newly developed computer model, a series of typical mountainous scenes is generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of adjacent slopes are important for deep valleys and particularly affect shadowed pixels, and that the topographic effect needs to be considered in mountainous terrain before accurate inferences can be made from remotely sensed data.
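
    One ingredient of such a model can be sketched compactly: a forward Monte Carlo estimate of the sky-view factor for a point between two ridges, which controls how much diffuse sky radiation reaches a valley floor. The terrain horizon function below is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(2)

        def horizon_angle(azimuth):
            # Elevation of the terrain horizon (rad): two ridges, east and west.
            return np.radians(35.0) * np.abs(np.cos(azimuth))

        n_rays = 200_000
        azimuth = rng.uniform(0.0, 2.0 * np.pi, n_rays)
        zenith = np.arcsin(np.sqrt(rng.random(n_rays)))  # cosine-weighted hemisphere sampling
        elevation = np.pi / 2.0 - zenith

        visible = elevation > horizon_angle(azimuth)      # ray escapes to the sky
        print(f"sky-view factor ~ {visible.mean():.3f}")  # 1.0 on an unobstructed plain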

  6. Applications of computer modeling to fusion research

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modelling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling

  7. How cells engulf: a review of theoretical approaches to phagocytosis

    Science.gov (United States)

    Richards, David M.; Endres, Robert G.

    2017-12-01

    Phagocytosis is a fascinating process whereby a cell surrounds and engulfs particles such as bacteria and dead cells. This is crucial both for single-cell organisms (as a way of acquiring nutrients) and as part of the immune system (to destroy foreign invaders). This whole process is hugely complex and involves multiple coordinated events such as membrane remodelling, receptor motion, cytoskeleton reorganisation and intracellular signalling. Because of this, phagocytosis is an excellent system for theoretical study, benefiting from biophysical approaches combined with mathematical modelling. Here, we review these theoretical approaches and discuss the recent mathematical and computational models, including models based on receptors, models focusing on the forces involved, and models employing energetic considerations. Along the way, we highlight a beautiful connection to the physics of phase transitions, consider the role of stochasticity, and examine links between phagocytosis and other types of endocytosis. We cover the recently discovered multistage nature of phagocytosis, showing that the size of the phagocytic cup grows in distinct stages, with an initial slow stage followed by a much quicker second stage starting around half engulfment. We also address the issue of target shape dependence, which is relevant to both pathogen infection and drug delivery, covering both one-dimensional and two-dimensional results. Throughout, we pay particular attention to recent experimental techniques that continue to inform the theoretical studies and provide a means to test model predictions. Finally, we discuss population models, connections to other biological processes, and how physics and modelling will continue to play a key role in future work in this area.

  8. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    Science.gov (United States)

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
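
    For reference, the two dose-response models named here take simple closed forms; the sketch below evaluates the exponential model and the conventional beta-Poisson approximation with illustrative parameter values, not values estimated in the paper:

        import numpy as np

        def p_exponential(dose, r):
            # Exponential model: each organism independently infects with probability r.
            return 1.0 - np.exp(-r * dose)

        def p_beta_poisson_approx(dose, alpha, beta):
            # Conventional approximation, commonly held to require beta >> 1 and beta >> alpha.
            return 1.0 - (1.0 + dose / beta) ** (-alpha)

        doses = np.logspace(0, 6, 7)  # 1 to 1e6 organisms
        print(np.round(p_exponential(doses, r=1e-4), 4))
        print(np.round(p_beta_poisson_approx(doses, alpha=0.2, beta=40.0), 4))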

  9. Computational tools for experimental determination and theoretical prediction of protein structure

    Energy Technology Data Exchange (ETDEWEB)

    O'Donoghue, S.; Rost, B.

    1995-12-31

    This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. The authors intend to review the state of the art in the experimental determination of protein 3D structure (with a focus on nuclear magnetic resonance), and in the theoretical prediction of protein function and of protein structure in 1D, 2D and 3D from sequence. All the atomic-resolution structures determined so far have been derived from either X-ray crystallography (the majority so far) or Nuclear Magnetic Resonance (NMR) spectroscopy (becoming increasingly more important). The authors briefly describe the physical methods behind both of these techniques; the major computational methods involved will be covered in some detail. They highlight parallels and differences between the methods, and also the current limitations. Special emphasis will be given to techniques which have application to ab initio structure prediction. Large-scale sequencing techniques widen the gap between the number of known protein sequences and that of known protein structures. They describe the scope and principles of methods that contribute successfully to closing that gap. Emphasis will be given to the specification of adequate testing procedures to validate such methods.

  10. Handbook of computational quantum chemistry

    CERN Document Server

    Cook, David B

    2005-01-01

    Quantum chemistry forms the basis of molecular modeling, a tool widely used to obtain important chemical information and visual images of molecular systems. Recent advances in computing have resulted in considerable developments in molecular modeling, and these developments have led to significant achievements in the design and synthesis of drugs and catalysts. This comprehensive text provides upper-level undergraduates and graduate students with an introduction to the implementation of quantum ideas in molecular modeling, exploring practical applications alongside theoretical explanations. Wri...

  11. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
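
    The selection criterion can be illustrated on synthetic data: for each candidate embedding dimension, estimate a one-step predictive density and average its negative log-likelihood over held-out points. The kNN-Gaussian predictive density below is a crude stand-in for the paper's nonparametric estimator, and the noisy AR(2) series is invented so that the minimum should fall near dimension 2:

        import numpy as np

        rng = np.random.default_rng(3)

        n = 2000
        x = np.zeros(n)
        for t in range(2, n):  # synthetic stochastic system: a noisy AR(2) process
            x[t] = 0.8 * x[t - 1] - 0.5 * x[t - 2] + 0.3 * rng.standard_normal()

        def nlpl(x, p, k=20, m=1500):
            # Mean negative log-predictive likelihood at embedding dimension p.
            n = len(x)
            H = np.column_stack([x[i:n - p + i] for i in range(p)])  # delay vectors
            y = x[p:]                                                # one-step targets
            H_tr, y_tr, H_te, y_te = H[:m], y[:m], H[m:], y[m:]
            total = 0.0
            for h, target in zip(H_te, y_te):
                dist = np.linalg.norm(H_tr - h, axis=1)
                nbrs = y_tr[np.argsort(dist)[:k]]  # successors of the nearest histories
                mu, sig = nbrs.mean(), nbrs.std() + 1e-6
                total += 0.5 * np.log(2 * np.pi * sig**2) + (target - mu) ** 2 / (2 * sig**2)
            return total / len(y_te)

        for p in range(1, 6):
            print(p, round(nlpl(x, p), 3))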

  12. Trust models in ubiquitous computing.

    Science.gov (United States)

    Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro

    2008-10-28

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  13. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  14. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  15. Theoretical-empirical model of the steam-water cycle of the power unit

    Directory of Open Access Journals (Sweden)

    Grzegorz Szapajko

    2010-06-01

    Full Text Available The diagnostics of energy conversion systems' operation is realised by collecting, processing, evaluating and analysing the measurement signals. The result of the analysis is the determination of the process state, which requires the use of thermal process models. Construction of an analytical model with auxiliary empirical functions built in brings satisfying results. The paper presents a theoretical-empirical model of the steam-water cycle. The mathematical simulation model worked out contains partial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.

  16. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining computers for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible, with 640 Kb RAM and a hard disk. 5 refs.; 9 figs.

  17. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is needed to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  18. Theoretical tools for B physics

    International Nuclear Information System (INIS)

    Mannel, T.

    2006-01-01

    In this talk I try to give an overview of the theoretical tools used to compute observables in B physics. The main focus is on developments in the 1/m expansion in semileptonic and nonleptonic decays. (author)

  19. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical computer-replacement interval when the computing demand, and the cost and performance of computers, are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented by only a central processing unit (CPU), and all the computing demand is to be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is admitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all the demand. Model 4 is the same as model 3, but excess demand is admitted and may be processed in another center. (1) Computing demand at JAERI, (2) conformity of Grosch's law for recent computers, and (3) replacement costs of computer systems, etc. are also described. (author)
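
    A toy rendition of the optimization (not the paper's models 1-4) conveys the basic trade-off: frequent replacement multiplies fixed costs, while infrequent replacement means buying large machines early at high prices. All parameters are hypothetical, with Grosch's law taken as purchase price proportional to the square root of performance:

        import numpy as np

        DEMAND_GROWTH = 1.25  # computing demand multiplies by this every year
        TECH_DECLINE = 0.75   # yearly decline in the price of a unit of performance
        FIXED_COST = 0.3      # per-replacement overhead (installation, conversion)

        def avg_annual_cost(T, horizon=20):
            # Average yearly cost when the computer is replaced every T years; each
            # machine is sized for the demand expected at the end of its interval.
            total = 0.0
            for start in range(0, horizon, T):
                capacity = DEMAND_GROWTH ** min(start + T, horizon)
                total += np.sqrt(capacity) * TECH_DECLINE ** start + FIXED_COST
            return total / horizon

        costs = {T: round(avg_annual_cost(T), 3) for T in range(1, 11)}
        print(costs)
        print("most economical interval:", min(costs, key=costs.get), "years")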

  20. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade, due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, arriving well ahead of schedule and under budget, have exceeded the dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High-performance computing has become one of the critical enabling technologies that will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of its potential. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high-performance computing community.

  1. Theoretical models of DNA flexibility

    Czech Academy of Sciences Publication Activity Database

    Dršata, Tomáš; Lankaš, Filip

    2013-01-01

    Vol. 3, No. 4 (2013), pp. 355-363, ISSN 1759-0876. Institutional support: RVO:61388963. Keywords: molecular dynamics simulations * base pair level * indirect readout. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 9.041, year: 2013

  2. Modeling soft factors in computer-based wargames

    Science.gov (United States)

    Alexander, Steven M.; Ross, David O.; Vinarskai, Jonathan S.; Farr, Steven D.

    2002-07-01

    Computer-based wargames have seen much improvement in recent years due to rapid increases in computing power. Because these games have been developed for the entertainment industry, most of these advances have centered on the graphics, sound, and user interfaces integrated into the wargames, with less attention paid to their fidelity. However, for a wargame to be useful to the military, it must closely approximate as many of the elements of war as possible. Among the elements that are typically not modeled, or are poorly modeled, in nearly all military computer-based wargames are systemic effects, command and control, intelligence, morale, training, and other human and political factors. These aspects of war, with the possible exception of systemic effects, are individually modeled quite well in many board-based commercial wargames. The work described in this paper focuses on incorporating these elements from the board-based games into a computer-based wargame. This paper also addresses the modeling and simulation of the systemic paralysis of an adversary that is implied by the concept of Effects-Based Operations (EBO). Combining the fidelity of current commercial board wargames with the speed, ease of use, and advanced visualization of the computer can significantly improve the effectiveness of military decision making and education. Once in place, the process of converting board wargame concepts to computer wargames will allow the infusion of soft factors into military training and planning.

  3. A theoretical model of the M87 jet

    International Nuclear Information System (INIS)

    Falle, S.A.E.G.; Wilson, M.J.

    1985-01-01

    This paper describes a theoretical model of the knots in the M87 jet based on the idea that it is a steady fluid jet propagating through a non-uniform atmosphere. It is argued that knots D, E and F can be explained by the jet being underexpanded as it emerges from the central source, while knot A is due to reconfinement of the jet. Very high resolution numerical calculations are used to show that good agreement with the observed positions of the knots can be obtained with reasonable jet parameters and an atmosphere consistent with the X-ray observations. (author)

  4. Theoretical model of the SOS effect

    Energy Technology Data Exchange (ETDEWEB)

    Darznek, S A; Mesyats, G A; Rukin, S N; Tsiranov, S N [Russian Academy of Sciences, Ural Division, Ekaterinburg (Russian Federation). Institute of Electrophysics

    1997-12-31

    Physical principles underlying the operation of semiconductor opening switches (SOS) are highlighted. The SOS effect occurs at a current density of up to 60 kA/cm² in silicon p⁺-p-n-n⁺ structures filled with residual electron-hole plasma. Using a theoretical model developed for plasma dynamic calculations, the mechanism by which current passes through the structure at the stage of high conduction and the processes that take place at the stage of current interruption were analyzed. The dynamics of the processes taking place in the structure was calculated with allowance for both diffusive and drift mechanisms of carrier transport. In addition, two recombination types, viz. recombination via impurities and impact Auger recombination, were included in the model. The effect of the structure on the pumping-circuit current and voltage was also taken into account. The real distribution of the doped impurity in the structure and the avalanche mechanism of carrier multiplication were considered. The results of calculations of a typical SOS are presented. The dynamics of the electron-hole plasma is analyzed. It is shown that the SOS effect represents a qualitatively new mechanism of current interruption in semiconductor structures. (author). 4 figs., 7 refs.
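
    For orientation, the standard one-dimensional drift-diffusion system on which such plasma-dynamics models are built is sketched below, with recombination via impurities (Shockley-Read-Hall) and Auger terms; the authors' exact formulation may differ.

```latex
% Standard 1-D drift-diffusion equations with SRH and Auger recombination
% (illustrative of the model class; the paper's exact formulation may differ)
\begin{align}
  \frac{\partial n}{\partial t} &= \frac{1}{q}\frac{\partial J_n}{\partial x} + G - R,
  & J_n &= q\mu_n n E + q D_n \frac{\partial n}{\partial x},\\
  \frac{\partial p}{\partial t} &= -\frac{1}{q}\frac{\partial J_p}{\partial x} + G - R,
  & J_p &= q\mu_p p E - q D_p \frac{\partial p}{\partial x},\\
  \frac{\partial E}{\partial x} &= \frac{q}{\varepsilon}\left(p - n + N_D^{+} - N_A^{-}\right),
  & R &= R_{\mathrm{SRH}} + \left(C_n n + C_p p\right)\left(np - n_i^2\right).
\end{align}
```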

  5. A theoretical model of speed-dependent steering torque for rolling tyres

    Science.gov (United States)

    Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing

    2016-04-01

    It is well known that tyre steering torque is highly dependent on the tyre rolling speed. In limiting cases, such as a parking manoeuvre, the steering torque approaches its maximum; with increasing tyre speed, the steering torque decreases rapidly. Accurate modelling of this speed-dependent behaviour of the tyre steering torque is key to calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) three-dimensional transient rolling kinematics of the tyre with turn-slip; (2) dynamical force and moment generation; and (3) a mixed Lagrange-Euler method for solving the contact deformation. A nonlinear finite-element code has been developed to implement the proposed approach. The main mechanism behind the speed-dependent steering torque is found to be the turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of tyre steering torque generation, which helps in understanding the speed-dependent tyre steering torque, tyre road feel and EPS calibration.

  6. Lessons Learned From the Development and Parameterization of a Computer Simulation Model to Evaluate Task Modification for Health Care Providers.

    Science.gov (United States)

    Kasaie, Parastu; David Kelton, W; Ancona, Rachel M; Ward, Michael J; Froehle, Craig M; Lyons, Michael S

    2018-02-01

    Computer simulation is a highly advantageous method for understanding and improving health care operations with a wide variety of possible applications. Most computer simulation studies in emergency medicine have sought to improve allocation of resources to meet demand or to assess the impact of hospital and other system policies on emergency department (ED) throughput. These models have enabled essential discoveries that can be used to improve the general structure and functioning of EDs. Theoretically, computer simulation could also be used to examine the impact of adding or modifying specific provider tasks. Doing so involves a number of unique considerations, particularly in the complex environment of acute care settings. In this paper, we describe conceptual advances and lessons learned during the design, parameterization, and validation of a computer simulation model constructed to evaluate changes in ED provider activity. We illustrate these concepts using examples from a study focused on the operational effects of HIV screening implementation in the ED. Presentation of our experience should emphasize the potential for application of computer simulation to study changes in health care provider activity and facilitate the progress of future investigators in this field. © 2017 by the Society for Academic Emergency Medicine.

  7. Theoretical Model of Pricing Behavior on the Polish Wholesale Fuel Market

    Directory of Open Access Journals (Sweden)

    Bejger Sylwester

    2016-12-01

    In this paper we construct a theoretical model of the strategic pricing behavior of players in the Polish wholesale fuel market. The model is consistent with the characteristics of the industry, the wholesale market, and the players. It is based on the standard methodology of repeated games with a built-in adjustment to a focal price, which resembles the Import Parity Pricing (IPP) mechanism. From the equilibrium of the game, we conclude that the focal price policy implies parallel pricing as the strategic behavior on the market.
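
    As a minimal illustration of the mechanism described, the sketch below simulates a repeated pricing game in which firms match a focal price until one deviates, after which all revert to competitive pricing. Parameters and payoffs are hypothetical and do not come from the paper.

```python
# Toy repeated-pricing simulation: firms follow a focal (import-parity style)
# price while everyone cooperated last period; a deviation triggers reversion
# to the competitive price. Purely illustrative; numbers are hypothetical.

def play(periods=10, n_firms=3, focal=100.0, competitive=80.0,
         deviation_period=None):
    cooperating = True
    history = []
    for t in range(periods):
        if cooperating:
            prices = [focal] * n_firms
            if deviation_period == t:
                prices[0] = focal - 1.0   # one firm undercuts the focal price
                cooperating = False        # punishment begins next period
        else:
            prices = [competitive] * n_firms  # reversion (punishment phase)
        history.append(prices)
    return history

for t, prices in enumerate(play(deviation_period=4)):
    print(t, prices)
```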

  8. Sound transmission through lightweight double-leaf partitions: theoretical modelling

    Science.gov (United States)

    Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.

    2005-09-01

    This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.
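
    For orientation, the lowest such resonance of an unstiffened double leaf is the familiar mass-air-mass resonance, a standard result for double-leaf partitions generally (in the paper's smeared model, the stud stiffness acts in parallel with the air stiffness):

```latex
% Mass--air--mass resonance of an unstiffened double-leaf partition
% (standard result, for orientation; m_1, m_2 are the panel surface masses,
% d the cavity depth, \rho_0 and c the density and sound speed of air)
f_0 = \frac{1}{2\pi}\sqrt{\frac{\rho_0 c^2}{d}\left(\frac{1}{m_1} + \frac{1}{m_2}\right)}
```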

  9. Computer models for kinetic equations of magnetically confined plasmas

    International Nuclear Information System (INIS)

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method
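
    As a toy illustration of the collisional ingredient, the sketch below advances a one-dimensional velocity-space Fokker-Planck equation with drag and diffusion by explicit finite differences. It is only a sketch: production codes of the kind described use implicit, multi-dimensional solvers with self-consistent collision coefficients.

```python
import numpy as np

# Explicit finite-difference step for df/dt = d/dv [ A(v) f + D(v) df/dv ],
# a 1-D velocity-space Fokker-Planck equation with drag A and diffusion D.
# Illustrative only; coefficients here are toy choices, not plasma physics.

def fp_step(f, v, A, D, dt):
    dv = v[1] - v[0]
    flux = A * f + D * np.gradient(f, dv)   # drag + diffusion flux
    return f + dt * np.gradient(flux, dv)

v = np.linspace(-5.0, 5.0, 201)
f = np.exp(-((v - 2.0) ** 2))               # shifted "beam" distribution
A, D = v.copy(), np.ones_like(v)            # drag ~ v, constant diffusion
for _ in range(2000):
    f = fp_step(f, v, A, D, dt=1e-4)        # relaxes toward a Maxwellian
print(f.sum() * (v[1] - v[0]))              # density is quasi-conserved
```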

  10. Exploring patient satisfaction predictors in relation to a theoretical model.

    Science.gov (United States)

    Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil

    2013-01-01

    The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors, namely person-related conditions, external objective care conditions and patients' perception of actual care received ("PR"), in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from patients' perspective; Sense of coherence; Big five personality trait; and Emotional stress reaction questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as the dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR. Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistically significantly to the model. Patients rated both quality of care and satisfaction highly. The paper shows that the theoretical model, using an emotion-oriented approach to assess patient satisfaction, can explain 54 per cent of patient satisfaction in a statistically significant manner.
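
    The sequential (hierarchical) regression procedure reported here can be sketched generically: enter predictor blocks one at a time and track the increment in explained variance. The code below uses synthetic data and hypothetical variable names, not the study's data; statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

# Sequential (hierarchical) regression sketch: add predictor blocks stepwise
# and report R^2 and its increment at each step. Synthetic data only.
rng = np.random.default_rng(0)
n = 373
person = rng.normal(size=(n, 3))      # block 1: person-related conditions
external = rng.normal(size=(n, 2))    # block 2: external care conditions
perceived = rng.normal(size=(n, 2))   # block 3: perceived care received (PR)
y = person @ [0.8, 0.5, 0.3] + 0.2 * external[:, 0] + rng.normal(size=n)

r2_prev, X = 0.0, np.empty((n, 0))
for name, block in [("person", person), ("external", external), ("PR", perceived)]:
    X = np.hstack([X, block])
    r2 = sm.OLS(y, sm.add_constant(X)).fit().rsquared
    print(f"step {name}: R2 = {r2:.3f}  delta = {r2 - r2_prev:.3f}")
    r2_prev = r2
```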

  11. [Self-Determination in Medical Rehabilitation - Development of a Conceptual Model for Further Theoretical Discussion].

    Science.gov (United States)

    Senin, Tatjana; Meyer, Thorsten

    2018-01-22

    The aim was to gather theoretical knowledge about self-determination and to develop a conceptual model for medical rehabilitation that serves as a basis for further discussion. We performed a literature search in electronic databases. Various theories and research results were adopted, transferred to the context of medical rehabilitation and integrated into a conceptual model. The conceptual model places self-determination on a continuum, reflecting which forms of self-determination may be present in medical rehabilitation treatment situations. The location on the continuum depends, theoretically, on the manifestation of certain internal and external factors that may influence each other. The model provides a first conceptualization of self-determination focused on medical rehabilitation, which should be further refined and tested empirically. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Prodeto, a computer code for probabilistic fatigue design

    Energy Technology Data Exchange (ETDEWEB)

    Braam, H [ECN-Solar and Wind Energy, Petten (Netherlands); Christensen, C J; Thoegersen, M L [Risoe National Lab., Roskilde (Denmark); Ronold, K O [Det Norske Veritas, Hoevik (Norway)

    1999-03-01

    A computer code for structural reliability analyses of wind turbine rotor blades subjected to fatigue loading is presented. With pre-processors that can transform measured and theoretically predicted load series into load-range distributions by rain-flow counting, and with a family of generic distribution models for parametric representation of these distributions, this computer program can be used to carry through probabilistic fatigue analyses of rotor blades. (au)
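
    A minimal sketch of the rain-flow idea such pre-processors implement is given below: three-point counting of closed load cycles, with residual half-cycles ignored for brevity. It is illustrative only; Prodeto's own algorithm is not reproduced here.

```python
# Simplified three-point rain-flow counter (illustrative sketch).

def turning_points(series):
    """Keep only the local maxima/minima of the load series."""
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising/falling: extend the extremum
        elif x != tp[-1]:
            tp.append(x)
    return tp

def rainflow_ranges(series):
    """Return closed-cycle load ranges by the three-point rule."""
    ranges, stack = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3 and abs(stack[-2] - stack[-3]) <= abs(stack[-1] - stack[-2]):
            ranges.append(abs(stack[-2] - stack[-3]))  # cycle closed
            del stack[-3:-1]    # drop the two inner turning points
    # residual half-cycles left in `stack` are ignored in this sketch
    return ranges

print(rainflow_ranges([0, 5, 1, 4, 2, 6, 0]))   # -> [2, 4, 6]
```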

  13. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  14. Theoretical model of polar cap auroral arcs

    International Nuclear Information System (INIS)

    Kan, J.R.; Burke, W.J. (USAF, Bedford, MA)

    1985-01-01

    A theory of the polar cap auroral arcs is proposed under the assumption that the magnetic field reconnection occurs in the cusp region on tail field lines during northward interplanetary magnetic field (IMF) conditions. Requirements of a convection model during northward IMF are enumerated based on observations and fundamental theoretical considerations. The theta aurora can be expected to occur on the closed field lines convecting sunward in the central polar cap, while the less intense regular polar cap arcs can occur either on closed or open field lines. The dynamo region for the polar cap arcs is required to be on closed field lines convecting tailward in the plasma sheet which is magnetically connected to the sunward convection in the central polar cap. 43 references

  15. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
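
    The paper's fitted model is not reproduced in this record, but the ideal-observer logic can be sketched: simulate population spike counts with and without stimulation and read detection performance off the two response distributions. All parameters below are hypothetical.

```python
import numpy as np

# Ideal-observer sketch: compare population spike-count distributions with
# and without an ICMS pulse train. Rates, counts and durations are toy values.
rng = np.random.default_rng(1)

def population_counts(rate_hz, n_neurons=100, duration_s=0.2, trials=1000):
    """Summed Poisson spike counts across the population, one per trial."""
    return rng.poisson(rate_hz * duration_s, size=(trials, n_neurons)).sum(axis=1)

baseline = population_counts(rate_hz=5.0)     # no stimulation
stimulated = population_counts(rate_hz=6.5)   # ICMS-evoked extra activity

# Ideal observer in a 2AFC task: pick the interval with the larger count.
p_correct = (stimulated > baseline).mean() + 0.5 * (stimulated == baseline).mean()
print(f"predicted detection performance: {p_correct:.2f}")
```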

  16. A response-modeling alternative to surrogate models for support in computational analyses

    International Nuclear Information System (INIS)

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach will permit probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed as $\mathrm{prob}(E) = \int_r I(E(r))\,dG(r)$, where $I$ is the indicator function. Furthermore, one can use $G(r)$ to calculate an induced distribution on a performance measure, $pm$. For prediction problems where the performance measure is a scalar, its distribution $F_{pm}$ is determined by $F_{pm}(z) = \int_r I(pm(r) \le z)\,dG(r)$. We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
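
    Once G(r) is in hand, both integrals above reduce to Monte Carlo averages of indicator functions. The sketch below uses an illustrative Gaussian G and a threshold event, not anything taken from the paper.

```python
import numpy as np

# Monte Carlo evaluation of prob(E) = E_G[ I(E(r)) ] and of the induced CDF
# F_pm(z) = E_G[ I(pm(r) <= z) ]. The Gaussian G and threshold event are
# illustrative placeholders for a fitted response measure.
rng = np.random.default_rng(2)
r = rng.normal(loc=10.0, scale=2.0, size=100_000)   # draws from a fitted G(r)

event = r > 13.0                      # E(r): response exceeds a threshold
print("prob(E) ~", event.mean())      # integral of I(E(r)) dG(r)

pm = 0.5 * r**2                       # scalar performance measure pm(r)
z = 60.0
print("F_pm(z) ~", (pm <= z).mean())  # induced CDF of the performance measure
```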

  17. Computational Biochemistry-Enzyme Mechanisms Explored.

    Science.gov (United States)

    Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias

    2017-01-01

    Understanding enzyme mechanisms is a major task on the way to comprehending how living cells work. Recent advances in biomolecular research provide a huge amount of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of enzymatic processes are often anticipated on the basis of several hints from macroscopic experimental data. Computational biochemistry aims at the creation of a computational model of an enzyme in order to explain the microscopic details of the catalytic process and to reproduce or predict macroscopic experimental findings. The results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. In order to evaluate the mechanism of an enzyme, a structural model is constructed that can be analyzed by several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed by combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models used to simulate enzyme catalysis. We then review ways to characterize enzyme mechanisms both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.

  18. NMR relaxation induced by iron oxide particles: testing theoretical models.

    Science.gov (United States)

    Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L

    2016-04-15

    Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast but also because the fitting of T1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, was measured and compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model) with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, and this constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water.

  19. Theoretical Physics Division progress report

    International Nuclear Information System (INIS)

    1989-01-01

    The research areas covered in this report are solid state and quantum physics, theoretical metallurgy, fuel modelling and reactor materials, statistical physics and the theory of fluids. Attention is drawn to a number of items: (i) the application of theories of aerosol behaviour to the interpretation of conditions in the cover-gas space of a fast reactor; (ii) studies in non-linear dynamics, dynamical instabilities and chaotic behaviour covering for example, fluid behaviour in Taylor-Couette experiments, non-linear behaviour in electronic circuits and reaction-diffusion systems; (iii) the development of finite element computational techniques to describe the periodic behaviour of a system after a Hopf bifurcation and in simulating solidification processes; (iv) safety assessment of disposal concepts for low- and intermediate-level radioactive wastes. (U.K.)

  20. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...