WorldWideScience

Sample records for model structure computation

  1. Workshop on Computational Modelling Techniques in Structural ...

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 22, Issue 6, June 2017, p. 619.

  2. The role of computer modelling in structural integrity assessment

    International Nuclear Information System (INIS)

    Sauve, R.G.

    2002-01-01

    There is little doubt that computer technology has spawned extraordinary advances in the traditional fields of science and engineering, along with the introduction of new disciplines and technologies. In particular, significant developments directly related to modern computer technology that have had a profound impact on the field of structural integrity include: computational methods (probabilistic, parametric, data analysis); the finite element technique; and computer-aided design and engineering. In fact it can be argued that these developments have re-defined and expanded the role of structural integrity assessments by providing comprehensive modelling capabilities to the designers and engineers involved in failure analyses. As computer processing speeds and capacity have increased, so has the role of computer modelling in assessments of component structural integrity. While innovation in these fields has been packaged into various CAE software used by the engineering community, the advantages of simulation have only just begun to be realized. As product development cycles shrink to improve time-to-market, the role of initial testing is being reduced in favour of computer modelling and simulation to assess component life and durability. For ageing structures, the evaluation of remaining life and the impact of degraded structural integrity becomes tractable with state-of-the-art computational methods. Needless to say, for complex structures, computer modelling coupled with testing provides a robust method that can avoid costly and sometimes fatal errors in design. Computer modelling brings together a number of disciplines, including numerical techniques such as the finite element method, fracture mechanics, continuum mechanics, dynamics, heat transfer, structural reliability and probabilistic methods. One of the salient features of the current methods is the ability to handle large, complex steady-state or transient dynamic problems that

  3. De novo structural modeling and computational sequence analysis ...

    African Journals Online (AJOL)

    Jane

    2011-07-25

    Our study was aimed at computational proteomic analysis and 3D structural modeling of this novel bacteriocin protein encoded by the aforementioned gene. Different bioinformatics tools and machine learning techniques were used for protein structural classification. De novo protein modeling ...

  4. Computational modeling of RNA 3D structures and interactions.

    Science.gov (United States)

    Dawson, Wayne K; Bujnicki, Janusz M

    2016-04-01

    RNA molecules have key functions in cellular processes beyond being carriers of protein-coding information. These functions are often dependent on the ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is difficult, which has prompted the development of computational methods for structure prediction from sequence. Recent progress in 3D structure modeling of RNA and emerging approaches for predicting RNA interactions with ions, ligands and proteins have been stimulated by successes in protein 3D structure modeling. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformations, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to obtain the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. For nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to balance accuracy and computational effort by using an implicit-explicit mixed time integration method. (orig.)
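
    The "added mass" idea lends itself to a compact numerical illustration. The sketch below (not from the paper; all matrices are invented toy values) solves the generalized eigenproblem (K - ω²(Ms + Ma))φ = 0 for a 2-DOF system, showing how fluid inertia simply augments the structural mass matrix.

```python
# Illustrative sketch of the "added mass" approach: the wet natural
# frequencies follow from a generalized eigenproblem in which a fluid
# mass matrix M_a is simply added to the structural mass M_s.
import numpy as np
from scipy.linalg import eigh

K = np.array([[2.0e6, -1.0e6],
              [-1.0e6, 1.0e6]])      # structural stiffness [N/m] (toy values)
M_s = np.diag([100.0, 100.0])        # structural mass [kg]
M_a = np.array([[30.0, 5.0],
                [5.0, 30.0]])        # hypothetical added (fluid) mass [kg]

w2, phi = eigh(K, M_s + M_a)         # (K - w^2 (M_s + M_a)) phi = 0
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("wet natural frequencies [Hz]:", freqs_hz)
```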

  6. Algebraic Modeling of Topological and Computational Structures and Applications

    CERN Document Server

    Theodorou, Doros; Stefaneas, Petros; Kauffman, Louis

    2017-01-01

    This interdisciplinary book covers a wide range of subjects, from pure mathematics (knots, braids, homotopy theory, number theory) to more applied mathematics (cryptography, algebraic specification of algorithms, dynamical systems) and concrete applications (modeling of polymers and ionic liquids, video, music and medical imaging). The main mathematical focus throughout the book is on algebraic modeling with particular emphasis on braid groups. The research methods include algebraic modeling using topological structures, such as knots, 3-manifolds, classical homotopy groups, and braid groups. The applications address the simulation of polymer chains and ionic liquids, as well as the modeling of natural phenomena via topological surgery. The treatment of computational structures, including finite fields and cryptography, focuses on the development of novel techniques. These techniques can be applied to the design of algebraic specifications for systems modeling and verification. This book is the outcome of a w...

  7. Using computational models to relate structural and functional brain connectivity

    Czech Academy of Sciences Publication Activity Database

    Hlinka, Jaroslav; Coombes, S.

    2012-01-01

    Vol. 36, No. 2 (2012), pp. 2137-2145 ISSN 0953-816X R&D Projects: GA MŠk 7E08027 EU Projects: European Commission(XE) 200728 - BRAINSYNC Institutional research plan: CEZ:AV0Z10300504 Keywords: brain disease * computational modelling * functional connectivity * graph theory * structural connectivity Subject RIV: FH - Neurology Impact factor: 3.753, year: 2012

  8. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current research covers particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis...... construction. The complex system of music structure is described as an interaction between modules focusing on formal operations that are conceived as general as possible. Each module addresses a core aspect of music analysis and offers some innovative breakthrough compared to the state of the art. In order......, offering hence an implementation of “Time-Span Reduction”. Parallelism, i.e. sequential repetition, considered here as an essential aspect of music analysis, is applied to the search for motives, for mode-related patterns as well as metrical analysis. A new approach for modal analysis is based...

  9. De novo structural modeling and computational sequence analysis ...

    African Journals Online (AJOL)

    Different bioinformatics tools and machine learning techniques were used for protein structural classification. De novo protein modeling was performed using the I-TASSER server. The final model obtained was assessed with PROCHECK and DFIRE2, which confirmed that the final model is reliable. Until complete biochemical ...

  10. Selected Aspects of Computer Modeling of Reinforced Concrete Structures

    Directory of Open Access Journals (Sweden)

    Szczecina M.

    2016-03-01

    The paper presents some important aspects concerning the material constants of concrete and the stages of modeling of reinforced concrete structures. The problems considered include the choice of a proper material model for concrete, the definition of its compressive and tensile behavior, and the selection of values for the dilation angle, fracture energy and relaxation time. Proper values of the material constants are fixed in simple compression and tension tests. The effectiveness and correctness of the applied model are checked on the example of reinforced concrete frame corners under an opening bending moment. Calculations are performed in Abaqus software using the Concrete Damaged Plasticity model of concrete.
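
    As a concrete illustration of the constants the abstract mentions, the snippet below collects commonly quoted starting values for the Abaqus Concrete Damaged Plasticity model in a Python dict. These are generic defaults from the CDP literature, not the values calibrated in the paper.

```python
# Hypothetical illustration only: typical starting values often quoted for
# the Abaqus Concrete Damaged Plasticity (CDP) model. The paper calibrates
# its own constants from simple compression/tension tests; these numbers
# are not taken from it.
cdp_parameters = {
    "dilation_angle_deg": 36.0,   # controls volumetric plastic flow
    "eccentricity": 0.1,          # flow potential eccentricity
    "fb0_over_fc0": 1.16,         # biaxial/uniaxial compressive strength ratio
    "K_c": 0.667,                 # shape of the deviatoric cross-section
    "viscosity_parameter": 1e-4,  # relaxation time; aids convergence
}
```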

  11. Computer technologies in the formation of computed models of monolithic reinforced concrete structures

    Directory of Open Access Journals (Sweden)

    Anatoliy I. Bedov

    2017-12-01

    The areas of application of concrete and reinforcement of higher strength grades in structural elements of a monolithic reinforced concrete frame are considered. Analytic dependencies, criteria and boundary conditions are proposed that numerically describe the relationship between increasing the strength of concrete and reducing the consumption of reinforcing steel for bent and compressed-bent elements. Calculation-analytical models of the deformation state of the floor slabs of a monolithic reinforced concrete multi-storey frame have been developed on the basis of multifactor numerical studies carried out for various values of slab thickness, span, operating load, and concrete and reinforcement class. The design parameters of the slabs that govern their bearing capacity are determined. On the basis of computer technology, the optimum section of a reinforced concrete element is modeled according to the criterion of reduced material consumption and a rational combination of concrete and reinforcement classes.

  12. Computer modeling of magnetic structure for IC-35 cyclotron

    International Nuclear Information System (INIS)

    Alenitskij, Yu.G.; Morozov, N.A.

    1998-01-01

    An extensive series of calculations has been carried out in order to design the magnetic structure of the IC-35 cyclotron for radioisotope production. The calculations were carried out with the 2-D POISCR code. The average magnetic field and its variation were produced with the help of two different calculation models. The parameters of the cyclotron magnetic system are presented.

  13. Finite element models for computing seismic induced soil pressures on deeply embedded nuclear power plant structures

    Energy Technology Data Exchange (ETDEWEB)

    Xu, J.; Costantino, C.; Hofmayer, C.

    2006-06-26

    The paper discusses computations of seismic induced soil pressures using finite element models for deeply embedded and/or buried stiff structures such as those appearing in the conceptual designs of structures for advanced reactors.

  14. Computational modelling of hydrogen embrittlement in welded structures

    Science.gov (United States)

    Barrera, O.; Cocks, A. C. F.

    2013-07-01

    This paper deals with the modelling of combined hydrogen embrittlement phenomena: hydrogen-enhanced local plasticity (HELP) and hydrogen-induced decohesion (HID) in dissimilar welds, through a cohesive zone modelling (CZM) approach. Fractured samples of dissimilar weld interfaces in AISI 8630/IN625 show that cracks propagate in a region called the "featureless" region, located on the nickel side of the weld. This region is characterized by the presence of a distribution of fine carbides. We model the effect of hydrogen on the material toughness as the result of a synergistic interplay of the HELP and HID mechanisms, whereby (i) hydrogen-enhanced dislocation mobility promotes the development of dislocation structures at the carbides, which increases the stress on the particles, while the presence of hydrogen also results in (ii) a reduction in (a) the cohesive strength of the carbide/matrix interface and (b) the local flow stress of the matrix. The decohesion mechanism at the carbide/matrix interface is modelled through a two-dimensional user-defined cohesive element implemented in a FORTRAN subroutine (UEL) in the commercial finite element code ABAQUS, and the effect of hydrogen on the plasticity of the matrix is coded in a UMAT routine. Preliminary analysis on a unit cell representing the matrix/carbide system under plane strain shows that HELP and HID are competing mechanisms. When the combined HELP+HID mechanism operates, microcracks form at the matrix/carbide interface through the decohesion process, followed by localization of plastic flow responsible for the link-up of the microcracks.
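
    The HID ingredient described above can be sketched with a simple traction-separation law in which hydrogen coverage lowers the cohesive strength. This is a minimal stand-in for the authors' UEL, with all parameter values invented for illustration.

```python
# Minimal sketch (not the authors' UEL): a bilinear traction-separation law
# in which hydrogen coverage degrades the cohesive strength of the
# carbide/matrix interface. All parameter values are illustrative.
import numpy as np

def cohesive_traction(delta, sigma_c0=1200.0, delta_c=2e-3, theta_H=0.0, chi=0.5):
    """Normal traction [MPa] for opening delta [mm].

    sigma_c0 : hydrogen-free cohesive strength [MPa] (assumed)
    delta_c  : separation at complete decohesion [mm] (assumed)
    theta_H  : hydrogen surface coverage in [0, 1]
    chi      : fraction of strength lost at full coverage (assumed)
    """
    sigma_c = sigma_c0 * (1.0 - chi * theta_H)   # strength reduced by hydrogen
    delta_0 = 0.1 * delta_c                      # separation at peak traction
    if delta <= delta_0:                         # linear loading branch
        return sigma_c * delta / delta_0
    if delta < delta_c:                          # linear softening branch
        return sigma_c * (delta_c - delta) / (delta_c - delta_0)
    return 0.0                                   # fully decohered

print(cohesive_traction(5e-4, theta_H=0.6))     # traction with 60% coverage
```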

  15. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Background: In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age

  16. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    Science.gov (United States)

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows that similar attack rates are

  17. Computers boost structural technology

    Science.gov (United States)

    Noor, Ahmed K.; Venneri, Samuel L.

    1989-01-01

    Derived from matrix methods of structural analysis and finite element methods developed over the last three decades, computational structures technology (CST) blends computer science, numerical analysis, and approximation theory into structural analysis and synthesis. Recent significant advances in CST include stochastic-based modeling, strategies for performing large-scale structural calculations on new computing systems, and the integration of CST with other disciplinary modules for multidisciplinary analysis and design. New methodologies have been developed at NASA for integrated fluid-thermal structural analysis and integrated aerodynamic-structure-control design. The need for multiple views of data for different modules also led to the development of a number of sophisticated data-base management systems. For CST to play a role in the future development of structures technology and in the multidisciplinary design of future flight vehicles, major advances and computational tools are needed in a number of key areas.

  18. Computational Methods for Protein Structure Prediction and Modeling Volume 2: Structure Prediction

    CERN Document Server

    Xu, Ying; Liang, Jie

    2007-01-01

    Volume 2 of this two-volume sequence focuses on protein structure prediction and includes protein threading, de novo methods, applications to membrane proteins and protein complexes, structure-based drug design, as well as structure prediction as a systems problem. A series of appendices review the biological and chemical basics related to protein structure, computer science for structural informatics, and prerequisite mathematics and statistics.

  19. Connecting Protein Structure to Intermolecular Interactions: A Computer Modeling Laboratory

    Science.gov (United States)

    Abualia, Mohammed; Schroeder, Lianne; Garcia, Megan; Daubenmire, Patrick L.; Wink, Donald J.; Clark, Ginevra A.

    2016-01-01

    An understanding of protein folding relies on a solid foundation of a number of critical chemical concepts, such as molecular structure, intra-/intermolecular interactions, and relating structure to function. Recent reports show that students struggle on all levels to achieve these understandings and use them in meaningful ways. Further, several…

  20. Chemical Structure Identification in Metabolomics: Computational Modeling of Experimental Features

    Directory of Open Access Journals (Sweden)

    Lochana C Menikarachchi

    2013-02-01

    The identification of compounds in complex mixtures remains challenging despite recent advances in analytical techniques. At present, no single method can detect and quantify the vast array of compounds that might be of potential interest in metabolomics studies. High performance liquid chromatography/mass spectrometry (HPLC/MS) is often considered the analytical method of choice for analysis of biofluids. The positive identification of an unknown involves matching at least two orthogonal HPLC/MS measurements (exact mass, retention index, drift time, etc.) against an authentic standard. However, due to the limited availability of authentic standards, an alternative approach involves matching known and measured features of the unknown compound with computationally predicted features for a set of candidate compounds downloaded from a chemical database. Computationally predicted features include retention index, ECOM50 (the energy required to decompose 50% of a selected precursor ion in a collision induced dissociation cell), drift time, whether the unknown compound is biological or synthetic, and a collision induced dissociation (CID) spectrum. Computational predictions are used to filter the initial “bin” of candidate compounds. The final output is a ranked list of candidates that best match the known and measured features. In this mini review, we discuss cheminformatics methods underlying this database search-filter identification approach.
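
    The search-filter-rank pipeline described in the final sentences can be sketched in a few lines. The feature names and tolerances below are illustrative placeholders, not those used in the cited work.

```python
# Schematic sketch of the database search-filter idea: candidates from a
# chemical database are filtered by agreement between measured and predicted
# features, then ranked by total normalized deviation.
def rank_candidates(measured, candidates, tolerances):
    """measured: dict feature -> value; candidates: list of dicts with
    predicted features plus a 'name'; tolerances: dict feature -> allowed
    absolute deviation used in the filtering step."""
    surviving = [
        c for c in candidates
        if all(abs(c[f] - measured[f]) <= tol for f, tol in tolerances.items())
    ]
    def score(c):  # smaller is better
        return sum(abs(c[f] - measured[f]) / tol for f, tol in tolerances.items())
    return sorted(surviving, key=score)

measured = {"retention_index": 812.0, "drift_time": 24.3, "ecom50": 1.9}
candidates = [
    {"name": "candidate A", "retention_index": 815.0, "drift_time": 24.1, "ecom50": 2.0},
    {"name": "candidate B", "retention_index": 760.0, "drift_time": 29.8, "ecom50": 1.1},
]
tolerances = {"retention_index": 15.0, "drift_time": 1.0, "ecom50": 0.4}
print([c["name"] for c in rank_candidates(measured, candidates, tolerances)])
```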

  1. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  2. Computational Methods for Protein Structure Prediction and Modeling Volume 1: Basic Characterization

    CERN Document Server

    Xu, Ying; Liang, Jie

    2007-01-01

    Volume one of this two-volume sequence focuses on the basic characterization of known protein structures as well as structure prediction from protein sequence information. The 11 chapters provide an overview of the field, covering key topics in modeling, force fields, classification, computational methods, and structure prediction. Each chapter is a self-contained review designed to cover (1) definition of the problem and an historical perspective, (2) mathematical or computational formulation of the problem, (3) computational methods and algorithms, (4) performance results, (5) existing software packages, and (6) strengths, pitfalls, challenges, and future research directions.

  3. Structural characterisation of medically relevant protein assemblies by integrating mass spectrometry with computational modelling.

    Science.gov (United States)

    Politis, Argyris; Schmidt, Carla

    2018-03-20

    Structural mass spectrometry with its various techniques is a powerful tool for the structural elucidation of medically relevant protein assemblies. It delivers information on the composition, stoichiometries, interactions and topologies of these assemblies. Most importantly it can deal with heterogeneous mixtures and assemblies which makes it universal among the conventional structural techniques. In this review we summarise recent advances and challenges in structural mass spectrometric techniques. We describe how the combination of the different mass spectrometry-based methods with computational strategies enable structural models at molecular levels of resolution. These models hold significant potential for helping us in characterizing the function of protein assemblies related to human health and disease. In this review we summarise the techniques of structural mass spectrometry often applied when studying protein-ligand complexes. We exemplify these techniques through recent examples from literature that helped in the understanding of medically relevant protein assemblies. We further provide a detailed introduction into various computational approaches that can be integrated with these mass spectrometric techniques. Last but not least we discuss case studies that integrated mass spectrometry and computational modelling approaches and yielded models of medically important protein assembly states such as fibrils and amyloids. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  4. Integrating aerodynamic surface modeling for computational fluid dynamics with computer aided structural analysis, design, and manufacturing

    Science.gov (United States)

    Thorp, Scott A.

    1992-01-01

    This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.

  5. Computational Methods for Coupled Fluid-Structure-Electromagnetic Interaction Models with Applications to Biomechanics

    Directory of Open Access Journals (Sweden)

    Felix Mihai

    2015-01-01

    Multiphysics problems arise naturally in several engineering and medical applications and often require the solution of coupled processes, which is still a challenging problem in computational sciences and engineering. Some examples include blood flow through an arterial wall and magnetic targeted drug delivery systems. For these, geometric changes may lead to a transient phase in which the structure, flow field, and electromagnetic field interact in a highly nonlinear fashion. In this paper, we consider the computational modeling and simulation of a biomedical application concerning the fluid-structure-electromagnetic interaction in the magnetic targeted drug delivery process. Our study indicates that the strong magnetic fields, which aid in targeted drug delivery, can impact not only fluid (blood) circulation but also the displacement of arterial walls. A major contribution of this paper is modeling the interactions between these three components, which previously received little to no attention in the scientific and engineering community.

  6. Computationally Efficient Modelling of Dynamic Soil-Structure Interaction of Offshore Wind Turbines on Gravity Footings

    DEFF Research Database (Denmark)

    Damgaard, Mads; Andersen, Lars Vabbersgaard; Ibsen, Lars Bo

    2014-01-01

    The formulation and quality of a computationally efficient model of offshore wind turbine surface foundations are examined. The aim is to establish a model, workable in the frequency and time domain, that can be applied in aeroelastic codes for fast and reliable evaluation of the dynamic structural...... to waves propagating in the subsoil–even for soil stratifications with low cut-in frequencies. In this regard, utilising discrete second-order models for the physical interpretation of a rational filter puts special demands on the Newmark β-scheme, where the time integration in most cases only provides
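
    The Newmark β-scheme mentioned at the end of the abstract is standard enough to sketch. Below is the unconditionally stable average-acceleration variant (β = 1/4, γ = 1/2) for a single-degree-of-freedom system; the parameter values are illustrative, not those of the wind-turbine model.

```python
# Newmark-beta time integration for m*a + c*v + k*u = f(t), in the standard
# effective-stiffness form (see e.g. Chopra). Illustrative parameters only.
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5):
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (f[0] - c * v[0] - k * u[0]) / m          # initial acceleration
    k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(n - 1):
        rhs = (f[i + 1]
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (0.5 / beta - 1.0) * a[i])
               + c * (gamma * u[i] / (beta * dt)
                      + (gamma / beta - 1.0) * v[i]
                      + dt * (gamma / (2 * beta) - 1.0) * a[i]))
        u[i + 1] = rhs / k_eff                       # displacement update
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

t = np.arange(0.0, 5.0, 0.01)
u, v, a = newmark_sdof(m=1.0e5, c=2.0e4, k=4.0e6,
                       f=1.0e4 * np.sin(2 * np.pi * t), dt=0.01)
```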

  7. Computationally-optimized bone mechanical modeling from high-resolution structural images.

    Directory of Open Access Journals (Sweden)

    Jeremy F Magland

    Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which are dependent on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on a machine with dual quad-core Xeon 3.16 GHz CPUs and 40 GB of RAM. A model of the distal tibia derived from 3D in-vivo MR images of a patient, comprising 200,000 elements, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of the human proximal femur, a voxel array of 90 million elements, in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.
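
    Two of the ideas above, pre-iterating on a coarser grid and then refining the solution iteratively, can be mimicked on a toy 1-D Poisson system standing in for the sparse μFE stiffness matrix. This is a schematic analogue, not the authors' solver.

```python
# Toy analogue of coarse-grid pre-iteration: solve a small coarse system,
# prolong the result onto the fine grid, and use it as the initial guess
# for conjugate gradients on the fine system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spsolve

def poisson_1d(n):
    # standard [-1, 2, -1] stencil; stands in for a sparse FE stiffness matrix
    return sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

n_fine, n_coarse = 1023, 511
A_fine, A_coarse = poisson_1d(n_fine), poisson_1d(n_coarse)
b_fine = np.ones(n_fine)

# coarse solve; rhs scaled by (h_coarse / h_fine)^2 = 4 for the unscaled stencil
x_coarse = spsolve(A_coarse, 4.0 * np.ones(n_coarse))
x0 = np.interp(np.linspace(0, 1, n_fine),
               np.linspace(0, 1, n_coarse), x_coarse)   # linear prolongation

x, info = cg(A_fine, b_fine, x0=x0)   # fine-grid refinement
print("CG converged:", info == 0)
```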

  8. A combined computational and structural model of the full-length human prolactin receptor

    Science.gov (United States)

    Bugge, Katrine; Papaleo, Elena; Haxholm, Gitte W.; Hopper, Jonathan T. S.; Robinson, Carol V.; Olsen, Johan G.; Lindorff-Larsen, Kresten; Kragelund, Birthe B.

    2016-05-01

    The prolactin receptor is an archetype member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represents an intriguing diversity, providing an exceptionally challenging target for structural biology. Here, we access the molecular architecture of the monomeric human prolactin receptor by combining experimental and computational efforts. We solve the NMR structure of its transmembrane domain in micelles and collect structural data on overlapping fragments of the receptor with small-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg.

  9. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...
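
    One of the listed techniques, particle-swarm optimization, is easy to sketch for model updating: tune two spring stiffnesses of a 2-DOF model until its natural frequencies match "measured" targets. The model, targets, and PSO constants below are all illustrative.

```python
# Bare-bones particle-swarm optimization for FEM updating on a toy 2-DOF
# spring-mass chain (unit masses): find k1, k2 whose natural frequencies
# match assumed "measured" values.
import numpy as np

rng = np.random.default_rng(1)
f_meas = np.array([1.0, 2.6])                     # target frequencies [Hz]

def freqs(k):
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])     # stiffness of the chain
    return np.sqrt(np.abs(np.linalg.eigvalsh(K))) / (2 * np.pi)

def cost(k):
    return np.sum((freqs(k) - f_meas) ** 2)

n_p, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(10.0, 500.0, size=(n_p, 2))       # particle positions
v = np.zeros_like(x)
pbest = x.copy()
pcost = np.array([cost(p) for p in x])
g = pbest[np.argmin(pcost)].copy()                # global best
for _ in range(iters):
    r1, r2 = rng.random((n_p, 1)), rng.random((n_p, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, 1.0, 1000.0)
    c_now = np.array([cost(p) for p in x])
    better = c_now < pcost
    pbest[better], pcost[better] = x[better], c_now[better]
    g = pbest[np.argmin(pcost)].copy()
print("updated stiffnesses:", g, "frequencies [Hz]:", freqs(g))
```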

  10. Photonic band structure computations.

    Science.gov (United States)

    Hermann, D; Frank, M; Busch, K; Wölfle, P

    2001-01-29

    We introduce a novel algorithm for band structure computations based on multigrid methods. In addition, we demonstrate how the results of these band structure calculations may be used to compute group velocities and effective photon masses. The results are of direct relevance to studies of pulse propagation in such materials.

  11. Structure and Thermodynamics of Carbon Dioxide Sorption in Silica Pores from Experiments and Computer Models

    Science.gov (United States)

    Vlcek, L.; Rother, G.; Chialvo, A.; Cole, D. R.

    2011-12-01

    Injection of CO2 into geologic formations has been proposed as a key element in reducing the impact of greenhouse gas emissions. Quantitative understanding of CO2 adsorption in porous mineral environments at thermodynamic conditions relevant to proposed sequestration sites is thus a prerequisite for the assessment of their viability. In this study we use a combination of neutron scattering, adsorption experiments, and computer modeling to investigate the thermodynamics of near-critical carbon dioxide in the pores of SiO2 aerogel, which serves as a model of a high-porosity reservoir rock. Small angle neutron scattering (SANS) experiments provide input for the optimization of the computer model of the aerogel matrix, and also serve as a sensitive probe of local density changes of confined CO2 as a function of external pressure. Additional details of the aerogel basic building blocks and SiO2 surface are derived from TEM images. An independent source of global adsorption data is obtained from gravimetric experiments. The structural and thermodynamic aspects of CO2 sorption are linked using computer simulations, which include the application of the optimized diffusion limited cluster-cluster aggregation (DLCA) algorithm, classical density functional theory (DFT) modeling of large-scale CO2 density profiles, and molecular dynamics simulations of the details of interactions between CO2 molecules and the amorphous silica surfaces. This integrated approach allows us to span scales ranging from 1 Å to 1 μm, as well as to infer the detailed structure of the silica threads forming the framework of the silica matrix.

  12. Discrete computational structures

    CERN Document Server

    Korfhage, Robert R

    1974-01-01

    Discrete Computational Structures describes discrete mathematical concepts that are important to computing, covering the necessary mathematical fundamentals, computer representation of sets, graph theory, storage minimization, and bandwidth. The book also explains the conceptual framework (Gorn trees, searching, subroutines) and directed graphs (flowcharts, critical paths, information networks). The discussion of algebra concentrates on semigroups, groups, lattices, and propositional calculus, including a new tabular method of Boolean function minimization. The text emphasizes...

  13. Computational modeling of RNA 3D structures, with the aid of experimental restraints

    Science.gov (United States)

    Magnus, Marcin; Matelska, Dorota; Łach, Grzegorz; Chojnowski, Grzegorz; Boniecki, Michal J; Purta, Elzbieta; Dawson, Wayne; Dunin-Horkawicz, Stanislaw; Bujnicki, Janusz M

    2014-01-01

    In addition to mRNAs whose primary function is transmission of genetic information from DNA to proteins, numerous other classes of RNA molecules exist, which are involved in a variety of functions, such as catalyzing biochemical reactions or performing regulatory roles. In analogy to proteins, the function of RNAs depends on their structure and dynamics, which are largely determined by the ribonucleotide sequence. Experimental determination of high-resolution RNA structures is both laborious and difficult, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that simulate either the physical process of RNA structure formation (“Greek science” approach) or utilize information derived from known structures of other RNA molecules (“Babylonian science” approach). All computational methods suffer from various limitations that make them generally unreliable for structure prediction of long RNA sequences. However, in many cases, the limitations of computational and experimental methods can be overcome by combining these two complementary approaches with each other. In this work, we review computational approaches for RNA structure prediction, with emphasis on implementations (particular programs) that can utilize restraints derived from experimental analyses. We also list experimental approaches, whose results can be relatively easily used by computational methods. Finally, we describe case studies where computational and experimental analyses were successfully combined to determine RNA structures that would remain out of reach for each of these approaches applied separately. PMID:24785264

  14. Methods, Computational Platform, Verification, and Application of Earthquake-Soil-Structure-Interaction Modeling and Simulation

    Science.gov (United States)

    Tafazzoli, Nima

    Seismic response of soil-structure systems has attracted significant attention for a long time. This is quite understandable given the size and complexity of soil-structure systems. Three important aspects of ESSI modeling are the consistent tracking of input seismic energy and of the various energy dissipation mechanisms within the system, the numerical techniques used to simulate the dynamics of ESSI, and the influence of uncertainty on ESSI simulations. This dissertation is a contribution to the development of one such tool, called the ESSI Simulator, and to an extensive verification and validation suite for it. Verification and validation are important for high-fidelity numerical predictions of the behavior of complex systems. The simulator uses the finite element method as a numerical tool to obtain solutions for a large class of engineering problems such as liquefaction, earthquake-soil-structure interaction, site effects, piles, pile groups, probabilistic plasticity, stochastic elastic-plastic FEM, and detailed large-scale parallel models. The response of a full three-dimensional soil-structure-interaction simulation of a complex structure is evaluated under 3D wave propagation. The Domain Reduction Method is used to apply the forces in a two-step procedure for dynamic analysis, with the goal of reducing the large computational domain. The damping of waves at the boundary of the finite element models is studied using different damping patterns, applied in the layer of elements outside the Domain Reduction Method zone in order to absorb the residual waves emanating from the boundary due to structural excitation. An extensive parametric study of the dynamic soil-structure interaction of a complex system is performed, and results for different cases of soil strength and foundation embedment are compared. A set of constitutive models that are highly efficient in terms of computational time is developed and implemented in the ESSI Simulator.

  15. Mathematical structures for computer graphics

    CERN Document Server

    Janke, Steven J

    2014-01-01

    A comprehensive exploration of the mathematics behind the modeling and rendering of computer graphics scenes Mathematical Structures for Computer Graphics presents an accessible and intuitive approach to the mathematical ideas and techniques necessary for two- and three-dimensional computer graphics. Focusing on the significant mathematical results, the book establishes key algorithms used to build complex graphics scenes. Written for readers with various levels of mathematical background, the book develops a solid foundation for graphics techniques and fills in relevant grap

  16. Solar structure without computers

    International Nuclear Information System (INIS)

    Clayton, D.D.

    1986-01-01

    We derive succinctly the equations of solar structure. We first present models of objects in hydrostatic equilibrium that fail as models of the sun, in order to illustrate important physical requirements. Then, by arguing physically that the pressure gradient can be matched to the simple function dP/dr = -kr·exp[-(r/a)²], we derive a complete analytic representation of the solar interior in terms of a one-parameter family of models. Two different conditions are then used to select the appropriate value of the parameter specifying the best model within the family: (1) the solar luminosity is equated to the thermonuclear power generated near the center, and/or (2) the solar luminosity is equated to the radiative diffusion of energy from a central region. The two methods of selecting the parameter agree to within a few percent. The central conditions of the sun are well calculated by these analytic formulas, all without the aid of a computer. This is an original treatment, yielding much the best description of the solar center to be found by methods of differential and integral calculus, rendering it an excellent laboratory for applied calculus.
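
    Carrying out the integration that the abstract leaves implicit (a step not shown in the record) gives the pressure profile in closed form:

```latex
\frac{dP}{dr} = -k\,r\,e^{-(r/a)^{2}}
\quad\Longrightarrow\quad
P(r) = P_c - k\int_0^{r} r'\,e^{-(r'/a)^{2}}\,dr'
     = P_c - \frac{k a^{2}}{2}\left(1 - e^{-(r/a)^{2}}\right).
```

    Requiring the pressure to vanish far outside the core (r ≫ a) then fixes the central pressure at P_c ≈ ka²/2, which is one way to see how a one-parameter family of models arises.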

  17. Computational models for structure-hydrophobicity relationships of 4-carboxyl-2,6-dinitrophenyl azo hydroxynaphthalenes.

    Science.gov (United States)

    Idowu, Olakunle S; Adegoke, Olajire A; Idowu, Abiola; Olaniyi, Ajibola A

    2007-01-01

    Some phenyl azo hydroxynaphthalene dyes (e.g., sunset yellow) are certified as approved colorants for food, cosmetics, and drug formulations. The hydrophobicity of 4 newly synthesized azo dyes of the phenyl azo hydroxynaphthalene class was investigated, as a training set, with the goal of developing models for quantitative structure-property relationships (QSPR). Retention behavior of the molecules in reversed-phase thin-layer chromatography (RPTLC) was investigated using liquid paraffin-coated silica gel as the stationary phase. Mobile phases consisted of aqueous mixtures of methanol, acetone, and dimethylformamide (DMF). The basic hydrophobicity parameter (Rmw), specific hydrophobic surface area (S), and isocratic chromatographic hydrophobicity index (phi0) were computed from the chromatographic data. The hydrophobicity index (Rm) decreased linearly with increasing concentration of organic modifiers. Extrapolated Rmw values obtained by using DMF and acetone differ significantly from the value obtained by using methanol as organic modifier (P < 0.05). The derived parameters characterize the hydrophobicity of the dyes and may also play useful roles in computer-assisted molecular discovery of nontoxic azo dyes.
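
    The extrapolation described above amounts to a straight-line fit. The sketch below (with invented data points) recovers Rmw as the intercept of Rm = Rmw + S·C and the isocratic index phi0 = -Rmw/S, the modifier concentration at which Rm vanishes.

```python
# Rm decreases linearly with organic-modifier concentration C, so a linear
# fit Rm = Rmw + S*C yields the basic hydrophobicity Rmw (intercept), the
# slope S, and phi0 = -Rmw/S. Data points are invented for illustration.
import numpy as np

C  = np.array([40.0, 50.0, 60.0, 70.0])     # % organic modifier
Rm = np.array([1.10, 0.78, 0.43, 0.11])     # measured Rm values (made up)

S, Rmw = np.polyfit(C, Rm, 1)               # slope and intercept
phi0 = -Rmw / S                             # concentration where Rm = 0
print(f"Rmw = {Rmw:.2f}, S = {S:.4f}, phi0 = {phi0:.1f} %")
```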

  18. Computing the influences of different Intraocular Pressures on the human eye components using computational fluid-structure interaction model.

    Science.gov (United States)

    Karimi, Alireza; Razaghi, Reza; Navidbakhsh, Mahdi; Sera, Toshihiro; Kudo, Susumu

    2017-01-01

    Intraocular Pressure (IOP) is defined as the pressure of the aqueous humour in the eye. The normal range of IOP is reported to be 10-20 mmHg, with an average of 15.50 mmHg. Keratoconus is a non-inflammatory eye disorder in which a weakened cornea is unable to preserve its normal shape against the IOP in the eye. Consequently, the cornea bulges outward and assumes a conical shape, with distorted vision following. In addition, it is known that alterations in the structure and composition of the lens and cornea induce a change in the shape of the eyeball as well as in the mechanical and optical properties of the eye. Understanding the precise alteration of the stresses and deformations of the eye components under different IOPs could help elucidate etiology and pathogenesis and guide the development of treatments, not only for keratoconus but also for other diseases of the eye. In this study, the stresses and deformations of the human eye components were quantified at three different IOPs (10, 20, and 30 mmHg) using a Three-Dimensional (3D) computational Fluid-Structure Interaction (FSI) model of the human eye. The results revealed the highest von Mises stress, 245 kPa, in the bulged region of the cornea at an IOP of 30 mmHg. The lens showed a von Mises stress of 19.38 kPa at the same IOP. In addition, increasing the IOP from 10 to 30 mmHg increased the radius of curvature of the cornea and lens accordingly. In contrast, the sclera showed its highest stress at an IOP of 10 mmHg, due to an overpressure phenomenon. Variation of the IOP had little influence on the stress and the resultant displacement of the optic nerve. These results can be used to understand the stresses and deformations in the human eye components under different IOPs, as well as to clarify the significant role of the IOP in the radius of curvature of the cornea and the lens.

  19. Absorbed dose evaluation based on a computational voxel model incorporating distinct cerebral structures

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Samia de Freitas; Trindade, Bruno; Campos, Tarcisio P.R. [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)]. E-mail: samiabrandao@gmail.com; bmtrindade@yahoo.com; campos@nuclear.ufmg.br

    2007-07-01

    Brain tumors are quite difficult to treat due to the collateral radiation damage produced in patients. Despite improvements in the therapeutic protocols for this kind of tumor, involving surgery and radiotherapy, the failure rate is still extremely high. This occurs because tumors often cannot be totally removed by surgery, since doing so may produce deficits in cerebral function. Radiotherapy is applied after the surgery, and both are palliative treatments. During radiotherapy the brain does not absorb the radiation dose in a homogeneous way, because of the varying densities and chemical compositions of the tissues involved. To better evaluate the harmful effects of radiotherapy, an elaborate cerebral voxel model was developed for use in computational simulations of brain tumor irradiation protocols. This paper presents some structures and functions of the central nervous system and a detailed cerebral voxel model, created in the SISCODES program, comprising the meninges, cortex, gray matter, white matter, corpus callosum, limbic system, ventricles, hypophysis, cerebellum, brain stem and spinal cord. The irradiation protocol simulation was run in the MCNP5 code. The model was irradiated with a photon beam whose spectrum simulates a 6 MV linear accelerator. The dosimetric results were exported to SISCODES, which generated the isodose curves for the protocol. The percentage isodose curves in the brain are presented in this paper. (author)

  20. A numerical framework for computing steady states of structured population models and their stability.

    Science.gov (United States)

    Mirzaev, Inom; Bortz, David M

    2017-08-01

    Structured population models are a class of general evolution equations which are widely used in the study of biological systems. Many theoretical methods are available for establishing existence and stability of steady states of general evolution equations. However, except for very special cases, finding an analytical form of stationary solutions for evolution equations is a challenging task. In the present paper, we develop a numerical framework for computing approximations to stationary solutions of general evolution equations, which can also be used to produce approximate existence and stability regions for steady states. In particular, we use the Trotter-Kato Theorem to approximate the infinitesimal generator of an evolution equation on a finite dimensional space, which in turn reduces the evolution equation into a system of ordinary differential equations. Consequently, we approximate and study the asymptotic behavior of stationary solutions. We illustrate the convergence of our numerical framework by applying it to a linear Sinko-Streifer structured population model for which the exact form of the steady state is known. To further illustrate the utility of our approach, we apply our framework to a nonlinear population balance equation, which is an extension of the well-known Smoluchowski coagulation-fragmentation model to biological populations. We also demonstrate that our numerical framework can be used to gain insight about the theoretical stability of the stationary solutions of the evolution equations. Furthermore, the open source Python program that we have developed for our numerical simulations is freely available from our GitHub repository (github.com/MathBioCU).
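
    The reduction the authors describe, approximating the infinitesimal generator on a finite-dimensional space, can be illustrated on a linear size-structured model with upwind differences. This schematic is in the spirit of the framework, not its actual code, and all coefficients are assumed constants.

```python
# Finite-dimensional approximation of the generator of a linear
# size-structured model u_t + (g u)_x = -mu*u with a renewal boundary
# g*u(0,t) = integral of beta*u dx, so that u' = A u. Stationary profiles
# solve A u = 0; stability follows from the spectrum of A.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
g, mu, beta = 0.1, 0.5, 1.2            # growth, mortality, fecundity (assumed)

A = np.zeros((n, n))
for i in range(1, n):                  # upwind transport plus mortality
    A[i, i] = -g / dx - mu
    A[i, i - 1] = g / dx
A[0, :] = beta                         # births feed the smallest size class
A[0, 0] += -g / dx - mu

lam, V = np.linalg.eig(A)
k = int(np.argmax(lam.real))
print("dominant growth rate:", lam[k].real)   # ~0 at a stationary profile
profile = np.abs(V[:, k].real)                # (quasi-)stationary size profile
```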

  1. Computational Modeling of Bloch Surface Waves in One-Dimensional Periodic and Aperiodic Multilayer Structures

    Science.gov (United States)

    Koju, Vijay

    Photonic crystals and their use in exciting Bloch surface waves have received immense attention over the past few decades. This interest is mainly due to their applications in bio-sensing, wave-guiding, and other optical phenomena such as surface field enhanced Raman spectroscopy. Improvements in numerical modeling techniques, state-of-the-art computing resources, and advances in fabrication techniques have also contributed to growing interest in this field. The ability to model photonic crystals computationally has benefited both the theoretical and experimental communities. It helps theoretical physicists solve complex problems which cannot be solved analytically and acquire useful insights that cannot be obtained otherwise. Experimentalists, on the other hand, can test different variants of their devices by changing device parameters to optimize performance before fabrication. In this dissertation, we develop two commonly used numerical techniques, namely the transfer matrix method and rigorous coupled wave analysis, in C++ and MATLAB, and use two additional software packages, one open-source and another commercial, to model one-dimensional photonic crystals. Different variants of one-dimensional multilayered structures, such as perfectly periodic dielectric multilayers, quasicrystals, and aperiodic multilayers, are modeled, along with one-dimensional photonic crystals with gratings on the top layer. Applications of Bloch surface waves, along with new and novel aperiodic dielectric multilayer structures that support Bloch surface waves, are explored in this dissertation. We demonstrate a slow light configuration that makes use of Bloch Surface Waves as an intermediate excitation in a double-prism tunneling configuration. This method is simple compared to the more usual techniques for slowing light using the phenomenon of electromagnetically induced transparency in atomic gases or doped ionic crystals operated at temperatures below 4 K. Using a semi
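
    The transfer matrix method, the first of the two techniques named above, is compact enough to sketch for TE polarization at normal incidence using the standard characteristic-matrix formalism. The quarter-wave stack below is an illustrative structure, not one from the dissertation.

```python
# Transfer (characteristic) matrix method for a 1-D multilayer at normal
# incidence (Born & Wolf form). Layer data are illustrative.
import numpy as np

def tmm_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.52):
    """Intensity reflectance of a stack; n_layers/d_layers are per-layer
    refractive indices and thicknesses, wavelength in the same units."""
    k0 = 2 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d                        # phase thickness of layer
        m = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ m
    num = (M[0, 0] + M[0, 1] * n_out) * n_in - (M[1, 0] + M[1, 1] * n_out)
    den = (M[0, 0] + M[0, 1] * n_out) * n_in + (M[1, 0] + M[1, 1] * n_out)
    return abs(num / den) ** 2                    # |r|^2

# quarter-wave Bragg mirror at 600 nm: high reflectance expected
nH, nL, lam0 = 2.3, 1.45, 600.0
stack_n = [nH, nL] * 8
stack_d = [lam0 / (4 * nH), lam0 / (4 * nL)] * 8
print(tmm_reflectance(stack_n, stack_d, wavelength=600.0))
```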

  2. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo Lee

    2013-12-01

    We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
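
    The HMM-based modeling described above can be sketched with a plain forward-algorithm scorer: one model for high-trust interactions and one for low-trust interactions score an observed cue sequence, and the higher likelihood wins. All probabilities below are invented for illustration.

```python
# Two hidden Markov models score a sequence of discretized nonverbal cues;
# classification picks the model with the higher log-likelihood.
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm. obs: observation indices; pi: initial
    state probs (S,); A: transitions (S,S); B: emissions (S,O)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

# 2 hidden states, 3 cue symbols (e.g. lean-back, face-touch, hand-cross)
pi = np.array([0.6, 0.4])
A_hi = np.array([[0.8, 0.2], [0.3, 0.7]])
B_hi = np.array([[0.6, 0.3, 0.1], [0.2, 0.4, 0.4]])
A_lo = np.array([[0.5, 0.5], [0.5, 0.5]])
B_lo = np.array([[0.1, 0.3, 0.6], [0.3, 0.3, 0.4]])

cues = [2, 2, 1, 0, 2]
print("predict high trust:",
      log_likelihood(cues, pi, A_hi, B_hi) > log_likelihood(cues, pi, A_lo, B_lo))
```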

  3. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  4. Computation of External Quality Factors for RF Structures by Means of Model Order Reduction and a Perturbation Approach

    CERN Document Server

    Flisgen, Thomas; van Rienen, Ursula

    2016-01-01

    External quality factors are significant quantities for describing losses via waveguide ports in radio frequency resonators. The current contribution presents a novel approach to determining external quality factors by means of a two-step procedure: first, a state-space model for the lossless radio frequency structure is generated and its model order is reduced. Subsequently, a perturbation method is applied to the reduced model so that external losses are accounted for. The advantage of this approach results from the fact that the challenges in dealing with lossy systems are shifted to the reduced-order model. This significantly saves computational costs. The present paper provides a short overview of existing methods to compute external quality factors. Then, the novel approach is introduced and validated in terms of accuracy and computational time by comparison with commercial software.
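
    Whatever the reduction and perturbation details, the quantity ultimately extracted is simple: once port losses are included, each resonance of the reduced model x' = Ax appears as a complex pole s = -σ + iω with external quality factor Q_ext = ω/(2σ). The toy two-mode matrix below is illustrative only.

```python
# External Q from the complex poles of a (here, toy) reduced-order model:
# for a pole s = -sigma + i*omega, Q_ext = omega / (2*sigma).
import numpy as np

A = np.array([[-1.0e4,  2 * np.pi * 1.3e9, 0.0, 0.0],
              [-2 * np.pi * 1.3e9, -1.0e4, 0.0, 0.0],
              [0.0, 0.0, -5.0e4,  2 * np.pi * 1.5e9],
              [0.0, 0.0, -2 * np.pi * 1.5e9, -5.0e4]])

poles = np.linalg.eigvals(A)
for s in poles[poles.imag > 0]:          # one pole per conjugate pair
    print(f"f = {s.imag / (2 * np.pi):.3e} Hz, "
          f"Q_ext = {s.imag / (-2 * s.real):.3e}")
```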

  5. Collision of the glass shards with the eye: A computational fluid-structure interaction model.

    Science.gov (United States)

    Karimi, Alireza; Razaghi, Reza; Biglari, Hasan; Sera, Toshihiro; Kudo, Susumu

    2017-12-27

    Most blunt trauma injuries have been reported to be related to automobile crashes, sporting activities, and military operations. Glass shards, which can be produced in car accidents, earthquakes, gunshots, etc., might collide with the eye, trigger substantial scarring and, consequently, permanently affect vision. The complications of such a collision and the ensuing injuries to each component of the eye are difficult to diagnose. The objective of this study was to employ a Three-Dimensional (3D) computational Fluid-Structure Interaction (FSI) model of the human eye to assess the results of glass shards colliding with the eye. To do this, a rigid steel-based object hit a Smoothed-Particle Hydrodynamics (SPH) glass wall at velocities of 100, 150, and 200 m/s and, subsequently, the resultant glass shards moved toward the eye. The injury was then quantified in terms of stresses and strains. The results revealed the highest stress in the cornea, while the lowest was observed in the vitreous body. It was also found that increasing the speed of the glass shards amplifies the stress in the components located in the central anterior zone of the eye, such as the cornea, aqueous body, and iris. However, for components located in the peripheral/posterior region of the eye, especially the optic nerve, increasing the velocity reduced the stresses, and the optic nerve was hardly damaged. These findings have implications not only for understanding the stresses/strains in the eye components at three different velocities, but also for providing preliminary information to help ophthalmologists diagnose injuries caused by glass shards (small-object impacts). Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Potts model based on a Markov process computation solves the community structure problem effectively.

    Science.gov (United States)

    Li, Hui-Jia; Wang, Yong; Wu, Ling-Yun; Zhang, Junhua; Zhang, Xiang-Sun

    2012-07-01

    The Potts model is a powerful tool to uncover community structure in complex networks. Here, we propose a framework to reveal the optimal number of communities and the stability of network structure by quantitatively analyzing the dynamics of the Potts model. Specifically, we model the community structure detection Potts procedure by a Markov process, which has a clear mathematical explanation. Then we show that the locally uniform behavior of spin values across multiple timescales in the representation of the Markov variables can naturally reveal the network's hierarchical community structure. In addition, critical topological information regarding multivariate spin configurations can also be inferred from the spectral signatures of the Markov process. Finally, an algorithm is developed to determine fuzzy communities based on the optimal number of communities and the stability across multiple timescales. The effectiveness and efficiency of our algorithm are theoretically analyzed as well as experimentally validated.
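
    The sketch below illustrates only the underlying Potts-model step — greedy minimization of a Reichardt-Bornholdt-style Potts Hamiltonian on a small graph. The paper's Markov-process analysis of the spin dynamics is not reproduced, and the parameters (number of spin states q, resolution gamma, sweep count) are assumptions.

```python
# Toy sketch: community detection by greedily minimizing a Potts-type Hamiltonian
# H = -sum_ij (A_ij - gamma * k_i k_j / 2m) * delta(s_i, s_j).
import numpy as np

def potts_communities(A, q=10, gamma=1.0, sweeps=50, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    k = A.sum(axis=1)
    two_m = k.sum()
    # Modularity-like coupling matrix (configuration null model).
    J = A - gamma * np.outer(k, k) / two_m
    np.fill_diagonal(J, 0.0)
    s = rng.integers(q, size=n)            # random initial spin configuration
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(n):
            # Energy gain of assigning node i to each spin value.
            gains = np.array([J[i, s == c].sum() for c in range(q)])
            best = int(np.argmax(gains))
            if best != s[i]:
                s[i] = best
                changed = True
        if not changed:
            break                          # local energy minimum reached
    return s

# Two 5-node cliques joined by one edge: should split into two communities.
A = np.zeros((10, 10))
A[:5, :5] = 1; A[5:, 5:] = 1
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1
print(potts_communities(A))
```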

  7. Direct methods for limit and shakedown analysis of structures advanced computational algorithms and material modelling

    CERN Document Server

    Pisano, Aurora; Weichert, Dieter

    2015-01-01

    Articles in this book examine various materials and how to determine directly the limit state of a structure, in the sense of limit analysis and shakedown analysis. Apart from classical applications in mechanical and civil engineering contexts, the book reports on the emerging field of material design beyond the elastic limit, which has further industrial design and technological applications. Readers will discover that “Direct Methods” and the techniques presented here can in fact be used to numerically estimate the strength of structured materials such as composites or nano-materials, which represent fruitful fields of future applications.   Leading researchers outline the latest computational tools and optimization techniques and explore the possibility of obtaining information on the limit state of a structure whose post-elastic loading path and constitutive behavior are not well defined or well known. Readers will discover how Direct Methods allow rapid and direct access to requested information in...

  8. Structural models of randomly packed Tobermorite-like spherical particles: A simple computational approach

    Directory of Open Access Journals (Sweden)

    González-Teresa, R.

    2010-06-01

    In this work, and in order to bring together the atomistic and colloidal viewpoints, we present a Monte Carlo computational scheme which reproduces the colloidal packing of nano-spherical crystalline tobermorite-like particles. Different Low Density (LD) C-S-H and High Density (HD) C-S-H structures are developed simply by varying the computational packing parameters. Finally, the structures resulting from our computational experiments are analyzed in terms of their densities, surface areas and mechanical properties.

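    A minimal stand-in for the kind of packing experiment described above: random sequential addition of equal hard spheres in a periodic box, reporting the resulting packing density. The radius, box size and attempt count are arbitrary choices, not the paper's parameters.

```python
# Minimal sketch: random sequential addition of equal hard spheres in a periodic
# box, as a stand-in for the Monte Carlo packing of tobermorite-like particles.
import numpy as np

def pack_spheres(box=50.0, radius=2.5, attempts=50_000, seed=0):
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(attempts):
        p = rng.uniform(0.0, box, size=3)
        if centers:
            d = np.asarray(centers) - p
            d -= box * np.round(d / box)        # minimum-image convention
            if (np.sum(d * d, axis=1) < (2 * radius) ** 2).any():
                continue                        # overlap: reject trial particle
        centers.append(p)
    centers = np.asarray(centers)
    volume_fraction = len(centers) * (4 / 3) * np.pi * radius**3 / box**3
    return centers, volume_fraction

centers, phi = pack_spheres()
print(f"placed {len(centers)} spheres, packing density ~ {phi:.3f}")
```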

  9. Coherent structures in granular crystals from experiment and modelling to computation and mathematical analysis

    CERN Document Server

    Chong, Christopher

    2018-01-01

    This book summarizes a number of fundamental developments at the interface of granular crystals and the mathematical and computational analysis of some of their key localized nonlinear wave solutions. The subject presents a blend of the appeal of granular crystals as a prototypical engineering testbed for a variety of diverse applications, the novelty in the nonlinear physics of its coherent structures, and the tractability of a series of mathematical and computational techniques to analyse them. While the focus is on principal one-dimensional solutions such as shock waves, traveling waves, and discrete breathers, numerous extensions of the discussed patterns, e.g., in two dimensions, chains with defects, and heterogeneous settings, as well as other recent developments, are discussed. The book appeals to researchers in the field, as well as graduate and advanced undergraduate students. It will be of interest to mathematicians, physicists and engineers alike.

  10. Computational protein structure modeling and analysis of UV-B stress protein in Synechocystis PCC 6803.

    Science.gov (United States)

    Rahman, Md Akhlaqur; Chaturvedi, Navaneet; Sinha, Sukrat; Pandey, Paras Nath; Gupta, Dwijendra Kumar; Sundaram, Shanthy; Tripathi, Ashutosh

    2013-01-01

    This study focuses on the Ultra Violet stress (UVS) gene product, a UV-stress-induced protein from the cyanobacterium Synechocystis PCC 6803. Three-dimensional structural modeling of the target UVS protein was carried out by the homology modeling method. The PDB entry 3F2I from Nostoc sp. PCC 7120 was selected as a suitable template protein structure. Finally, active binding regions were detected to characterize functional sites in the modeled UV-B stress protein. The top five probable ligand binding sites were predicted, and the common binding residues between the target and template proteins were analyzed. It was validated for the first time that the modeled UVS protein structure from Synechocystis PCC 6803 is structurally and functionally similar to the well characterized UVS protein of another cyanobacterial species, Nostoc sp. PCC 7120, since it has the same structural motif and fold with similar protein topology and function. The investigations revealed that the UVS protein from Synechocystis sp. might play a significant role in ultraviolet resistance. Thus, it could be a potential biological source for remediation of UV-induced stress.

  11. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  12. STRUCTURAL MODELLING

    Directory of Open Access Journals (Sweden)

    Tea Ya. Danelyan

    2014-01-01

    Full Text Available The article states the general principles of structural modeling in aspect of the theory of systems and gives the interrelation with other types of modeling to adjust them to the main directions of modeling. Mathematical methods of structural modeling, in particular method of expert evaluations are considered.

  13. Computational Modeling with Fluid-Structure Interaction of the Severe M1 Stenosis Before and After Stenting

    OpenAIRE

    Park, Soonchan; Lee, Sang-Wook; Lim, Ok Kyun; Min, Inki; Nguyen, Minhtuan; Ko, Young Bae; Yoon, Kyunghwan; Suh, Dae Chul

    2013-01-01

    Purpose Image-based computational models with fluid-structure interaction (FSI) can be used to perform plaque mechanical analysis in intracranial artery stenosis. We described a process in FSI study applied to symptomatic severe intracranial (M1) stenosis before and after stenting. Materials and Methods Reconstructed 3D angiography in STL format was transferred to Magics for smoothing of vessel surface and trimming of branch vessels and to HyperMesh for generating tetra volume mesh from trian...

  14. Computational modelling of the Li effects on the electronic structure of porous silicon

    Science.gov (United States)

    Gomez-Herrera, María Lucero; Miranda Durán, Álvaro; Trejo Baños, Alejandro; Cruz Irisson, Miguel

    This work analyses the effects of Li impurities on the electronic structure of pSi by means of density functional theory with the generalized gradient approximation and the supercell scheme. The porous structures were modeled by removing atoms in the [001] direction of an otherwise perfect Si crystal. All surface dangling bonds were saturated with H atoms. To model the Li impurities, some H atoms are replaced with Li atoms at the surface. Results show additional bands around the Fermi level upon the insertion of a single Li atom on the pore surface, which suggests a trap-like state of localized charge. With increasing concentration of surface Li, the band gap gradually decreases, approaching metallic behavior. These results could be important for the application of pSi in Li-ion batteries. This work was partially supported by CONACYT infrastructure project 252749.

  15. BOOK REVIEW: Computational Atomic Structure

    Science.gov (United States)

    Post, Douglass E.

    1998-02-01

    introduction to atomic structure. It covers single and many electron systems, how to set up a basis set of wavefunctions for a many electron system, LS coupling, single and multi-electron Hamiltonians, the elementary Hartree-Fock approximation and how variational methods are used to determine the ground state energy and wavefunctions. The computational methods used in the codes are outlined and there are exercises at the end of each chapter. For a number of candidate atomic configurations, explicit examples are given that illustrate the physics, the approximations and the computational methods involved, and which provide the reader with the opportunity to check that he is using the suite of codes correctly. Relativistic effects are covered as perturbations with Breit-Pauli Hamiltonians. Isotope and hyperfine level splitting are also covered. A summary chapter covers allowed and forbidden bound-bound transitions. It describes how to set up the matrix elements for transition operators, and the determination of selection rules and computational aspects of the methods for allowed and forbidden lines. The last chapter provides a brief introduction to continuum transitions, including how to compute the necessary wavefunctions to calculate photoionization or photodetachment and autoionization processes. Several appendices provide a summary of angular momentum theory, an introduction to the Dirac and Breit-Pauli theory for relativistic processes, and a description of the input parameters needed to run the programs. In summary, the book is an almost essential guide to anyone planning to use the Multi-Configuration Hartree-Fock suite of codes. With this guide, even someone not thoroughly familiar with the details of the subject or the codes should be able to use them to obtain energy levels, wavefunctions and transition rates for any atomic system of interest. This book serves as a model example for the general computational physics community of how to document an important suite of

  16. Computational models to assign biopharmaceutics drug disposition classification from molecular structure.

    Science.gov (United States)

    Khandelwal, Akash; Bahadduri, Praveen M; Chang, Cheng; Polli, James E; Swaan, Peter W; Ekins, Sean

    2007-12-01

    We applied in silico methods to automatically classify drugs according to the Biopharmaceutics Drug Disposition Classification System (BDDCS). Models were developed using machine learning methods including recursive partitioning (RP), random forest (RF) and support vector machine (SVM) algorithms with ChemDraw, clogP, polar surface area, VolSurf and MolConnZ descriptors. The dataset consisted of 165 training and 56 test set molecules. RF model 3, RP model 1, and SVM model 1 correctly predict 73.1, 63.6 and 78.6% of test compounds in classes 1, 2 and 3, respectively. Both RP and SVM models can be used for class 4 prediction. The inclusion of consensus analysis resulted in improved test set predictions for class 2 and 4 drugs. The models can be used to predict the BDDCS class for new compounds from molecular structure using readily available molecular descriptors and software, representing an area where in silico approaches could aid the pharmaceutical industry in speeding drugs to the patient and reducing costs. This could have significant applications in drug discovery to identify molecules that may have future developability issues.
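
    The following sketch reproduces only the shape of the modelling setup — RF and SVM classifiers mapping descriptors to a four-class label, with a simple agreement-based consensus — on randomly generated stand-in data rather than the paper's 165/56-molecule dataset.

```python
# Sketch: random-forest and SVM classifiers mapping molecular descriptors to a
# four-class BDDCS label.  Descriptors and labels are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((221, 32))            # 221 molecules x 32 descriptors
y = rng.integers(1, 5, size=221)              # BDDCS classes 1-4
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=56, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

# A simple consensus: accept a prediction only where both models agree.
agree = rf.predict(X_te) == svm.predict(X_te)
print(f"RF accuracy:  {rf.score(X_te, y_te):.2f}")
print(f"SVM accuracy: {svm.score(X_te, y_te):.2f}")
print(f"consensus coverage: {agree.mean():.2f}")
```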

  17. AMPLE : a cluster-and-truncate approach to solve the crystal structures of small proteins using rapidly computed ab initio models

    OpenAIRE

    Bibby, Jaclyn; Keegan, Ronan M.; Mayans, Olga; Winn, Martyn D.; Rigden, Daniel J.

    2012-01-01

    Protein ab initio models predicted from sequence data alone can enable the elucidation of crystal structures by molecular replacement. However, the calculation of such ab initio models is typically computationally expensive. Here, a computational pipeline based on the clustering and truncation of cheaply obtained ab initio models for the preparation of structure ensembles is described. Clustering is used to select models and to quantitatively predict their local accuracy, allowing rational tr...

  18. Mobile computing acceptance factors in the healthcare industry: a structural equation model.

    Science.gov (United States)

    Wu, Jen-Her; Wang, Shu-Ching; Lin, Li-Min

    2007-01-01

    This paper presents a revised technology acceptance model to examine what determines mobile healthcare systems (MHS) acceptance by healthcare professionals. Confirmatory factor analysis was performed to test the reliability and validity of the measurement model. The structural equation modeling technique was used to evaluate the causal model. The results indicated that compatibility, perceived usefulness and perceived ease of use significantly affected healthcare professionals' behavioral intent. MHS self-efficacy had a strong indirect impact on behavioral intent through the mediators of perceived usefulness and perceived ease of use. However, the hypothesized effects of technical support and training on perceived usefulness and perceived ease of use were not supported. This paper provides initial insights into factors that are likely to be significant antecedents of planning and implementing mobile healthcare to enhance professionals' MHS acceptance. The proposed model variables explained 70% of the variance in behavioral intention to use MHS; further study is needed to explore additional significant antecedents of new IT/IS acceptance for mobile healthcare. Privacy and security, system and information quality, and the limitations of mobile devices may be other interesting factors for implementing mobile healthcare, and could be investigated through qualitative research.
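
    As a simplified stand-in for the structural model described above (the study itself fits a latent-variable SEM with AMOS), the sketch below runs an ordinary least-squares path analysis on synthetic data, with perceived usefulness and perceived ease of use mediating the effect of self-efficacy on behavioral intention. All coefficients and variable constructions are invented for illustration.

```python
# Simplified path-analysis stand-in for the SEM above: OLS regressions on
# synthetic data, with PU and PEOU mediating the effect of SE on BI.
import numpy as np

rng = np.random.default_rng(1)
n = 320
SE = rng.standard_normal(n)                      # MHS self-efficacy
PEOU = 0.5 * SE + rng.standard_normal(n) * 0.8   # perceived ease of use
PU = 0.4 * SE + 0.3 * PEOU + rng.standard_normal(n) * 0.8
BI = 0.5 * PU + 0.3 * PEOU + rng.standard_normal(n) * 0.8

def ols(y, *xs):
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]   # drop the intercept

b_peou = ols(PEOU, SE)
b_pu = ols(PU, SE, PEOU)
b_bi = ols(BI, PU, PEOU)
# Indirect effect of SE on BI through the two mediators.
indirect = (b_peou[0] * b_bi[1] + b_pu[0] * b_bi[0]
            + b_peou[0] * b_pu[1] * b_bi[0])
print(f"path coefficients BI ~ PU, PEOU: {b_bi.round(2)}")
print(f"indirect effect SE -> BI: {indirect:.2f}")
```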

  19. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)

    2013-09-15

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the classical modal analysis method unsuitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with good accuracy. The methodology is applied to a complex industrial structure made up of a row of seven fuel assemblies with the possibility of collisions between grids, submitted to a seismic loading.
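
    A minimal sketch of the global-basis reduction idea follows, assuming a linear mass-spring chain and smooth sine shapes as the a-priori global displacement basis; the paper's basis construction, nonlinearity and contact are not shown.

```python
# Sketch: Galerkin reduction of a linear fixed-fixed spring chain onto a small
# basis of global (smooth, low-frequency) displacement shapes.
import numpy as np

n = 500                                    # full-order degrees of freedom
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness (chain)
f = np.zeros(n); f[-1] = 1.0               # static load near one support

# Global displacement basis: smooth sine shapes over the whole structure
# (a stand-in for an a-priori global basis that filters out local modes).
k = 8
x = np.arange(1, n + 1) / (n + 1)
Phi = np.column_stack([np.sin((j + 1) * np.pi * x) for j in range(k)])

# Galerkin projection of the full model onto the basis.
Kr = Phi.T @ K @ Phi
fr = Phi.T @ f
q = np.linalg.solve(Kr, fr)                # reduced static solution
u_rom = Phi @ q
u_full = np.linalg.solve(K, f)
err = np.linalg.norm(u_rom - u_full) / np.linalg.norm(u_full)
print(f"relative error with {k} global shape functions: {err:.3e}")
```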

  20. Symmetry structure in discrete models of biochemical systems: natural subsystems and the weak control hierarchy in a new model of computation driven by interactions.

    Science.gov (United States)

    Nehaniv, Chrystopher L; Rhodes, John; Egri-Nagy, Attila; Dini, Paolo; Morris, Eric Rothstein; Horváth, Gábor; Karimi, Fariba; Schreckling, Daniel; Schilstra, Maria J

    2015-07-28

    Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53-mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to 'pools of reversibility'. These natural subsystems are related to one another in a hierarchical manner by the notion of 'weak control'. We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.

  1. Computational modelling of polymers

    Science.gov (United States)

    Celarier, Edward A.

    1991-01-01

    Polymeric materials and polymer/graphite composites show a very diverse range of material properties, many of which make them attractive candidates for a variety of high performance engineering applications. Their properties are ultimately determined largely by their chemical structure and the conditions under which they are processed. It is the aim of computational chemistry to be able to simulate candidate polymers on a computer and determine what their likely material properties will be. A number of commercially available software packages purport to predict the material properties of samples, given the chemical structures of their constituent molecules. One such system, Cerius, has been in use at LaRC. It is comprised of a number of modules, each of which performs a different kind of calculation on a molecule in the program's workspace. Of particular interest is evaluating the suitability of this program to aid in the study of microcrystalline polymeric materials. One of the first model systems examined was benzophenone. The results of this investigation are discussed.

  2. Monte Carlo thermodynamic and structural properties of the TIP4P water model: dependence on the computational conditions

    Directory of Open Access Journals (Sweden)

    João Manuel Marques Cordeiro

    1998-11-01

    Classical Monte Carlo simulations were carried out in the NPT ensemble at 25°C and 1 atm, aiming to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analyzed. The findings were compared with the original ones from Jorgensen et al. [above-cited reference plus Mol. Phys., 56 (1985) 1381]. It is noted that the thermodynamic results depend on the boundary conditions used, whereas the usual radial distribution functions gO/O(r) and gO/H(r) do not.
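
    A small sketch of the structural observable involved — the radial distribution function g(r) computed from a periodic box of coordinates under the minimum-image convention. The coordinates below are random placeholders rather than TIP4P configurations.

```python
# Sketch: radial distribution function g(r) from one periodic configuration.
import numpy as np

def rdf(pos, box, nbins=100, rmax=None):
    n = len(pos)
    rmax = rmax if rmax is not None else box / 2
    # All pair distances under the minimum-image convention.
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
    shell = (4 / 3) * np.pi * (edges[1:]**3 - edges[:-1]**3)   # shell volumes
    ideal = shell * n * (n - 1) / 2 / box**3                   # ideal-gas pairs
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

box = 18.6                                   # ~ box edge for 216 waters, in angstrom
pos = np.random.default_rng(2).uniform(0, box, size=(216, 3))
r, g = rdf(pos, box)
print(f"g(r) near r = {r[50]:.2f} A: {g[50]:.2f}")   # ~1 for an ideal gas
```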

  3. A computational model for domain structure evolution of nematic liquid crystal elastomers

    Science.gov (United States)

    Wang, Hongbo; Oates, William S.

    2009-03-01

    Liquid crystal elastomers combine both liquid crystals and polymers, which gives rise to many fascinating properties, such as unparalleled elastic anisotropy, photo-mechanics and flexoelectric behavior. The potential applications for these materials range widely, from wings for micro-air vehicles to reversible adhesion skins for mobile climbing robots. However, significant challenges remain to understand the rich range of microstructure evolution exhibited by these materials. This paper presents a model for domain structure evolution within the Ginzburg-Landau framework. The free energy consists of two parts: the distortion energy introduced by Ericksen [1] and a Landau energy. The finite element method has been implemented to solve the governing equations developed. Numerical examples are given to demonstrate the microstructure evolution.
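
    For a flavor of Ginzburg-Landau domain evolution, the sketch below time-steps a scalar Allen-Cahn gradient flow on a periodic 2D grid with finite differences. It is a scalar caricature: the paper's finite element model with the Ericksen distortion energy and a nematic director is not reproduced.

```python
# Minimal sketch: explicit gradient-flow (Allen-Cahn) time stepping of a scalar
# order parameter, producing coarsening domains from random initial noise.
import numpy as np

n, dx, dt = 128, 1.0, 0.1
kappa = 1.0                                   # gradient-energy coefficient
rng = np.random.default_rng(3)
phi = 0.1 * rng.standard_normal((n, n))       # small random initial texture

def laplacian(u):
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2

for step in range(2000):
    # Gradient flow of F = int( 0.25*(phi^2-1)^2 + 0.5*kappa*|grad phi|^2 ).
    phi += dt * (kappa * laplacian(phi) - phi**3 + phi)
print(f"domain fractions: +{np.mean(phi > 0):.2f} / -{np.mean(phi < 0):.2f}")
```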

  4. Computational modeling with fluid-structure interaction of the severe M1 stenosis before and after stenting.

    Science.gov (United States)

    Park, Soonchan; Lee, Sang-Wook; Lim, Ok Kyun; Min, Inki; Nguyen, Minhtuan; Ko, Young Bae; Yoon, Kyunghwan; Suh, Dae Chul

    2013-02-01

    Image-based computational models with fluid-structure interaction (FSI) can be used to perform plaque mechanical analysis in intracranial artery stenosis. We describe an FSI study applied to symptomatic severe intracranial (M1) stenosis before and after stenting. Reconstructed 3D angiography in STL format was transferred to Magics for smoothing of the vessel surface and trimming of branch vessels, and to HyperMesh for generating a tetra volume mesh from the triangular surface-meshed 3D angiogram. Computational analysis of blood flow in the vessels was performed using the commercial finite element software ADINA Ver 8.5. The distribution of wall shear stress (WSS), peak velocity and pressure was analyzed before and after intracranial stenting. The wall shear stress distributions from computational fluid dynamics (CFD) simulation with a rigid wall assumption as well as from FSI simulation before and after stenting could be compared. The difference in WSS between the rigid-wall and compliant-wall models, both in the pre- and post-stent cases, is minor except at the stenosis region. These WSS values were greatly reduced after stenting, to 15-20 Pa at systole and 3-5 Pa at end-diastole in the CFD simulation, with similar values in the FSI simulations. Our study shows that FSI simulation before and after intracranial stenting is feasible despite the limited vessel wall dimension and can reveal changes in WSS as well as in flow velocity and wall pressure.
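
    A back-of-the-envelope check on the reported WSS reduction, assuming fully developed Poiseuille flow (tau = 4*mu*Q/(pi*R^3)) and illustrative lumen radii before and after stenting. The flow rate and radii below are assumptions, not measurements from the study.

```python
# Back-of-the-envelope WSS estimate for Poiseuille flow through a stenotic and
# a post-stent lumen.  All input values are illustrative.
import numpy as np

mu = 3.5e-3          # blood viscosity, Pa*s
Q = 2.0e-6           # volumetric flow rate, m^3/s (~120 mL/min)

def wss(radius_m):
    # Wall shear stress of fully developed pipe flow: tau = 4*mu*Q / (pi*R^3).
    return 4 * mu * Q / (np.pi * radius_m**3)

for label, r in [("severe stenosis", 0.6e-3), ("after stenting", 1.3e-3)]:
    print(f"{label:16s} R = {r*1e3:.1f} mm  ->  tau ~ {wss(r):6.1f} Pa")
```

    With these assumed values the estimate drops from roughly 40 Pa to roughly 4 Pa, the same order of magnitude as the reduction reported in the abstract.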

  5. Computational modeling in nanomedicine: prediction of multiple antibacterial profiles of nanoparticles using a quantitative structure-activity relationship perturbation model.

    Science.gov (United States)

    Speck-Planche, Alejandro; Kleandrova, Valeria V; Luan, Feng; Cordeiro, Maria Natália D S

    2015-01-01

    We introduce the first quantitative structure-activity relationship (QSAR) perturbation model for probing multiple antibacterial profiles of nanoparticles (NPs) under diverse experimental conditions. The dataset is based on 300 nanoparticles containing dissimilar chemical compositions, sizes, shapes and surface coatings. In general terms, the NPs were tested against different bacteria, by considering several measures of antibacterial activity and diverse assay times. The QSAR perturbation model was created from 69,231 nanoparticle-nanoparticle (NP-NP) pairs, which were randomly generated using a recently reported perturbation theory approach. The model displayed an accuracy rate of approximately 98% for classifying NPs as active or inactive, and a new copper-silver nanoalloy was correctly predicted by this model with consensus accuracy of 77.73%. Our QSAR perturbation model can be used as an efficacious tool for the virtual screening of antibacterial nanomaterials.

  6. Slepian modeling as a computational method in random vibration analysis of hysteretic structures

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1999-01-01

    white noise. The computation time for obtaining estimates of relevant statistics at a given accuracy level is decreased by one or more orders of magnitude as compared to the computation time needed for direct elasto-plastic displacement response simulations by vectorial Markov sequence techniques. Moreover, the Slepian method gives valuable physical insight into the details of the plastic displacement development over time. The paper gives a general self-contained mathematical description of the Slepian-method-based plastic displacement analysis of Gaussian white noise excited EPOs. Experiences...

  7. Tensegrity structures - Computational and experimental tensegrity mechanics

    Science.gov (United States)

    Kuhl, Detlef; Lim, Yi Chung; Long, David S.

    2017-07-01

    The present paper deals with tensegrity structures. We review the definition of tensegrity structures and describe both experimental and computational form-finding methods. Also described are the numerical methods for the simulation of prestress-induced stiffness and for static and dynamic structural analyses. Furthermore, we present laboratory models and measurement methods for identifying the realized geometry and prestress state. Finally, computationally and experimentally obtained geometries and prestress states are compared, a representative realization of a real-world tensegrity tower is shown, and the modeling of biological cells as tensegrity structures is addressed.

  8. Integrating solid-state NMR and computational modeling to investigate the structure and dynamics of membrane-associated ghrelin.

    Directory of Open Access Journals (Sweden)

    Gerrit Vortmeier

    The peptide hormone ghrelin activates the growth hormone secretagogue receptor 1a, also known as the ghrelin receptor. This 28-residue peptide is acylated at Ser3 and is the only peptide hormone in the human body that is lipid-modified by an octanoyl group. Little is known about the structure and dynamics of membrane-associated ghrelin. We carried out solid-state NMR studies of ghrelin in lipid vesicles, followed by computational modeling of the peptide using Rosetta. Isotropic chemical shift data of isotopically labeled ghrelin provide information about the peptide's secondary structure. Spin diffusion experiments indicate that ghrelin binds to membranes via its lipidated Ser3. Further, Phe4, as well as electrostatics involving the peptide's positively charged residues and lipid polar headgroups, contribute to the binding energy. Other than the lipid anchor, ghrelin is highly flexible and mobile at the membrane surface. This observation is supported by our predicted model ensemble, which is in good agreement with experimentally determined chemical shifts. In the final ensemble of models, residues 8-17 form an α-helix, while residues 21-23 and 26-27 often adopt a polyproline II helical conformation. These helices appear to assist the peptide in forming an amphipathic conformation so that it can bind to the membrane.

  9. Integrating Solid-State NMR and Computational Modeling to Investigate the Structure and Dynamics of Membrane-Associated Ghrelin

    Science.gov (United States)

    Els-Heindl, Sylvia; Chollet, Constance; Scheidt, Holger A.; Beck-Sickinger, Annette G.; Meiler, Jens; Huster, Daniel

    2015-01-01

    The peptide hormone ghrelin activates the growth hormone secretagogue receptor 1a, also known as the ghrelin receptor. This 28-residue peptide is acylated at Ser3 and is the only peptide hormone in the human body that is lipid-modified by an octanoyl group. Little is known about the structure and dynamics of membrane-associated ghrelin. We carried out solid-state NMR studies of ghrelin in lipid vesicles, followed by computational modeling of the peptide using Rosetta. Isotropic chemical shift data of isotopically labeled ghrelin provide information about the peptide’s secondary structure. Spin diffusion experiments indicate that ghrelin binds to membranes via its lipidated Ser3. Further, Phe4, as well as electrostatics involving the peptide’s positively charged residues and lipid polar headgroups, contribute to the binding energy. Other than the lipid anchor, ghrelin is highly flexible and mobile at the membrane surface. This observation is supported by our predicted model ensemble, which is in good agreement with experimentally determined chemical shifts. In the final ensemble of models, residues 8–17 form an α-helix, while residues 21–23 and 26–27 often adopt a polyproline II helical conformation. These helices appear to assist the peptide in forming an amphipathic conformation so that it can bind to the membrane. PMID:25803439

  10. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation, generally applicable in practice and based on differences in epistemic strategies and scopes.

  11. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    Science.gov (United States)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

    The present paper proposes an advanced approach for fault detection and isolation in Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The considered algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions to achieve univocal fault isolation.
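
    The sketch below shows the bare residual-generation idea on a toy first-order plant: predict outputs with a nominal model, form residuals against (simulated) measurements, and raise an alarm when a residual leaves a noise band calibrated on healthy data. The plant, fault and threshold rule are invented stand-ins for the paper's PEMFC model and its structural-analysis-derived residual generators.

```python
# Toy residual-based fault detection: nominal model vs. "measured" plant with a
# fault injected partway through the run.
import numpy as np

rng = np.random.default_rng(4)
dt, steps = 0.1, 600
a, b = 0.05, 1.0                      # nominal decay rate and input gain
T_meas, T_model, u = 20.0, 20.0, 2.0  # measured/predicted output, input

residuals = []
for k in range(steps):
    fault = 0.5 if k > 400 else 0.0   # actuator fault injected at k = 400
    T_meas += dt * (-a * T_meas + b * (u + fault)) + 0.05 * rng.standard_normal()
    T_model += dt * (-a * T_model + b * u)
    residuals.append(T_meas - T_model)

r = np.abs(np.array(residuals))
threshold = 5 * r[:300].std()         # calibrated on known-healthy data
alarms = np.nonzero(r > threshold)[0]
print(f"first alarm at step {alarms[0]}" if alarms.size else "no fault detected")
```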

  12. Applying a Computational Fluid Dynamics model to understand flow structures in a large river: the Rio Paraná

    Science.gov (United States)

    Sandbach, S. D.; Hardy, R. J.; Lane, S. N.; Ashworth, P. J.; Parsons, D. R.

    2010-12-01

    Our understanding of large rivers is limited due to the difficulties of obtaining field data at these large scales. Data-rich results may be obtained using computational fluid dynamics (CFD) models, permitting the investigation of detailed flow patterns that would otherwise not be available. However, the application of these models to large rivers is not without its own complications and has yet to be fully developed. This is the result of two limiting factors: our inability (i) to design numerically stable meshes for complex topographies at these spatial resolutions, and (ii) to collect high-resolution data appropriate for the boundary conditions of the numerical scheme. Here, we demonstrate a five-term mass-flux scaling algorithm (MFSA) for including bed topography in a very large river, where the discretised forms of the mass and momentum equations are modified using a numerical blockage. Converged solutions were obtained using the Reynolds-averaged Navier-Stokes (RANS) equations, modelling turbulence with a κ-ɛ RNG turbulence model. The boundary conditions were supplied from a field investigation of the Rio Paraná upstream of the Paraguay-Paraná confluence. A 38 km long reach was investigated, where topographic and velocity data were collected using an acoustic Doppler current profiler (aDcp) and a single-beam echo sounder. The model was validated against the aDcp data and in general showed good agreement. The model was then used to explore the impact of roughness height on key characteristics of the 3D flow field in large rivers. The results demonstrate the importance of topographic forcing in determining flow structures, including the detection of large helical flow structures.

  13. A combined computational and structural model of the full-length human prolactin receptor

    DEFF Research Database (Denmark)

    Bugge, Katrine Østergaard; Papaleo, Elena; Haxholm, Gitte Wolfsberg

    2016-01-01

    The prolactin receptor is an archetypal member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represents an intriguing diversity, providing an exceptionally challenging target for...... 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg.

  14. Data structures, computer graphics, and pattern recognition

    CERN Document Server

    Klinger, A; Kunii, T L

    1977-01-01

    Data Structures, Computer Graphics, and Pattern Recognition focuses on the computer graphics and pattern recognition applications of data structures methodology. This book presents design-related principles and research aspects of computer graphics, system design, data management, and pattern recognition tasks. The topics include data structure design, concise structuring of geometric data for computer-aided design, and data structures for pattern recognition algorithms. The survey of data structures for computer graphics systems, application of relational data structures in computer gr

  15. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crash-safety research and design and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently crash simulations are mainly performed using models based on crash dummies. However crash dummies

  16. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  17. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we discuss the changes planned for the restart of the LHC program in 2015, including changes in the use and definition of the computing tiers that were defined by the MONARC project. We present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  18. Phase-contrast computed tomography for quantification of structural changes in lungs of asthma mouse models of different severity

    Energy Technology Data Exchange (ETDEWEB)

    Dullin, Christian, E-mail: christian.dullin@med.uni-goettingen.de [University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); Larsson, Emanuel [Elettra-Sincrotrone Trieste, Strada Statale 14, km 163,5 in AREA Science Park, Basovizza (Trieste) 34149 (Italy); University of Trieste, Trieste (Italy); Linkoeping University, SE-581 83 Linkoeping (Sweden); Tromba, Giuliana [Elettra-Sincrotrone Trieste, Strada Statale 14, km 163,5 in AREA Science Park, Basovizza (Trieste) 34149 (Italy); Markus, Andrea M. [University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); Alves, Frauke [University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); Max Planck Institut for Experimental Medicine, Hermann-Rein-Strasse 3, Goettingen, Lower Saxony 37075 (Germany)

    2015-06-17

    Synchrotron inline phase-contrast computed tomography in combination with single-distance phase retrieval enables quantification of morphological alterations in lungs of mice with mild and severe experimental allergic airways disease in comparison with healthy controls. Lung imaging in mouse disease models is crucial for the assessment of the severity of airway disease but remains challenging due to the small size and the high porosity of the organ. Synchrotron inline free-propagation phase-contrast computed tomography (CT) with its intrinsic high soft-tissue contrast provides the necessary sensitivity and spatial resolution to analyse the mouse lung structure in great detail. Here, this technique has been applied in combination with single-distance phase retrieval to quantify alterations of the lung structure in experimental asthma mouse models of different severity. In order to mimic an in vivo situation as close as possible, the lungs were inflated with air at a constant physiological pressure. Entire mice were embedded in agarose gel and imaged using inline free-propagation phase-contrast CT at the SYRMEP beamline (Synchrotron Light Source, ‘Elettra’, Trieste, Italy). The quantification of the obtained phase-contrast CT data sets revealed an increasing lung soft-tissue content in mice correlating with the degree of the severity of experimental allergic airways disease. In this way, it was possible to successfully discriminate between healthy controls and mice with either mild or severe allergic airway disease. It is believed that this approach may have the potential to evaluate the efficacy of novel therapeutic strategies that target airway remodelling processes in asthma.

  19. Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Energy Technology Data Exchange (ETDEWEB)

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Punch, Edward F. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Twombly, Elizabeth Kurth [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kalyanam, Suresh [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kennedy, James [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Hattery, Garty R. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Dodds, Robert H. [Professional Consulting Services, Inc., Lisle, IL (United States); Mach, Justin C [Caterpillar, Peoria, IL (United States); Chalker, Alan [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Nicklas, Jeremy [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Gohar, Basil M [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Hudak, David [Ohio Supercomputer Center (OSC), Columbus, OH (United States)

    2016-12-30

    This report summarizes the final product developed for the US DOE Small Business Innovation Research (SBIR) Phase II grant made to Engineering Mechanics Corporation of Columbus (Emc2) between April 16, 2014 and August 31, 2016 titled ‘Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures’. Many US companies have moved fabrication and production facilities off shore because of cheaper labor costs. A key aspect in bringing these jobs back to the US is the use of technology to render US-made fabrications more cost-efficient overall with higher quality. One significant advantage that has emerged in the US over the last two decades is the use of virtual design for fabrication of small and large structures in weld fabrication industries. Industries that use virtual design and analysis tools have reduced material part size, developed environmentally-friendly fabrication processes, improved product quality and performance, and reduced manufacturing costs. Indeed, Caterpillar Inc. (CAT), one of the partners in this effort, continues to have a large fabrication presence in the US because of the use of weld fabrication modeling to optimize fabrications by controlling weld residual stresses and distortions and improving fatigue, corrosion, and fracture performance. This report describes Emc2’s DOE SBIR Phase II final results to extend an existing, state-of-the-art software code, Virtual Fabrication Technology (VFT®), currently used to design and model large welded structures prior to fabrication - to a broader range of products with widespread applications for small and medium-sized enterprises (SMEs). VFT® helps control distortion, can minimize and/or control residual stresses, control welding microstructure, and pre-determine welding parameters such as weld-sequencing, pre-bending, thermal-tensioning, etc. VFT® uses material properties, consumable properties, etc. as inputs

  20. Computational Material Modeling of Hydrated Cement Paste Calcium Silicate Hydrate (C-S-H) Chemistry Structure - Influence of Magnesium Exchange on Mechanical Stiffness: C-S-H Jennite

    Science.gov (United States)

    2015-04-27

    hydrated cement paste constituent - calcium silicate hydrate (C-S-H) - based on its material chemistry structure are studied following a molecular dynamics... Approved for public release; distribution is unlimited.

  1. A structural equation modeling approach for the adoption of cloud computing to enhance the Malaysian healthcare sector.

    Science.gov (United States)

    Ratnam, Kalai Anand; Dominic, P D D; Ramayah, T

    2014-08-01

    The investments and costs of infrastructure, communication, medical equipment, and software within the global healthcare ecosystem show a rather significant increase, and this growth is expected to continue. As a result, information exchange and cross-system communication became challenging due to detached, independent systems and subsystems which are not connected. The overall model fit, over a sample size of 320, was tested with structural equation modelling (SEM) using AMOS 20.0 as the modelling tool. SPSS 20.0 was used to analyse the descriptive statistics and dimension reliability. Results of the study show that the system utilisation and system impact dimensions influence the overall level of services of the healthcare providers. In addition, the findings also suggest that systems integration and security play a pivotal role for IT resources in healthcare organisations. Through this study, a basis for investigating the need to improve the Malaysian healthcare ecosystem and to introduce a cloud computing platform to host the national healthcare information exchange has been successfully established.

  2. Methods for Creating and Animating a Computer Model Depicting the Structure and Function of the Sarcoplasmic Reticulum Calcium ATPase Enzyme.

    Science.gov (United States)

    Chen, Alice Y.; McKee, Nancy

    1999-01-01

    Describes the developmental process used to visualize the calcium ATPase enzyme of the sarcoplasmic reticulum which involves evaluating scientific information, consulting scientists, model making, storyboarding, and creating and editing in a computer medium. (Author/CCM)

  3. Architecture-based multiscale computational modeling of plant cell wall mechanics to examine the hydrogen-bonding hypothesis of the cell wall network structure model.

    Science.gov (United States)

    Yi, Hojae; Puri, Virendra M

    2012-11-01

    A primary plant cell wall network was computationally modeled using the finite element approach to study the hypothesis of hemicellulose (HC) tethering with the cellulose microfibrils (CMFs) as one of the major load-bearing mechanisms of the growing cell wall. A computational primary cell wall network fragment (10 × 10 μm) comprising typical compositions and properties of CMFs and HC was modeled with well-aligned CMFs. The tethering of HC to CMFs is modeled in accordance with the strength of the hydrogen bonding by implementing a specific load-bearing connection (i.e. the joint element). The introduction of the CMF-HC interaction to the computational cell wall network model is a key to the quantitative examination of the mechanical consequences of cell wall structure models, including the tethering HC model. When the cell wall network models with and without joint elements were compared, the hydrogen bond exhibited a significant contribution to the overall stiffness of the cell wall network fragment. When the cell wall network model was stretched 1% in the transverse direction, the tethering of CMF-HC via hydrogen bonds was not strong enough to maintain its integrity. When the cell wall network model was stretched 1% in the longitudinal direction, the tethering provided comparable strength to maintain its integrity. This substantial anisotropy suggests that the HC tethering with hydrogen bonds alone does not manifest sufficient energy to maintain the integrity of the cell wall during its growth (i.e. other mechanisms are present to ensure the cell wall shape).
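
    As a toy version of the joint-element comparison above, the sketch below treats a load path as stiff "CMF" springs in series with soft "HC" tether springs, and compares the effective stiffness with the hydrogen-bond joints engaged versus broken. The stiffness values and the 1D geometry are illustrative only; the paper's 2D network geometry and finite element formulation are not reproduced.

```python
# Toy joint-element experiment: springs in series with and without the
# hydrogen-bond "joints" carrying load.
import numpy as np

def chain_stiffness(k_links):
    """Effective stiffness of springs in series: 1/k_eff = sum(1/k_i)."""
    return 1.0 / np.sum(1.0 / np.asarray(k_links))

k_cmf, k_hc = 100.0, 1.0          # stiff microfibril vs. soft hemicellulose tether
segments = [k_cmf, k_hc] * 5      # alternating CMF / HC load path

with_joints = chain_stiffness(segments)
# Broken hydrogen bonds: the HC tethers carry (almost) no load.
without_joints = chain_stiffness([k_cmf, 1e-6 * k_hc] * 5)
print(f"stiffness with joints:    {with_joints:.3f}")
print(f"stiffness without joints: {without_joints:.3e}")
```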

  4. A Computational Model to Assess Poststenting Wall Stresses Dependence on Plaque Structure and Stenosis Severity in Coronary Artery

    Directory of Open Access Journals (Sweden)

    Zuned Hajiali

    2014-01-01

    The current study presents computational models to investigate the poststenting hemodynamic stresses and internal stresses over/within the diseased walls of coronary arteries in different states of atherosclerotic plaque. The finite element method is applied to build the axisymmetric models, which include the plaque, arterial wall, and stent struts. The study takes into account the mechanical effects of the opening pressure and its association with plaque severity and morphology. The wall shear stresses and the von Mises stresses within the stented coronary arteries show a strong dependence on the plaque structure, particularly the fibrous cap thickness. Higher stresses occur in severely stenosed coronaries with a thinner fibrous cap. Large stress concentrations around the stent struts cause injury or damage to the vessel wall, which is linked to the mechanism of restenosis. The in-stent restenosis rate is also highly dependent on the opening pressure, the extent to which the stenosed artery is expanded, and the geometry of the stent struts. The present study demonstrates, for the first time, that restenosis is to be viewed as a consequence of the biomechanical design of a stent repeating unit, the opening pressure, and the severity and morphology of the plaque.

  5. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  6. Digital computer structure and design

    CERN Document Server

    Townsend, R

    2014-01-01

    Digital Computer Structure and Design, Second Edition discusses switching theory, counters, sequential circuits, number representation, and arithmetic functions. The book also describes computer memories, the processor, the data flow system of the processor, the processor control system, and the input-output system. Switching theory, which is purely a mathematical concept, centers on the properties of interconnected networks of ""gates."" The theory deals with binary functions of 1 and 0 which can change instantaneously from one to the other without intermediate values. The binary number system is

  7. Post-mortem computed tomography angiography utilizing barium sulfate to identify microvascular structures : a preliminary phantom model and case study

    NARCIS (Netherlands)

    Haakma, Wieke; Rohde, Marianne; Kuster, Lidy; Uhrenholt, Lars; Pedersen, Michael; Boel, Lene Warner Thorup

    2016-01-01

    We investigated the use of computed tomography angiography (CTA) to visualize microvascular structures in a vessel-mimicking phantom and post-mortem (PM) bodies. A contrast agent was used based on 22% barium sulfate, 20% polyethylene glycol and 58% distilled water. A vessel-mimicking phantom

  8. AMPLE: a cluster-and-truncate approach to solve the crystal structures of small proteins using rapidly computed ab initio models.

    Science.gov (United States)

    Bibby, Jaclyn; Keegan, Ronan M; Mayans, Olga; Winn, Martyn D; Rigden, Daniel J

    2012-12-01

    Protein ab initio models predicted from sequence data alone can enable the elucidation of crystal structures by molecular replacement. However, the calculation of such ab initio models is typically computationally expensive. Here, a computational pipeline based on the clustering and truncation of cheaply obtained ab initio models for the preparation of structure ensembles is described. Clustering is used to select models and to quantitatively predict their local accuracy, allowing rational truncation of predicted inaccurate regions. The resulting ensembles, with or without rapidly added side chains, solved 43% of all test cases, with an 80% success rate for all-α proteins. A program implementing this approach, AMPLE, is included in the CCP4 suite of programs. It only requires the input of a FASTA sequence file and a diffraction data file. It carries out the modelling using locally installed Rosetta, creates search ensembles and automatically performs molecular replacement and model rebuilding.
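
    The sketch below illustrates the cluster-and-truncate idea in isolation: given superposed ensemble coordinates, score the per-residue spread about the ensemble mean as a local-accuracy proxy and truncate residues above a cutoff. The synthetic coordinates and the cutoff are assumptions; AMPLE's actual pipeline (model clustering, side-chain addition, molecular replacement and rebuilding) is not reproduced.

```python
# Sketch: per-residue spread across an (already superposed) model ensemble,
# used to truncate predicted-inaccurate regions.  Coordinates are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_models, n_res = 30, 120
core = rng.standard_normal((n_res, 3)) * 5.0        # shared "true" fold
models = core + 0.3 * rng.standard_normal((n_models, n_res, 3))
models[:, 90:] += 3.0 * rng.standard_normal((n_models, 30, 3))  # floppy tail

mean = models.mean(axis=0)
# Per-residue RMS deviation from the ensemble mean (a local-accuracy proxy).
spread = np.sqrt(((models - mean) ** 2).sum(-1).mean(0))

cutoff = 1.0                                         # angstrom-scale threshold
keep = spread < cutoff
print(f"kept {keep.sum()}/{n_res} residues; truncated region starts near "
      f"residue {np.argmax(~keep)}")
```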

  9. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home laboratory, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...
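
    The quoted group-analysis figures are internally consistent, as a quick back-of-the-envelope check shows (assuming a 30-day month and decimal units, 1 TB = 10^12 bytes):

```python
rate_mb_s = 4.2                      # group analysis output, MB/s
seconds_per_month = 30 * 24 * 3600   # ~2.6e6 s in a 30-day month
tb_per_month = rate_mb_s * 1e6 * seconds_per_month / 1e12
print(f"{tb_per_month:.1f} TB/month")  # ~10.9 TB, matching the quoted 11 TB
```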

  10. The Antares computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)

    2013-10-11

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  11. Structural Optimization in a Distributed Computing Environment

    National Research Council Canada - National Science Library

    Voon, B. K; Austin, M. A

    1991-01-01

    ...) optimization algorithm customized to a Distributed Numerical Computing environment (DNC). DNC utilizes networking technology and an ensemble of loosely coupled processors to compute structural analyses concurrently...

  12. Collective network for computer structures

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in an asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  13. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  14. Computational molecular modeling and structural rationalization for the design of a drug-loaded PLLA/PVA biopolymeric membrane

    International Nuclear Information System (INIS)

    Sibeko, B; Pillay, V; Choonara, Y E; Khan, R A; Danckwerts, M P; Modi, G; Iyuke, S E; Naidoo, D

    2009-01-01

    The purpose of this study was to design, characterize and assess the influence of triethanolamine (TEA) on the physicomechanical properties and release of methotrexate (MTX) from a composite biopolymeric membrane. Conjugated poly(L-lactic acid) (PLLA) and poly(vinyl alcohol) (PVA) membranes were prepared by immersion precipitation with and without the addition of TEA. Drug entrapment efficiency (DEE) and release studies were performed in phosphate buffered saline (pH 7.4, 37 °C). Scanning electron microscopy elucidated the membrane surface morphology. Computational and structural molecular modeling rationalized the potential mechanisms of membrane formation and MTX release. Bi-axial force-distance (F-D) extensibility profiles were generated to determine the membrane toughness, elasticity and fracturability. Membranes were significantly toughened by the addition of TEA as a discrete rubbery phase within the co-polymer matrix. MTX-TEA-PLLA-PVA membranes were tougher (F = 89 N) and more extensible (D = 8.79 mm) compared to MTX-PLLA-PVA (F = 35 N, D = 3.7 mm) membranes, as a greater force of extension and fracture distance were required (N = 10). DEE values were relatively high (>80%, N = 5) for both formulations. Photomicrographs revealed distinct crystalline layered morphologies with macro-pores. MTX was released by tri-phasic kinetics with a lower fractional release of MTX from MTX-TEA-PLLA-PVA membranes compared to MTX-PLLA-PVA. TEA provided a synergistic approach to improving the membrane physicomechanical properties and modulation of MTX release. The composite biopolymeric membrane may therefore be suitable for the novel delivery of MTX in the treatment of chronic primary central nervous system lymphoma.

  15. Basic design of parallel computational program for probabilistic structural analysis

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, as part of 'Development of a damage evaluation method for structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we examine computational methods for a massively parallel computing system coupled with a material strength theory based on microscopic fracture mechanics for latent cracks and a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, the basic terms and formulas, and the parallel computation programming methods that relate to the principal elements in the basic design of the computational mechanics program. (author)
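
    As a concrete illustration of the probabilistic side reviewed in the report, failure probabilities of brittle ceramics are classically described by Weibull strength statistics, which can be estimated by Monte Carlo sampling. The sketch below uses hypothetical parameter values, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

m, sigma0 = 10.0, 300.0   # hypothetical Weibull modulus and scale (MPa)
applied = 180.0           # hypothetical applied stress (MPa)

# Sample component strengths from the Weibull distribution and count failures.
strengths = sigma0 * rng.weibull(m, size=1_000_000)
p_fail_mc = np.mean(strengths < applied)

# Closed form for comparison: P_f = 1 - exp(-(sigma/sigma0)^m)
p_fail_exact = 1.0 - np.exp(-(applied / sigma0) ** m)
print(p_fail_mc, p_fail_exact)  # the two estimates should agree closely
```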

  16. Computational modeling of elastic properties of carbon nanotube/polymer composites with interphase regions. Part I: Micro-structural characterization and geometric modeling

    KAUST Repository

    Han, Fei

    2014-01-01

    A computational strategy to predict the elastic properties of carbon nanotube-reinforced polymer composites is proposed in this two-part paper. In Part I, the micro-structural characteristics of these nano-composites are discerned. These characteristics include networks/agglomerations of carbon nanotubes and thick polymer interphase regions between the nanotubes and the surrounding matrix. An algorithm is presented to construct three-dimensional geometric models with large amounts of randomly dispersed and aggregated nanotubes. The effects of the distribution of the nanotubes and the thickness of the interphase regions on the concentration of the interphase regions are demonstrated with numerical results. © 2013 Elsevier B.V. All rights reserved.

  17. Plasticity modeling & computation

    CERN Document Server

    Borja, Ronaldo I

    2013-01-01

    There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.

  18. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  19. Computer-aided structure analysis

    International Nuclear Information System (INIS)

    Szalontai, G.; Simon, Z.; Csapo, Z.; Farkas, M.; Pfeifer, Gy.

    1980-01-01

    The results obtained from the computer-aided interpretation of 13C NMR and IR spectra using the artificial intelligence approach are presented. In its present state, the output of the system is a list of functional groups which are reasonable candidates for the final structural isomers. The input requires the empirical formula, 13C NMR data (including off-resonance data) and IR spectral data. The confirmation of the presence of a functional group is based on comparison of the experimental data with the spectral properties of functional groups stored in a property matrix. If the molecular weight of the compounds studied is less than or equal to 500, the output usually contains 1.5-2.5 times more groups than are really present, in most cases without the loss of the real ones. (author)

  20. Cofactors-loaded quaternary structure of lysine-specific demethylase 5C (KDM5C) protein: Computational model.

    Science.gov (United States)

    Peng, Yunhui; Alexov, Emil

    2016-12-01

    The KDM5C gene (also known as JARID1C and SMCX) is located on the X chromosome and encodes a ubiquitously expressed 1560-aa protein, which plays an important role in lysine methylation (specifically, it reverses tri- and di-methylation of Lys4 of histone H3). Currently, 13 missense mutations in KDM5C have been linked to X-linked mental retardation. However, the molecular mechanism of disease is currently unknown due to the experimental difficulties in expressing such a large protein and the lack of an experimental 3D structure. In this work, we utilize homology modeling, docking, and experimental data to predict 3D structures of KDM5C domains and their mutual arrangement. The resulting quaternary structure includes the KDM5C JmjN, ARID, PHD1, JmjC, and ZF domains, the substrate histone peptide, enzymatic cofactors, and DNA. The predicted quaternary structure was investigated with molecular dynamics simulation for its stability, and further analysis was carried out to identify features measured experimentally. The predicted structure of KDM5C was used to investigate the effects of disease-causing mutations, and it was shown that the mutations alter domain stability and inter-domain interactions. The structural model reported in this work could prompt experimental investigations of KDM5C domain-domain interaction and exploration of undiscovered functionalities. Proteins 2016; 84:1797-1809. © 2016 Wiley Periodicals, Inc.

  1. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  2. Computational modeling of concrete flow

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic

    2007-01-01

    This paper provides a general overview of the present status regarding computational modeling of the flow of fresh concrete. The computational modeling techniques that can be found in the literature may be divided into three main families: single fluid simulations, numerical modeling of discrete...

  3. Chaos Modelling with Computers

    Indian Academy of Sciences (India)

    The computer is to chaos what cloud chambers and particle accelerators are to particle physics. Numbers and functions are chaos' mesons and quarks. In this article we provide an introduction to chaos and the role that computers play in this field. Chaos and Dynamical Systems. The laws of science aim at relating cause ...

  4. A physicist's model of computation

    International Nuclear Information System (INIS)

    Fredkin, E.

    1991-01-01

    An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs
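
    Reversibility can be demonstrated concretely. The paper itself argues at the level of physics, but the controlled-swap (Fredkin) gate, a standard universal reversible logic gate, makes the point in a few lines: the gate is a bijection on its inputs and is its own inverse, so no information is erased.

```python
def fredkin(c: int, a: int, b: int) -> tuple[int, int, int]:
    """Controlled swap: if the control bit c is 1, swap a and b."""
    return (c, b, a) if c == 1 else (c, a, b)

# Applying the gate twice restores every input: the map is a bijection,
# so computation built from it is reversible (no bits are erased).
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits
print("Fredkin gate is self-inverse on all 8 inputs")
```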

  5. Computational modeling in biomechanics

    CERN Document Server

    Mofrad, Mohammad

    2010-01-01

    This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamics and scale coupling methods.

  6. Computational Electronic Structure of Hemoglobin

    Science.gov (United States)

    Chachiyo, Teepanis; Rodriguez, Jorge H.

    2003-03-01

    Hemoglobin is an oxygen transporting protein whereby O2 binds reversibly to an iron-porphyrin active site. Upon binding of O2 the iron-porphyrin complex undergoes subtle structural rearrangements with a concomitant change from the ferrous (deoxyhemoglobin) to the ferric (oxyhemoglobin) oxidation states. We have studied the electronic structure of oxyhemoglobin within the framework of density functional theory (DFT). A geometrical model based on the X-ray crystallographic structure was fully optimized utilizing all-electron basis sets and gradient-corrected exchange correlation density functionals. As suggested by experiment, assuming that the molecular ground state was a singlet, the calculations showed an "incipient" open-shell electronic structure. There was a very small but finite amount of spin density at the iron site and a spin density of equal magnitude but opposite sign localized on O2. The bonding between Fe and O2 was dominated by two pairs of electrons nominally occupying d orbitals of Fe or π orbitals of O2. However, strong electron delocalization was predicted between iron and dioxygen, consistent with the incipient open-shell singlet configuration of the active site. Upon binding to iron, the bond length of O2 increased as compared to that of the free ligand due to weaker interaction between the two oxygens. Simulations of the binding process were carried out which show that the orientation of O2 with respect to the porphyrin plane follows a specific trend which minimizes the overall electronic energy. Finally, our calculations found a "side-on" geometry, where both oxygens bind to Fe, as a stable but excited state configuration.

  7. Transparency of Environmental Computer Models

    NARCIS (Netherlands)

    Vos, de M.G.; Top, J.L.; van Hage, W.R.; Schreiber, A.Th.

    2013-01-01

    Environmental computer models are considered essential tools in supporting environmental decision making, but their main value is that they allow a better understanding of our complex environment. Despite numerous attempts to promote good modelling practice, transparency of current environmental

  8. Structural models of zebrafish (Danio rerio) NOD1 and NOD2 NACHT domains suggest differential ATP binding orientations: insights from computational modeling, docking and molecular dynamics simulations.

    Directory of Open Access Journals (Sweden)

    Jitendra Maharana

    Nucleotide-binding oligomerization domain-containing protein 1 (NOD1) and NOD2 are cytosolic pattern recognition receptors playing pivotal roles in innate immune signaling. NOD1 and NOD2 recognize the bacterial peptidoglycan derivatives iE-DAP and MDP, respectively, and undergo conformational alteration and ATP-dependent self-oligomerization of the NACHT domain followed by downstream signaling. The lack of structural information on the NACHT domain limits our understanding of the NOD-mediated signaling mechanism. Here, we predicted the structure of the NACHT domain of both NOD1 and NOD2 from the model organism zebrafish (Danio rerio) using computational methods. Our study highlighted the differential ATP binding modes in NOD1 and NOD2. In NOD1, the γ-phosphate of ATP faced toward the central nucleotide binding cavity, as in NLRC4, whereas in NOD2 the cavity was occupied by the adenine moiety. The conserved Lysine at Walker A formed hydrogen bonds (H-bonds) and Aspartic acid (Walker B) formed electrostatic interactions with ATP. At Sensor 1, Arg328 of NOD1 exhibited an H-bond with ATP, whereas the corresponding Arg404 of NOD2 did not. The Proline of the GxP motif (Pro386 of NOD1 and Pro464 of NOD2) interacted with the adenine moiety, and His511 at Sensor 2 of NOD1 interacted with the γ-phosphate group of ATP. In contrast, His579 of NOD2 interacted with the adenine moiety having a relatively inverted orientation. Our findings are well supplemented by the molecular interaction of ATP with NLRC4, and are consistent with mutagenesis data reported for humans, which indicates an evolutionarily shared NOD signaling mechanism. Together, this study provides novel insights into the ATP binding mechanism, and highlights the differential ATP binding modes in zebrafish NOD1 and NOD2.

  9. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  10. Coherent information structure in complex computation.

    Science.gov (United States)

    Lizier, Joseph T; Prokopenko, Mikhail; Zomaya, Albert Y

    2012-09-01

    We have recently presented a framework for the information dynamics of distributed computation that locally identifies the component operations of information storage, transfer, and modification. We have observed that while these component operations exist to some extent in all types of computation, complex computation is distinguished in having coherent structure in its local information dynamics profiles. In this article, we conjecture that coherent information structure is a defining feature of complex computation, particularly in biological systems or artificially evolved computation that solves human-understandable tasks. We present a methodology for studying coherent information structure, consisting of state-space diagrams of the local information dynamics and a measure of structure in these diagrams. The methodology identifies both clear and "hidden" coherent structure in complex computation, most notably reconciling conflicting interpretations of the complexity of the Elementary Cellular Automata rule 22.

  11. Structure problems in the analog computation

    International Nuclear Information System (INIS)

    Braffort, P.L.

    1957-01-01

    Recent mathematical developments have shown the importance of elementary structures (algebraic, topological, etc.) underlying the great domains of classical analysis. Such structures in analog computation are brought to light, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as addition triangles, integrators, phase inverters and function generators, are also studied. The analog method gives only functions of the variable time as the results of its computations. But the course of the computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structures as analog computation. This structural analysis permits fruitful comparisons between the several domains of applied mathematics and suggests new important domains of application for the analog method. (M.P.)

  12. Modern Theories of Pelvic Floor Support : A Topical Review of Modern Studies on Structural and Functional Pelvic Floor Support from Medical Imaging, Computational Modeling, and Electromyographic Perspectives.

    Science.gov (United States)

    Peng, Yun; Miller, Brandi D; Boone, Timothy B; Zhang, Yingchun

    2018-02-12

    Weakened pelvic floor support is believed to be the main cause of various pelvic floor disorders. Modern theories of pelvic floor support stress the structural and functional integrity of multiple structures and their interplay in maintaining normal pelvic floor functions. Connective tissues provide passive pelvic floor support while pelvic floor muscles provide active support through voluntary contraction. Advanced modern medical technologies allow us to comprehensively and thoroughly evaluate the interaction of supporting structures and assess both active and passive support functions. The pathophysiology of various pelvic floor disorders associated with pelvic floor weakness is now under scrutiny from the combination of (1) morphological, (2) dynamic (through computational modeling), and (3) neurophysiological perspectives. This topical review aims to survey newly emerged studies assessing pelvic floor support function in these three categories. A literature search was performed with emphasis on (1) medical imaging studies that assess pelvic floor muscle architecture, (2) subject-specific computational modeling studies that address new topics such as modeling muscle contractions, and (3) pelvic floor neurophysiology studies that report novel devices or findings such as high-density surface electromyography techniques. We found that recent computational modeling studies feature more realistic soft tissue constitutive models (e.g., active muscle contraction) as well as an increasing interest in simulating surgical interventions (e.g., artificial sphincter). Diffusion tensor imaging provides a useful non-invasive tool to characterize pelvic floor muscles at the microstructural level, which can potentially be used to improve the accuracy of the simulation of muscle contraction. Studies using high-density surface electromyography anal and vaginal probes on large patient cohorts have been recently reported. Influences of vaginal delivery on the

  13. Structure Elucidation of Mixed-Linker Zeolitic Imidazolate Frameworks by Solid-State ¹H CRAMPS NMR Spectroscopy and Computational Modeling.

    Science.gov (United States)

    Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar

    2016-06-15

    Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of ¹H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF 8-90 series. All of the compositions studied have structures that have linkers mixed at a unit-cell level, as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to equimolar composition of the two linkers show a greater tendency for linker clustering than what would be predicted based on random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition. The

  14. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into space flight related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.

  15. Computer graphics in piping structural engineering

    International Nuclear Information System (INIS)

    Revesz, Z.

    1985-01-01

    Computer graphics in piping structural engineering is gaining in popularity. The large number of systems, the growing complexity of the load cases and the structure models require human assimilation of large amounts of data. An effort has been made to ease the evaluation of numerical data and to visualize as much of it as possible, thus eliminating a source of error and accelerating analysis/reporting. The product of this effort is PAID, the Piping Analysis and Interactive Design software. While developing PAID, interest has been focused on accelerating the work done mainly by PIPESTRESS. Some installed and tested capabilities of PAID are presented in this paper. Examples are given of the graphic output in report form, and the dialogue necessary to obtain it is demonstrated. (orig.)

  16. Fast Computation of CMH Model

    Science.gov (United States)

    Patel, Umesh D.; DellaTorre, Edward; Day, John H. (Technical Monitor)

    2001-01-01

    A fast differential equation approach for the DOK model has been extended to the CMH model. A cobweb technique for calculating the CMH model is also presented. The two techniques are contrasted from the point of view of flexibility and computation time.

  17. Computational modelling in fluid mechanics

    International Nuclear Information System (INIS)

    Hauguel, A.

    1985-01-01

    The modelling of most environmental and industrial flow problems gives rise to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be processed. The varied group of computer codes presented is now a complementary tool to experimental facilities for studies in the field of fluid mechanics. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others [fr]

  18. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    This work focuses on the development of a computer-aided modeling framework. The framework is a knowledge-based system that is built on a generic modeling language and structured on workflows for different modeling tasks. The overall objective is to support model developers and users in generating ... The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene...

  19. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.

  20. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
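
    One way to read the proposed "flexible, problem-specific" combination of aspect similarities is as a weighted aggregate over per-aspect measures. The sketch below is an assumption-laden illustration, not an API from the paper: each model is reduced to one set per aspect, and the sets are compared with Jaccard similarity.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity of two sets (e.g. referenced biological entities)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def model_similarity(m1: dict, m2: dict, weights: dict) -> float:
    """Weighted combination of per-aspect similarities.
    Each model is a dict mapping an aspect name to a set of items."""
    total = sum(weights.values())
    return sum(w * jaccard(m1[aspect], m2[aspect])
               for aspect, w in weights.items()) / total

# Hypothetical toy models with two aspects.
m1 = {"entities": {"ATP", "ADP"}, "reactions": {"r1", "r2"}}
m2 = {"entities": {"ATP", "AMP"}, "reactions": {"r1"}}
print(model_similarity(m1, m2, {"entities": 0.6, "reactions": 0.4}))  # 0.4
```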

  1. Computer simulations and modeling-assisted ToxR screening in deciphering 3D structures of transmembrane α-helical dimers: ephrin receptor A1

    International Nuclear Information System (INIS)

    Volynsky, P E; Mineeva, E A; Goncharuk, M V; Ermolyuk, Ya S; Arseniev, A S; Efremov, R G

    2010-01-01

    Membrane-spanning segments of numerous proteins (e.g. receptor tyrosine kinases) represent a novel class of pharmacologically important targets, whose activity can be modulated by specially designed artificial peptides, the so-called interceptors. Rational construction of such peptides requires understanding of the main factors driving peptide–peptide association in lipid membranes. Here we present a new method for rapid prediction of the spatial structure of transmembrane (TM) helix–helix complexes. It is based on computer simulations in membrane-like media and subsequent refinement/validation of the results using experimental studies of TM helix dimerization in a bacterial membrane by means of the ToxR system. The approach was applied to TM fragments of the ephrin receptor A1 (EphA1). A set of spatial structures of the dimer was proposed based on Monte Carlo simulations in an implicit membrane followed by molecular dynamics relaxation in an explicit lipid bilayer. The resulting models were employed for rational design of wild-type and mutant genetic constructions for ToxR assays. The computational and the experimental data are self-consistent and provide an unambiguous spatial model of the TM dimer of EphA1. The results of this work can be further used to develop new biologically active 'peptide interceptors' specifically targeting membrane domains of proteins

  2. Improving Students' Understanding of Molecular Structure through Broad-Based Use of Computer Models in the Undergraduate Organic Chemistry Lecture

    Science.gov (United States)

    Springer, Michael T.

    2014-01-01

    Several articles suggest how to incorporate computer models into the organic chemistry laboratory, but relatively few papers discuss how to incorporate these models broadly into the organic chemistry lecture. Previous research has suggested that "manipulating" physical or computer models enhances student understanding; this study…

  3. Computational predictions of zinc oxide hollow structures

    Science.gov (United States)

    Tuoc, Vu Ngoc; Huan, Tran Doan; Thao, Nguyen Thi

    2018-03-01

    Nanoporous materials are emerging as potential candidates for a wide range of technological applications in environment, electronics, and optoelectronics, to name just a few. Within this active research area, experimental works are predominant, while theoretical/computational prediction and study of these materials face some intrinsic challenges, one of which is how to predict porous structures. We propose a computationally and technically feasible approach for predicting zinc oxide structures with hollows at the nanoscale. The designed zinc oxide hollow structures are studied with computations using the density functional tight binding and conventional density functional theory methods, revealing a variety of promising mechanical and electronic properties, which can potentially find future realistic applications.

  4. A computational model of the cerebellum

    Energy Technology Data Exchange (ETDEWEB)

    Travis, B.J.

    1990-01-01

    The need for realistic computational models of neural microarchitecture is growing increasingly apparent. While traditional neural networks have made inroads on understanding cognitive functions, more realism (in the form of structural and connectivity constraints) is required to explain processes such as vision or motor control. A highly detailed computational model of mammalian cerebellum has been developed. It is being compared to physiological recordings for validation purposes. The model is also being used to study the relative contributions of each component to cerebellar processing. 28 refs., 4 figs.

  5. Skin friction blistering: computer model.

    Science.gov (United States)

    Xing, Malcolm; Pan, Ning; Zhong, Wen; Maibach, Howard

    2007-08-01

    Friction blisters, a common injury in sports and military operations, can adversely affect or even halt performance. Given their frequency and hazardous nature, recent research efforts appear limited. Blistering can be treated as a delamination phenomenon; similar issues in materials science have been extensively investigated in theory and experiment. An obstacle in studying blistering is the difficulty of conducting experiments on humans and animals. Computer modeling thus becomes a preferred tool. This paper used a dynamic non-linear finite-element model with a blister-characterized structure and contact algorithm for outer materials and blister roof to investigate the effects on deformation and stress of an existing blister by changing the friction coefficient and elastic modulus of the material in contact with the blister. Through the dynamics mode and harmonic frequency approach, we demonstrated that the loading frequency leads to dramatic changes of displacement and stress in spite of otherwise similar loading. Our simulations show that an increased friction coefficient does not necessarily result in an increase in either the stress on the hot spot or blister deformation; local maximum friction stress and Von Mises stress exist for some friction coefficients over the wide range examined here. In addition, the effect of the stiffness of the contact material on blistering is also investigated, and no significant effects on deformation and Von Mises stress are found, again over the range used. The model and method provided here may be useful for evaluating loading environments and contact materials in reducing blistering incidents. The coupled finite-element model can predict the effects of friction coefficient and contacting materials' stiffness on blister deformation and hot spot stress.

  6. Computational structural biology: methods and applications

    National Research Council Canada - National Science Library

    Schwede, Torsten; Peitsch, Manuel Claude

    2008-01-01

    ... sequencing reinforced the observation that structural information is needed to understand the detailed function and mechanism of biological molecules such as enzyme reactions and molecular recognition events. Furthermore, structures are obviously key to the design of molecules with new or improved functions. In this context, computational structural biology...

  7. Ch. 33 Modeling: Computational Thermodynamics

    International Nuclear Information System (INIS)

    Besmann, Theodore M.

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  8. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need for more formal and foundational trust models....

  9. Material Characterization and Geometric Segmentation of a Composite Structure Using Microfocus X-Ray Computed Tomography Image-Based Finite Element Modeling

    Science.gov (United States)

    Abdul-Aziz, Ali; Roth, D. J.; Cotton, R.; Studor, George F.; Christiansen, Eric; Young, P. C.

    2011-01-01

    This study utilizes microfocus x-ray computed tomography (CT) slice sets to model and characterize the damage locations and sizes in thermal protection system materials that underwent impact testing. ScanIP/FE software is used to visualize and process the slice sets, followed by mesh generation on the segmented volumetric rendering. Then, the local stress fields around several of the damaged regions are calculated for realistic mission profiles that subject the sample to extreme temperature and other severe environmental conditions. The resulting stress fields are used to quantify damage severity and make an assessment as to whether damage that did not penetrate to the base material can still result in catastrophic failure of the structure. It is expected that this study will demonstrate that finite element modeling based on an accurate three-dimensional rendered model from a series of CT slices is an essential tool to quantify the internal macroscopic defects and damage of a complex system made out of thermal protection material. Results obtained showing details of segmented images; three-dimensional volume-rendered models, finite element meshes generated, and the resulting thermomechanical stress state due to impact loading for the material are presented and discussed. Further, this study is conducted to exhibit certain high-caliber capabilities that the nondestructive evaluation (NDE) group at NASA Glenn Research Center can offer to assist in assessing the structural durability of such highly specialized materials so improvements in their performance and capacities to handle harsh operating conditions can be made.

  10. A Computational Framework for Realistic Retina Modeling.

    Science.gov (United States)

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  11. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show...

  12. Computational model of the HIV-1 subtype A V3 loop: study on the conformational mobility for structure-based anti-AIDS drug design.

    Science.gov (United States)

    Andrianov, Alexander M; Anishchenko, Ivan V

    2009-10-01

    The V3 loop of the HIV-1 gp120 glycoprotein, a 35-residue-long, frequently glycosylated, highly variable, disulfide-bonded structure, plays a central role in the virus biology and forms the principal target for neutralizing antibodies and the major viral determinant for co-receptor binding. Here we present computer-aided studies on the 3D structure of the HIV-1 subtype A V3 loop (SA-V3 loop), in which its structurally inflexible regions and individual amino acids were identified and a structure-function analysis of V3, aimed at informational support for anti-AIDS drug research, was carried out. To this end, the following successive steps were taken: (i) using comparative modeling and simulated annealing, an ensemble of low-energy structures was generated for the consensus amino acid sequence of the SA-V3 loop and its most probable conformation was defined based on the general criteria widely adopted as a measure of the quality of protein structures in terms of their 3D folds and local geometry; (ii) the elements of secondary structure in the built V3 conformations were characterized and a careful analysis of the corresponding experimental observations for the V3 loops in various HIV-1 strains was made; (iii) to reveal common structural motifs in the HIV-1 V3 loops regardless of their sequence variability and medium inconstancy, the simulated structures were collated with each other as well as with those of V3 determined by NMR spectroscopy and X-ray studies for diverse virus isolates in different environments; (iv) to delve into the conformational features of the SA-V3 loop, a molecular dynamics trajectory was computed from its static 3D structure, followed by determining the structurally rigid V3 segments and comparing the findings with those derived above; and (v) to evaluate the masking effect that can occur due to interaction of the SA-V3 loop with the two

  13. Structural mode significance using INCA. [Interactive Controls Analysis computer program

    Science.gov (United States)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1990-01-01

    Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.
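
    A common significance measure of this kind ranks modes by their contribution to the input-output gain and discards the rest. The sketch below uses a simple DC-gain ranking as an assumed stand-in for INCA's actual significance analysis:

```python
import numpy as np

def truncate_modes(omega, b, c, keep: int):
    """omega: natural frequencies (rad/s); b, c: modal input/output
    coefficients. Rank modes by approximate DC gain |c_i * b_i| / omega_i^2
    and keep the `keep` most significant ones."""
    gains = np.abs(c * b) / omega**2
    idx = np.argsort(gains)[::-1][:keep]   # most significant modes first
    return np.sort(idx)

# Hypothetical 5-mode structural model reduced to its 2 dominant modes.
omega = np.array([5.0, 12.0, 31.0, 80.0, 150.0])
b = np.array([1.0, 0.2, 0.8, 0.05, 0.6])
c = np.array([0.9, 0.1, 0.7, 0.02, 0.4])
print(truncate_modes(omega, b, c, keep=2))  # indices of retained modes
```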

  14. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  15. Novel computational methodologies for structural modeling of spacious ligand binding sites of G-protein-coupled receptors: development and application to human leukotriene B4 receptor.

    Science.gov (United States)

    Ishino, Yoko; Harada, Takanori

    2012-01-01

    This paper describes a novel method to predict the activated structures of G-protein-coupled receptors (GPCRs) with high accuracy, while aiming for the use of the predicted 3D structures in in silico virtual screening in the future. We propose a new method for modeling GPCR thermal fluctuations, where conformation changes of the proteins are modeled by combining fluctuations on multiple time scales. The core idea of the method is that a molecular dynamics simulation is used to calculate average 3D coordinates of all atoms of a GPCR protein against heat fluctuation on the picosecond or nanosecond time scale, and then evolutionary computation including receptor-ligand docking simulations functions to determine the rotation angle of each helix of a GPCR protein as a movement on a longer time scale. The method was validated using human leukotriene B4 receptor BLT1 as a sample GPCR. Our study demonstrated that the proposed method was able to derive the appropriate 3D structure of the active-state GPCR which docks with its agonists.

  16. Novel Computational Methodologies for Structural Modeling of Spacious Ligand Binding Sites of G-Protein-Coupled Receptors: Development and Application to Human Leukotriene B4 Receptor

    Directory of Open Access Journals (Sweden)

    Yoko Ishino

    2012-01-01

    This paper describes a novel method to predict the activated structures of G-protein-coupled receptors (GPCRs) with high accuracy, while aiming for the use of the predicted 3D structures in in silico virtual screening in the future. We propose a new method for modeling GPCR thermal fluctuations, where conformation changes of the proteins are modeled by combining fluctuations on multiple time scales. The core idea of the method is that a molecular dynamics simulation is used to calculate average 3D coordinates of all atoms of a GPCR protein against heat fluctuation on the picosecond or nanosecond time scale, and then evolutionary computation including receptor-ligand docking simulations functions to determine the rotation angle of each helix of a GPCR protein as a movement on a longer time scale. The method was validated using human leukotriene B4 receptor BLT1 as a sample GPCR. Our study demonstrated that the proposed method was able to derive the appropriate 3D structure of the active-state GPCR which docks with its agonists.

  17. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    In this contribution, the concept of template-based modeling is presented and its application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse ... with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene, and for the performance evaluation of an atomizer product. In the first case study, the reactor type is one where the reactions are thermodynamically limited..., such as steam reforming and the production of olefins from inexpensive paraffins via dehydrogenation. The generated process model is based on a Fickian diffusion model, which is the most widely used to account for the intraparticle mass transfer resistance. The model of the process can help to predict...

  18. Computational models of adult neurogenesis

    Science.gov (United States)

    Cecchi, Guillermo A.; Magnasco, Marcelo O.

    2005-10-01

    Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models where new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implication for understanding the role of adult neurogenesis in specific brain areas like the olfactory bulb and the dentate gyrus.

  19. Hybrid computer modelling in plasma physics

    International Nuclear Information System (INIS)

    Hromadka, J; Ibehej, T; Hrach, R

    2016-01-01

    Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low-temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences in the results obtained by these methods and propose a way to improve the results of fluid models in the low-pressure region. There is a possibility to employ the Chapman-Enskog method to find appropriate closure relations for the fluid equations in the case when the particle distribution function is not Maxwellian. We follow this approach to enhance the fluid model and to use it further in a hybrid plasma model. (paper)

  20. Computer optimization techniques for NASA Langley's CSI evolutionary model's real-time control system. [Controls/Structure Interaction

    Science.gov (United States)

    Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff

    1992-01-01

    The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.

  1. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used for computer forensic analysis; this paper proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...
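
    A sketch of what such an object model might look like in code (all class, attribute, and relationship names here are invented for illustration; the paper defines its own model):

```python
from dataclasses import dataclass, field

@dataclass
class ProfileObject:
    """A computer artifact modeled as an object with attributes and
    relationships to other objects, in the spirit of a profiling object
    model (names here are hypothetical, not the paper's)."""
    kind: str                        # e.g. "user", "application", "file"
    attributes: dict = field(default_factory=dict)
    related: list = field(default_factory=list)

    def relate(self, other: "ProfileObject", how: str) -> None:
        self.related.append((how, other))

user = ProfileObject("user", {"name": "alice", "last_login": "2011-03-02"})
app = ProfileObject("application", {"name": "browser"})
user.relate(app, "executed")
# An investigator (or automated reasoning engine) can traverse `related`
# links to infer probable usage of the machine.
print(user.related[0][0], user.related[0][1].attributes["name"])
```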

  2. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever increasing impact of new media and the underlying complexity of computed function in the broad sense of ICT that have spread vertiginously since Mark Weiser coined the term 'pervasive', e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. Whereas Weiser's original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown's (1997) terms, 'invisible'... into the other. It also indicates a generative creation that itself points to important issues of ontology with methodological implications for the design of computing. In this article these implications will be conceptualised as prosopopoietic modeling on the basis of Bernward Joerges' introduction...

  3. Soil structure characterized using computed tomographic images

    Science.gov (United States)

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify the structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...
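
    For readers unfamiliar with the technique, the following sketch computes a box-counting fractal dimension from a binary (pore/solid) image; the image here is synthetic and the code is a generic illustration, not the method of the cited study.

        import numpy as np

        def box_count(img, size):
            """Count boxes of side `size` containing at least one foreground pixel."""
            h, w = img.shape
            count = 0
            for i in range(0, h, size):
                for j in range(0, w, size):
                    if img[i:i+size, j:j+size].any():
                        count += 1
            return count

        img = np.random.rand(256, 256) > 0.7          # stand-in for a thresholded CT slice
        sizes = [2, 4, 8, 16, 32]
        counts = [box_count(img, s) for s in sizes]
        # The fractal dimension is minus the slope of log(count) vs. log(size).
        D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
        print(f"box-counting dimension ~ {D:.2f}")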

  4. Data Structures in Classical and Quantum Computing

    NARCIS (Netherlands)

    M.J. Fillinger (Max)

    2013-01-01

    This survey summarizes several results about quantum computing related to (mostly static) data structures. First, we describe classical data structures for the set membership and the predecessor search problems: Perfect Hash tables for set membership by Fredman, Komlós and ...

  5. Regularized Structural Equation Modeling

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019
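
    In outline, RegSEM augments the usual maximum-likelihood fit function with a penalty on a selected subset of parameters; a schematic form (our notation, not necessarily the paper's) is

        F_{\mathrm{RegSEM}}(\theta) = F_{\mathrm{ML}}(\theta) + \lambda\, P(\theta_{\mathrm{pen}}),
        \qquad
        P(\theta_{\mathrm{pen}}) = \lVert \theta_{\mathrm{pen}} \rVert_1 \ \text{(lasso)}
        \quad \text{or} \quad
        \lVert \theta_{\mathrm{pen}} \rVert_2^2 \ \text{(ridge)},

    where \theta_{\mathrm{pen}} collects the penalized parameters and \lambda controls the strength of the shrinkage.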

  6. Hybrid modeling in computational neuropsychiatry.

    Science.gov (United States)

    Marin-Sanguino, A; Mendoza, E R

    2008-09-01

    The aim of building mathematical models is to provide a formal structure to explain the behaviour of a whole in terms of its parts. In the particular case of neuropsychiatry, the available information upon which models are to be built is distributed over several fields of expertise. Molecular and cellular biologists, physiologists and clinicians all hold valuable information about the system, which has to be distilled into a unified view. Furthermore, modelling is not a sequential process in which the roles of field experts and modelling experts are separated. Model building is done through iterations in which all parties have to keep an active role. This work presents some modelling techniques and guidelines on how they can be combined in order to simplify modelling efforts in neuropsychiatry. The proposed approach involves two well-known modelling techniques, Petri nets and Biochemical Systems Theory, that provide a general, well-proven, structured definition for biological models.

  7. DFI Computer Modeling Software (CMS)

    Energy Technology Data Exchange (ETDEWEB)

    Cazalet, E.G.; Deziel, L.B. Jr.; Haas, S.M.; Martin, T.W.; Nesbitt, D.M.; Phillips, R.L.

    1979-10-01

    The data base management system used to create, edit and store model data and solutions for the LEAP system is described. The software is written entirely in FORTRAN-G for the IBM 370 series of computers and provides an interface with the commercial data base system SYSTEM-2000.

  8. Computational Modeling in Liver Surgery

    Directory of Open Access Journals (Sweden)

    Bruno Christ

    2017-11-01

    Full Text Available The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key to identifying the optimal resection strategy and minimizing the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, estimates based on future liver volume may misjudge future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information about the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant to hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery.
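
    As a purely volumetric illustration of the remnant-size calculation mentioned above (and of its limitation: it says nothing about function), a minimal sketch with made-up volumes:

        def flr_fraction(total_liver_ml, resected_ml, tumor_ml):
            """Future liver remnant as a fraction of functional (tumor-free) volume."""
            functional = total_liver_ml - tumor_ml
            remnant = total_liver_ml - resected_ml
            return remnant / functional

        # Hypothetical segmented volumes from CT, in millilitres.
        frac = flr_fraction(total_liver_ml=1500.0, resected_ml=900.0, tumor_ml=100.0)
        print(f"FLR = {frac:.0%}")    # volume-based only; function may differ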

  9. Characterization of macropore structure of Malan loess in NW China based on 3D pipe models constructed by using computed tomography technology

    Science.gov (United States)

    Li, Yanrong; He, Shengdi; Deng, Xiaohong; Xu, Yongxin

    2018-04-01

    Malan loess is a grayish-yellow or brownish-yellow, clastic, highly porous and brittle late Quaternary sediment formed by the accumulation of windblown dust. The present-day pore structure of Malan loess is crucial for understanding the history of the loessification process and the strength and mechanical behavior of loess. This study employed a modern computed tomography (CT) device to scan Malan loess samples obtained from the east part of the Loess Plateau of China. A sophisticated and efficient workflow for processing the CT images and constructing 3D pore models was established by selecting and programming relevant mathematical algorithms in MATLAB, such as the maximum entropy method, the medial axis method, and a node recognition algorithm. Individual pipes within the Malan loess were identified and constructed by partitioning and recombining links in the 3D pore model. The macropore structure of Malan loess was then depicted using quantitative parameters. The parameters derived from 2D images of CT scanning included the equivalent radius, length and aspect ratio of pores, porosity, and pore distribution entropy, whereas those derived from the constructed 3D structure models included porosity, coordination number, node density, pipe radius, length, length density, dip angle, and dip direction. The analysis of these parameters revealed that Malan loess is a strongly anisotropic geomaterial with a dense and complex network of pores and pipes. The pores discovered on horizontal images, perpendicular to the vertical direction, were round and relatively uniform in shape and size and evenly distributed, whereas the pores discovered on vertical images varied in shape and size and were distributed in clusters. The pores showed good connectivity in the vertical direction and formed vertically aligned pipes, but displayed weak connectivity in the horizontal directions. The pipes in the vertical direction were thick, long, and straight compared with those in the horizontal directions. These...
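
    Of the algorithms named above, the maximum entropy method is the segmentation step; a minimal sketch of Kapur-style maximum-entropy thresholding (in Python rather than MATLAB, on a synthetic image) is:

        import numpy as np

        def max_entropy_threshold(img, bins=256):
            """Pick the gray level that maximizes the summed entropy of the two classes."""
            hist, _ = np.histogram(img, bins=bins, range=(0, bins))
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, bins - 1):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                q0 = p[:t][p[:t] > 0] / w0            # class distributions
                q1 = p[t:][p[t:] > 0] / w1
                h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        img = np.random.randint(0, 256, (64, 64))     # stand-in for a CT slice
        t = max_entropy_threshold(img)
        pores = img < t                               # e.g. dark voxels taken as pore space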

  10. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  11. PRODUCT STRUCTURE DIGITAL MODEL

    Directory of Open Access Journals (Sweden)

    V.M. Sineglazov

    2005-02-01

    Full Text Available Research results on the representation of product structure by means of the CADDS5 computer-aided design (CAD) system, the Optegra Product Data Management (PDM) system, and the Windchill Product Life Cycle Management (PLM) system are examined in this work. An analysis of structure component development and its storage in the various systems is carried out. Algorithms of structure transformation required for correct representation of the structure are considered. A management analysis of the electronic mockup presentation of the product structure is carried out for the Windchill system.

  12. Modelling global computations with KLAIM.

    Science.gov (United States)

    De Nicola, Rocco; Loreti, Michele

    2008-10-28

    A new area of research, known as Global Computing, is by now well established. It aims at defining new models of computation based on code and data mobility over wide-area networks with highly dynamic topologies, and at providing infrastructures to support coordination and control of components originating from different, possibly untrusted, fault-prone, malicious or selfish sources. In this paper, we present our contribution to the field of Global Computing, centred on the Kernel Language for Agents Interaction and Mobility (KLAIM). KLAIM is an experimental language specifically designed to program distributed systems consisting of several mobile components that interact through multiple distributed tuple spaces. We present some of the key notions of the language and discuss how its formal semantics can be exploited to reason about qualitative and quantitative aspects of the specified systems.
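
    To make the tuple-space style of interaction concrete, here is a toy Python tuple space with a non-blocking out and blocking read/in operations; this is our illustration of the general idea only, not KLAIM syntax (KLAIM additionally has localities and process mobility).

        import threading

        class TupleSpace:
            def __init__(self):
                self._tuples, self._cv = [], threading.Condition()

            def out(self, tup):                       # non-blocking insert
                with self._cv:
                    self._tuples.append(tup)
                    self._cv.notify_all()

            def _match(self, pattern):                # None in a pattern = wildcard
                for t in self._tuples:
                    if len(t) == len(pattern) and all(
                            p is None or p == v for p, v in zip(pattern, t)):
                        return t
                return None

            def read(self, pattern):                  # blocking, non-destructive
                with self._cv:
                    while (t := self._match(pattern)) is None:
                        self._cv.wait()
                    return t

            def in_(self, pattern):                   # blocking, destructive
                with self._cv:
                    while (t := self._match(pattern)) is None:
                        self._cv.wait()
                    self._tuples.remove(t)
                    return t

        ts = TupleSpace()
        ts.out(("temp", 21.5))
        print(ts.in_(("temp", None)))                 # -> ('temp', 21.5)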

  13. Structured brain computing and its learning

    International Nuclear Information System (INIS)

    Ae, Tadashi; Araki, Hiroyuki; Sakai, Keiichi

    1999-01-01

    We have proposed a two-level architecture for brain computing, where the two levels are introduced for the processing of meta-symbols. At Level 1, conventional pattern recognition is performed, including neural computation; its output gives the meta-symbol, which is a symbol enlarged from a plain symbol to a kind of pattern. At Level 2, algorithm acquisition is performed by using a machine for abstract states. We are also developing VLSI chips for each level of the SBC (Structured Brain Computer) Ver. 1.0

  14. A comprehensive computational mutation structure-function ...

    African Journals Online (AJOL)

    function relationship of poliovirus 2A protease using various bioinformatics tools. Methods: The three-dimensional structure of 2Apro was modelled and analyzed using the crystal structure of .... Development Organization) then cloned into a.

  15. Development of a Computer Application to Simulate Porous Structures

    Directory of Open Access Journals (Sweden)

    S.C. Reis

    2002-09-01

    Full Text Available Geometric modeling is an important tool to evaluate structural parameters as well as to apply stereological relationships. Obtaining, visualizing and analyzing volumetric images of the structure of materials using computational geometric modeling facilitates the determination of structural parameters of difficult experimental access, such as topological and morphological parameters. In this work, we developed a geometrical model, implemented as computer software, that simulates random pore structures. The number of nodes, the number of branches (connections between nodes) and the number of isolated parts are obtained, and the connectivity (C) is computed from them. Using a list of elements, nodes and branches generated by the software in AutoCAD® command-line format, the obtained structure can be viewed and analyzed.
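
    The connectivity mentioned above is commonly taken as the cyclomatic number C = b - n + p of the network (branches minus nodes plus isolated parts); a minimal sketch over a made-up node/branch list:

        def connectivity(nodes, branches):
            """C = b - n + p, with p the number of connected components."""
            parent = {v: v for v in nodes}
            def find(v):                              # union-find with path halving
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for a, b in branches:
                parent[find(a)] = find(b)
            p = len({find(v) for v in nodes})
            return len(branches) - len(nodes) + p

        nodes = [0, 1, 2, 3, 4]
        branches = [(0, 1), (1, 2), (2, 0), (3, 4)]   # one loop plus an isolated chain
        print(connectivity(nodes, branches))          # -> 1 independent loop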

  16. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is needed to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  17. Cosmic logic: a computational model

    International Nuclear Information System (INIS)

    Vanchurin, Vitaly

    2016-01-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for the analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing numbers as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  18. Performance Driven Design and Design Information Exchange : Establishing a computational design methodology for parametric and performance-driven design of structures via topology optimization for rough structurally informed design models

    NARCIS (Netherlands)

    Mostafavi, S.; Morales Beltran, M.G.; Biloria, N.M.

    2013-01-01

    This paper presents a performance-driven computational design methodology by introducing a case on parametric structural design. The paper describes the process of design technology development and frames a design methodology through which engineering (in this case structural) aspects of...

  19. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers / risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of...
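
    For reference, a common statement of the Drucker-Prager criterion in this setting (notation ours; the authors' exact form may differ) caps the second invariant of the deviatoric stress with a pressure-dependent yield stress:

        \sqrt{J_2} \le \tau_y = C\cos\varphi + p\,\sin\varphi ,

    where C is the cohesion, \varphi the friction angle and p the pressure; the pathologies discussed above arise when this bound is combined with the incompressibility constraint.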

  20. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  1. Computational Modeling of Adult Neurogenesis

    Science.gov (United States)

    Aimone, James B.

    2016-01-01

    The restriction of adult neurogenesis to only a handful of regions of the brain is suggestive of some shared requirement for this dramatic form of structural plasticity. However, a common driver across neurogenic regions has not yet been identified. Computational studies have been invaluable in providing insight into the functional role of new neurons; however, researchers have typically focused on specific scales ranging from abstract neural networks to specific neural systems, most commonly the dentate gyrus area of the hippocampus. These studies have yielded a number of diverse potential functions for new neurons, ranging from an impact on pattern separation to the incorporation of time into episodic memories to enabling the forgetting of old information. This review will summarize these past computational efforts and discuss whether these proposed theoretical functions can be unified into a common rationale for why neurogenesis is required in these unique neural circuits. PMID:26933191

  2. PSPP: a protein structure prediction pipeline for computing clusters.

    Directory of Open Access Journals (Sweden)

    Michael S Lee

    2009-07-01

    Full Text Available Protein structures are critical for understanding the mechanisms of biological systems and, subsequently, for drug and vaccine design. Unfortunately, protein sequence data exceed structural data by a factor of more than 200 to 1. This gap can be partially filled by using computational protein structure prediction. While structure prediction Web servers are a notable option, they often restrict the number of sequence queries and/or provide a limited set of prediction methodologies. Therefore, we present a standalone protein structure prediction software package suitable for high-throughput structural genomic applications that performs all three classes of prediction methodologies: comparative modeling, fold recognition, and ab initio. This software can be deployed on a user's own high-performance computing cluster. The pipeline consists of a Perl core that integrates more than 20 individual software packages and databases, most of which are freely available from other research laboratories. The query protein sequences are first divided into domains either by domain boundary recognition or Bayesian statistics. The structures of the individual domains are then predicted using template-based modeling or ab initio modeling. The predicted models are scored with a statistical potential and an all-atom force field. The top-scoring ab initio models are annotated by structural comparison against the Structural Classification of Proteins (SCOP) fold database. Furthermore, secondary structure, solvent accessibility, transmembrane helices, and structural disorder are predicted. The results are generated in text, tab-delimited, and hypertext markup language (HTML) formats. So far, the pipeline has been used to study viral and bacterial proteomes. The standalone pipeline that we introduce here, unlike protein structure prediction Web servers, allows users to devote their own computing assets to process a potentially unlimited number of queries as well as perform...

  3. Modeling Fluid Structure Interaction

    National Research Council Canada - National Science Library

    Benaroya, Haym

    2000-01-01

    The principal goal of this program is on integrating experiments with analytical modeling to develop physics-based reduced-order analytical models of nonlinear fluid-structure interactions in articulated naval platforms...

  4. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Structured grid generator on parallel computers

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Murakami, Hiroyuki; Higashida, Akihiro; Yanagisawa, Ichiro.

    1997-03-01

    A general-purpose structured grid generator for parallel computers, which generates large-scale structured grids efficiently, has been developed. The generator is applicable to Cartesian, cylindrical and BFC (Boundary-Fitted Curvilinear) coordinates. In the case of BFC grids, there are three available topologies: L-type, O-type and multi-block type, the last of which enables any combination of L- and O-grids. Internal BFC grid points can be automatically generated and smoothed by either an algebraic supplemental method or a partial differential equation method. The partial differential equation solver is implemented on parallel computers because it consumes a large portion of the overall execution time; high-speed processing of large-scale grid generation is thus realized by use of a parallel computer. Generated grid data can be adapted to domain decomposition for parallel analysis. (author)
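
    The algebraic family of generators is typified by transfinite interpolation, which blends the four boundary curves of a block into interior points; a hedged Python sketch (a generic TFI, not the program described above) is:

        import numpy as np

        def tfi(bottom, top, left, right):
            """Blend boundary point arrays (bottom/top: (m,2), left/right: (n,2))."""
            m, n = len(bottom), len(left)
            s = np.linspace(0.0, 1.0, m)[:, None, None]    # xi
            t = np.linspace(0.0, 1.0, n)[None, :, None]    # eta
            B, T = bottom[:, None, :], top[:, None, :]
            L, R = left[None, :, :], right[None, :, :]
            return ((1 - t) * B + t * T + (1 - s) * L + s * R
                    - (1 - s) * (1 - t) * bottom[0] - s * t * top[-1]
                    - s * (1 - t) * bottom[-1] - (1 - s) * t * top[0])

        th = np.linspace(0.0, np.pi / 2, 17)
        bottom = np.stack([np.cos(th), np.sin(th)], axis=1)   # inner arc, r = 1
        top = 2.0 * bottom                                    # outer arc, r = 2
        left = np.linspace(bottom[0], top[0], 9)
        right = np.linspace(bottom[-1], top[-1], 9)
        grid = tfi(bottom, top, left, right)                  # shape (17, 9, 2)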

  6. Transparency of Computational Intelligence Models

    Science.gov (United States)

    Owotoki, Peter; Mayer-Lindenberg, Friedrich

    This paper introduces the notion of transparency of computational intelligence (CI) models. Transparency reveals to end users the underlying reasoning process of the agent embodying CI models. This is of great benefit in applications (e.g. data mining, entertainment and personal robotics) with humans as end users, because it increases their trust in the decisions of the agent and their acceptance of its results. Our integrated approach, wherein rules are just one of several transparency factors (TF), differs from previous related efforts, which have focused mostly on the generation of comprehensible rules as explanations. Other TF include a degree-of-confidence measure and visualization of principal features. The transparency quotient is introduced as a measure of the transparency of models based on these factors. The transparency-enabled generalized exemplar model has been developed to demonstrate the TF and transparency concepts introduced in this paper.

  7. Computer architecture evaluation for structural dynamics computations: Project summary

    Science.gov (United States)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  8. Towards a Tool for Computer Supported Structuring of Products

    DEFF Research Database (Denmark)

    Hansen, Claus Thorp

    1997-01-01

    A product possesses not only a component structure but also various organ structures which are superimposed on the component structure. The organ structures carry behaviour and make the product suited for its life phases. Our long-term research goal is to develop a computer-based system... that is capable of supporting synthesis activities in engineering design, and thereby also supporting the handling of various organ structures. Such a system must contain a product model in which it is possible to describe and manipulate both the various organ structures and the component structure. In this paper we focus... on the relationships between organ structures and the component structure. By an analysis of an existing product it is shown that a component may contribute to more than one organ. A set of organ structures is identified and their influence on the component structure is illustrated...

  9. Computer-aided design of antenna structures and components

    Science.gov (United States)

    Levy, R.

    1976-01-01

    This paper discusses computer-aided design procedures for antenna reflector structures and related components. The primary design aid is a computer program that establishes cross sectional sizes of the structural members by an optimality criterion. Alternative types of deflection-dependent objectives can be selected for designs subject to constraints on structure weight. The computer program has a special-purpose formulation to design structures of the type frequently used for antenna construction. These structures, in common with many in other areas of application, are represented by analytical models that employ only the three translational degrees of freedom at each node. The special-purpose construction of the program, however, permits coding and data management simplifications that provide advantages in problem size and execution speed. Size and speed are essentially governed by the requirements of structural analysis and are relatively unaffected by the added requirements of design. Computation times to execute several design/analysis cycles are comparable to the times required by general-purpose programs for a single analysis cycle. Examples in the paper illustrate effective design improvement for structures with several thousand degrees of freedom and within reasonable computing times.

  10. Conflicts of interest improve collective computation of adaptive social structures.

    Science.gov (United States)

    Brush, Eleanor R; Krakauer, David C; Flack, Jessica C

    2018-01-01

    In many biological systems, the functional behavior of a group is collectively computed by the system's individual components. An example is the brain's ability to make decisions via the activity of billions of neurons. A long-standing puzzle is how the components' decisions combine to produce beneficial group-level outputs, despite conflicts of interest and imperfect information. We derive a theoretical model of collective computation from mechanistic first principles, using results from previous work on the computation of power structure in a primate model system. Collective computation has two phases: an information accumulation phase, in which (in this study) pairs of individuals gather information about their fighting abilities and make decisions about their dominance relationships, and an information aggregation phase, in which these decisions are combined to produce a collective computation. To model information accumulation, we extend a stochastic decision-making model (the leaky integrator model used to study neural decision-making) to a multiagent game-theoretic framework. We then test alternative algorithms for aggregating information (in this study, decisions about dominance resulting from the stochastic model) and measure the mutual information between the resultant power structure and the "true" fighting abilities. We find that conflicts of interest can improve accuracy to the benefit of all agents. We also find that the computation can be tuned to produce different power structures by changing the cost of waiting for a decision. The successful application of a similar stochastic decision-making model in neural and social contexts suggests general principles of collective computation across substrates and scales.
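
    A minimal sketch of the accumulation phase (our toy parameters, not the paper's): one individual integrates noisy, leaky evidence about its opponent's relative fighting ability until a decision bound is crossed.

        import random

        def accumulate(drift, leak=0.1, bound=1.0, dt=0.01, noise=0.5, max_steps=100000):
            """Return (decision, steps): +1/-1 for whichever bound is reached first."""
            x, step = 0.0, 0
            while abs(x) < bound and step < max_steps:
                dx = (-leak * x + drift) * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
                x += dx
                step += 1
            return (1 if x >= bound else -1), step

        decision, steps = accumulate(drift=0.3)       # drift encodes the ability difference
        print(decision, steps)                        # lower bounds = faster, noisier decisions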

  11. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  12. Computational structural analysis and finite element methods

    CERN Document Server

    Kaveh, A

    2014-01-01

    Graph theory gained initial prominence in science and engineering through its strong links with matrix algebra and computer science. Moreover, the structure of the mathematics is well suited to that of engineering problems in analysis and design. The methods of analysis in this book employ matrix algebra, graph theory and meta-heuristic algorithms, which are ideally suited for modern computational mechanics. Efficient methods are presented that lead to highly sparse and banded structural matrices. The main features of the book include: application of graph theory for efficient analysis; extension of the force method to finite element analysis; application of meta-heuristic algorithms to ordering and decomposition (sparse matrix technology); efficient use of symmetry and regularity in the force method; and simultaneous analysis and design of structures.

  13. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for the identification of business model elements impacting Cloud Computing Adoption. We provide a definition of the main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology adoption theories, such as the Diffusion of Innovations, the Technology Acceptance Model, and the Unified Theory of Acceptance and Use of Technology. Further on, a research model for the identification of Cloud Computing Adoption factors from a business model perspective is presented. The following business model building...

  14. Modeling Computer Virus and Its Dynamics

    OpenAIRE

    Peng, Mei; He, Xing; Huang, Junjian; Dong, Tao

    2013-01-01

    Based on the fact that computers can be infected by infected and exposed computers, and that some computers in susceptible or exposed status can gain immunity through antivirus capability, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of the computer virus on the Internet, is determined. Second, this model has a virus-free equilibrium P0, which means that th...
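
    A generic sketch of this kind of compartment model (SEIR-style, with an extra immunization flow; the rates below are illustrative, not those of the paper):

        def step(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.1, mu=0.05, dt=0.1):
            """One forward-Euler step of a susceptible-exposed-infected-recovered model."""
            N = S + E + I + R
            new_exposed = beta * S * I / N            # susceptibles contacting infected hosts
            dS = -new_exposed - mu * S                # mu: immunity gained via antivirus
            dE = new_exposed - sigma * E - mu * E
            dI = sigma * E - gamma * I
            dR = gamma * I + mu * (S + E)
            return S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt

        S, E, I, R = 990.0, 0.0, 10.0, 0.0            # initial computer populations
        for _ in range(1000):
            S, E, I, R = step(S, E, I, R)
        print(round(S), round(E), round(I), round(R))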

  15. Modeling of soil-water-structure interaction

    DEFF Research Database (Denmark)

    Tang, Tian

    ... to dynamic ocean waves. The goal of this research project is to develop numerical soil models for computing realistic seabed response in the interacting offshore environment, where ocean waves, seabed and offshore structure highly interact with each other. The seabed soil models developed are based... as the developed nonlinear soil displacements and stresses under monotonic and cyclic loading. With the FVM nonlinear coupled soil models as a basis, multiphysics modeling of wave-seabed-structure interaction is carried out. The computations are done in an open source code environment, OpenFOAM, where FVM models of Computational Fluid Dynamics (CFD) and structural mechanics are available. The interaction in the system is modeled in a 1-way manner: first, detailed free-surface CFD calculations are executed to obtain a realistic wave field around a given structure; then the dynamic structural response, due to the motions...

  16. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India, during December 17-19, 2015. The book covers innovations in the broad research areas of computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing the design, analysis and modeling of the aforementioned key areas.

  17. Parallel structures in human and computer memory

    Science.gov (United States)

    Kanerva, Pentti

    1986-08-01

    If we think of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library: We recognize a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. This paper is about how to construct a computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. The paper concludes that the frame problem of artificial intelligence could be solved by the use of such a memory if we were able to encode information about the world properly.
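
    The memory Kanerva describes is his sparse distributed memory; a miniature version (toy sizes and radius, not the original's dimensions) showing random hard locations, Hamming-radius activation, and counter summation on read:

        import numpy as np

        rng = np.random.default_rng(0)
        n_bits, n_locations, radius = 64, 500, 24
        addresses = rng.integers(0, 2, (n_locations, n_bits))
        counters = np.zeros((n_locations, n_bits), dtype=int)

        def active(addr):
            """Hard locations within the Hamming radius of the cue address."""
            return (addresses != addr).sum(axis=1) <= radius

        def write(addr, data):
            counters[active(addr)] += 2 * data - 1    # bipolar increment of counters

        def read(addr):
            return (counters[active(addr)].sum(axis=0) > 0).astype(int)

        pattern = rng.integers(0, 2, n_bits)
        write(pattern, pattern)                       # autoassociative store
        noisy = pattern.copy()
        noisy[:5] ^= 1                                # an approximate retrieval cue
        print((read(noisy) == pattern).mean())        # recall despite the fuzzy cue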

  18. Solving graph problems with dynamic computation structures

    Science.gov (United States)

    Babb, Jonathan W.; Frank, Matthew; Agarwal, Anant

    1996-10-01

    We introduce dynamic computation structures (DCS), a compilation technique to produce dynamic code for reconfigurable computing. DCS specializes directed graph instances into user-level hardware for reconfigurable architectures. Several problems, such as shortest path and transitive closure, exhibit the general properties of closed semirings, an algebraic structure for solving directed paths. Motivating our choice of closed semiring problems as the application domain is the fact that logic emulation software already maps a special case of directed graphs, namely logic netlists, onto arrays of field programmable gate arrays (FPGAs). A type of logic emulation software called virtual wires further allows an FPGA array to be viewed as a machine-independent computing fabric. Thus, a virtual wires compiler, coupled with front-end commercial behavioral logic synthesis software, enables automatic behavioral compilation into a multi-FPGA computing fabric. We have implemented a DCS front-end compiler to parallelize the entire inner loop of the classic Bellman-Ford algorithm into synthesizable behavioral Verilog. Leveraging virtual wire compilation and behavioral synthesis, we have automatically generated designs of 14 to 261 FPGAs from a single graph instance. We achieve speedups proportional to the number of graph edges, from 10X to almost 400X, versus a 125 SPECint SparcStation 10.
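
    For reference, the plain software form of the algorithm whose inner loop DCS turns into hardware (standard Bellman-Ford; the example graph is ours):

        def bellman_ford(n, edges, source):
            """edges: (u, v, weight) triples; returns distances, or None on a negative cycle."""
            INF = float("inf")
            dist = [INF] * n
            dist[source] = 0
            for _ in range(n - 1):
                for u, v, w in edges:                 # the edge loop DCS parallelizes
                    if dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
            for u, v, w in edges:                     # one extra pass detects negative cycles
                if dist[u] + w < dist[v]:
                    return None
            return dist

        edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
        print(bellman_ford(4, edges, 0))              # -> [0, 3, 1, 4]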

  19. Geometric modeling for computer aided design

    Science.gov (United States)

    Schwing, James L.

    1993-01-01

    Over the past several years, it has been the primary goal of this grant to design and implement software to be used in the conceptual design of aerospace vehicles. The work carried out under this grant was performed jointly with members of the Vehicle Analysis Branch (VAB) of NASA LaRC, Computer Sciences Corp., and Vigyan Corp. This has resulted in the development of several packages and design studies. Primary among these are the interactive geometric modeling tool, the Solid Modeling Aerospace Research Tool (SMART), and the integration and execution tools provided by the Environment for Application Software Integration and Execution (EASIE). In addition, it is the purpose of the personnel of this grant to provide consultation in the areas of structural design, algorithm development, and software development and implementation, particularly in the areas of computer-aided design, geometric surface representation, and parallel algorithms.

  20. COMPUTATIONAL STUDY OF THE UNSTEADY FLOW STRUCTURES AROUND TWO VEHICLES

    Directory of Open Access Journals (Sweden)

    TUTUNEA Dragos

    2014-07-01

    Full Text Available In this paper, a computational method for the investigation of flow structures encountered by automobiles is applied to two different vehicles. Currently there are two methods to measure drag: the first is to simulate the air flow with computational fluid dynamics, and the second is to use a wind tunnel. Two cars were modeled to observe and track the flow structure around their bodies. This computational research can serve as an inexpensive alternative to experimental methods for studying air flow in the automotive industry.

  1. Present status of structural analysis computer programs

    International Nuclear Information System (INIS)

    Ikushima, Takeshi; Sanokawa, Konomo; Takeda, Hiroshi.

    1981-01-01

    Computer programs for structural analysis by the finite element method have been used widely, and the authors have already carried out benchmark tests on such programs. As a result, they pointed out a number of problems concerning the use of finite element computer programs. In this paper, the details of their development, their analytical functions, and examples of calculation are described, centering on the versatile computer programs used in the previous study. As versatile computer programs for the finite element method, ANSYS developed by Swanson Analysis System Co., USA; ASKA developed by ISD, West Germany; MARC developed by MARC Analysis Research Institute; NASTRAN developed by NASA, USA; SAP-4 developed by the University of California; ADINA developed by MIT; NEPSAP developed by Lockheed Missile Space Co.; BERSAFE developed by CEGB, Great Britain; EPACA developed by the Franklin Research Institute, USA; and CREEP-PLAST developed by GE are briefly introduced. As examples of calculation, the thermal elastoplastic creep analysis of a cylinder by ANSYS, the elastoplastic analysis of a pressure vessel by ASKA, the analysis of a plate with double cracks by MARC, the analysis of the buckling of a shallow arch by MSC-NASTRAN, and the elastoplastic analysis of primary cooling pipes by ADINA are explained. (Kako, I.)

  2. Enhancement of the Computational Efficiency of Membrane Computing Models

    National Research Council Canada - National Science Library

    Das, Digendra K

    2007-01-01

    .... Membrane computing consists of cell-like membranes placed inside a unique skin membrane. In the regions delimited by the membrane structure, multisets of objects are placed, which evolve according to evolution rules associated with the regions...

  3. Automated Protein Structure Modeling with SWISS-MODEL Workspace and the Protein Model Portal

    OpenAIRE

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of appl...

  4. A computational tool to design and generate crystal structures

    Science.gov (United States)

    Ferreira, R. C.; Vieira, M. B.; Dantas, S. O.; Lobosco, M.

    2014-03-01

    The evolution of computers, more specifically the increase in storage and data processing capacity, has allowed the construction of computational tools for the simulation of physical and chemical phenomena. Thus, practical experiments are being replaced, in some cases, by computational ones. In this context, we can highlight models used to simulate different phenomena on the atomic scale. The construction of these simulators requires developers to study and define accurate and reliable models. This complexity is often reflected in the construction of complex simulators that handle only a limited group of structures, sometimes expressed in a fixed manner using a limited set of geometric shapes. This work proposes a computational tool that aims to generate a broad set of crystal structures. The proposed tool consists of (a) a programming language, which is used to describe the structures by means of their characteristic functions and CSG (Constructive Solid Geometry) operators, and (b) a compiler/interpreter that examines the source code written in the proposed language and generates the objects accordingly. This tool enables the generation of an unrestricted number of structures, which can be incorporated in simulators such as the Monte Carlo Spin Engine, developed by our group at UFJF.
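
    The two ingredients named above, characteristic functions and CSG operators, compose naturally; a hedged Python sketch (our toy shapes and lattice, not the tool's own language):

        def sphere(cx, cy, cz, r):
            """Characteristic function: True for points inside the sphere."""
            return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r**2

        def union(f, g):        return lambda x, y, z: f(x, y, z) or g(x, y, z)
        def intersection(f, g): return lambda x, y, z: f(x, y, z) and g(x, y, z)
        def difference(f, g):   return lambda x, y, z: f(x, y, z) and not g(x, y, z)

        shape = difference(sphere(0, 0, 0, 5), sphere(0, 0, 0, 3))   # hollow shell

        # Keep the integer lattice sites that fall inside the described solid.
        sites = [(x, y, z)
                 for x in range(-5, 6) for y in range(-5, 6) for z in range(-5, 6)
                 if shape(x, y, z)]
        print(len(sites), "lattice sites inside the shell")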

  5. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    Full Text Available The development of selective agonists of the δ-opioid receptor, as well as models of the interaction of ligands with this receptor, are subjects of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, within recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations received for the tested models of DOR were found between the efficacy (erel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER and good values from PROCHECK (92.6% of residues in most favored regions) and MolProbity (99.5% of residues in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with the erel of a series of enkephalin analogues calculated from in vitro experiments. This investigation thus allows us to suggest a reliable model of DOR. The newly generated model of the DOR could be used further for in silico experiments and will enable faster and more correct design of selective and effective ligands for the δ-opioid receptor.

  6. Structural Equation Model Trees

    Science.gov (United States)

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  7. Novel Computational Methodologies for Structural Modeling of Spacious Ligand Binding Sites of G-Protein-Coupled Receptors: Development and Application to Human Leukotriene B4 Receptor

    OpenAIRE

    Ishino, Yoko; Harada, Takanori

    2012-01-01

    This paper describes a novel method to predict the activated structures of G-protein-coupled receptors (GPCRs) with high accuracy, while aiming for the use of the predicted 3D structures in in silico virtual screening in the future. We propose a new method for modeling GPCR thermal fluctuations, where conformation changes of the proteins are modeled by combining fluctuations on multiple time scales. The core idea of the method is that a molecular dynamics simulation is used to calculate avera...

  8. Structural Equation Model Trees

    Science.gov (United States)

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2015-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree structures that separate a data set recursively into subsets with significantly different parameter estimates in a SEM. SEM Trees provide means for finding covariates and covariate interactions that predict differences in structural parameters in observed as well as in latent space and facilitate theory-guided exploration of empirical data. We describe the methodology, discuss theoretical and practical implications, and demonstrate applications to a factor model and a linear growth curve model. PMID:22984789

  9. Integrative structure modeling with the Integrative Modeling Platform.

    Science.gov (United States)

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  10. Soil structure changes evaluated with computed tomography

    International Nuclear Information System (INIS)

    Pires, Luiz Fernando

    2010-01-01

    The objective of this work was to evaluate, on a millimetric scale, changes in soil bulk density and porosity, using gamma-ray computed tomography, in soil samples with structure disturbed by wetting and drying (W-D) cycles. Soil samples of 98.1 cm³ were sieved using a 2 mm mesh and homogeneously packed in PVC cylinders. Soil samples were submitted to 1, 2, and 3 W-D cycles; control samples were not submitted to W-D cycles. After repeated W-D cycles, soil sample porosity decreased and soil layers became denser. Computed tomography allowed a continuous analysis of soil bulk density and porosity along millimetric (0.08 cm) layers, which cannot be provided by the traditional methods used in soil physics. (author)
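
    For context, gamma-ray CT recovers bulk density from Beer-Lambert attenuation; in the simplest (dry-soil) form, with symbols as commonly used in the soil-physics literature rather than the author's,

        I = I_0 \, e^{-\mu_m \rho x}
        \quad\Longrightarrow\quad
        \rho = \frac{1}{\mu_m x} \ln\frac{I_0}{I},

    where I_0 and I are the incident and transmitted intensities, \mu_m the mass attenuation coefficient, \rho the bulk density and x the path length; a water-content correction is needed for moist samples.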

  11. Computational Modeling and Simulation of Developmental ...

    Science.gov (United States)

    Standard practice for assessing developmental toxicity is the observation of apical endpoints (intrauterine death, fetal growth retardation, structural malformations) in pregnant rats/rabbits following exposure during organogenesis. EPA’s computational toxicology research program (ToxCast) generated vast in vitro cellular and molecular effects data on >1858 chemicals in >600 high-throughput screening (HTS) assays. The diversity of assays has been increased for developmental toxicity with several HTS platforms, including the devTOX-quickPredict assay from Stemina Biomarker Discovery utilizing the human embryonic stem cell line (H9). Translating these HTS data into higher-order predictions of developmental toxicity is a significant challenge. Here, we address the application of computational systems models that recapitulate the kinematics of dynamical cell signaling networks (e.g., SHH, FGF, BMP, retinoids) in a CompuCell3D.org modeling environment. Examples include angiogenesis (angiodysplasia) and dysmorphogenesis. Being numerically responsive to perturbation, these models are amenable to data integration for systems toxicology and Adverse Outcome Pathways (AOPs). The AOP simulation outputs predict potential phenotypes based on the in vitro HTS data from ToxCast. A heuristic computational intelligence framework that recapitulates the kinematics of dynamical cell signaling networks in the embryo, together with the in vitro profiling data, produce quantitative predic...

  12. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to enable very easy scanning of a wide range of furnace operations. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  13. Toward a computational model of hemostasis

    Science.gov (United States)

    Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith

    2017-11-01

    Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depend on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally-validated, computational model of extravascular clot formation (hemostasis), in which blood flowing through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous in vitro microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping, while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.
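
    As a rough, hedged illustration of the kind of system the abstract describes (a one-dimensional finite-difference toy, not the authors' multi-component finite element model), the sketch below transports a normalized platelet concentration toward an assumed injury region where it is consumed by aggregation; all parameters are invented for illustration.

        import numpy as np

        L, N = 1.0, 200                  # channel length (arb. units), grid points
        dx = L / N
        u, D = 0.5, 5e-3                 # advection speed, platelet diffusivity
        k_adh = 50.0                     # adhesion (aggregation) rate at the injury
        dt = 0.4 * min(dx / u, dx * dx / (2 * D))   # stable explicit step

        x = np.linspace(0.0, L, N)
        p = np.ones(N)                   # normalized platelet concentration
        injury = (x > 0.8)               # injury occupies the downstream end

        for _ in range(5000):
            adv = -u * np.gradient(p, dx)                    # carried by the flow
            dif = D * np.gradient(np.gradient(p, dx), dx)    # diffusion
            rxn = -k_adh * p * injury                        # captured at the injury
            p += dt * (adv + dif + rxn)
            p[0] = 1.0                                       # fresh blood at the inlet

        print("platelet fraction escaping past the injury:", round(float(p[-1]), 4))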

  14. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  15. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  16. SBIR PHASE I FINAL REPORT: Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Energy Technology Data Exchange (ETDEWEB)

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States); Punch, Edward F. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States); Kurth, Elizabeth A. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States); Kennedy, James C. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States)

    2013-12-02

    fabrication costs. VFT currently is tied to a commercial solver, which makes it prohibitively expensive for use by SMEs: there is a significant licensing cost for the solver, over and above the relatively minimal cost of VFT. Emc2 developed this software code over a number of years in close cooperation with CAT (Peoria, IL), which currently uses the code exclusively for worldwide fabrication, product design and development activities. The use of VFT has allowed CAT to move directly from design to product fabrication, helping to eliminate (to a large extent) new product prototyping and subsequent testing. Additionally, CAT has been able to eliminate or reduce costly one-of-a-kind appliances used to reduce distortion effects due to fabrication. In this context, SMEs can realize the same kind of improved product quality and reduced cost through adoption of the adapted version of VFT for design and subsequent manufacture of new products. Emc2's DOE SBIR Phase I effort successfully adapted VFT so that SMEs have access to this sophisticated and proven methodology that is quick, accurate and cost-effective and available on demand to address weld-simulation and fabrication problems prior to manufacture. The open source code WARP3D, a high performance finite element code mainly used in fracture and damage assessment of structures, was modified so that computational weld problems can be solved efficiently on multiple processors and threads with VFT. The thermal solver for VFT, based on a series of closed-form solution approximations, was enhanced for solution on multiple processors, greatly increasing overall speed. In addition, the graphical user interface (GUI) has been tailored to integrate these solutions with WARP3D. The GUI is used to define all the weld pass descriptions, number of passes, material properties, consumable properties, weld speed, etc. for the structure to be modeled. The GUI was improved to make it user-friendly for engineers who are not experts in finite element analysis.

  17. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  18. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  19. Computer modeling of the gyrocon

    International Nuclear Information System (INIS)

    Tallerico, P.J.; Rankin, J.E.

    1979-01-01

    A gyrocon computer model is discussed in which the electron beam is followed from the gun output to the collector region. The initial beam may be selected either as a uniform circular beam or may be taken from the output of an electron gun simulated by the program of William Herrmannsfeldt. The fully relativistic equations of motion are then integrated numerically to follow the beam successively through a drift tunnel, a cylindrical rf beam deflection cavity, a combination drift space and magnetic bender region, and an output rf cavity. The parameters for each region are variable input data from a control file. The program calculates power losses in the cavity wall, power required by beam loading, power transferred from the beam to the output cavity fields, and electronic and overall efficiency. Space-charge effects are approximated if selected. Graphical displays of beam motions are produced. We discuss the Los Alamos Scientific Laboratory (LASL) prototype design as an example of code usage. The design shows a gyrocon of about two-thirds megawatt output at 450 MHz with up to 86% overall efficiency
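
    For readers unfamiliar with beam-following codes of this kind, the sketch below pushes a single relativistic electron through static fields with the standard Boris scheme; the field strength, time step, and beam energy are illustrative assumptions, not LASL design parameters.

        import numpy as np

        Q, M, C = -1.602e-19, 9.109e-31, 2.998e8   # electron charge, mass; speed of light

        def boris_push(p, E, B, dt):
            """One Boris step for the relativistic momentum p (kg*m/s)."""
            p_minus = p + Q * E * dt / 2.0
            gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / (M * C) ** 2)
            t = Q * B * dt / (2.0 * gamma * M)       # half-step rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            p_prime = p_minus + np.cross(p_minus, t)
            p_plus = p_minus + np.cross(p_prime, s)
            return p_plus + Q * E * dt / 2.0

        # Assumed 100 keV electron gyrating in a 0.01 T transverse field.
        gamma0 = 1.0 + 1.0e5 * abs(Q) / (M * C ** 2)
        p = np.array([0.0, 0.0, M * C * np.sqrt(gamma0 ** 2 - 1.0)])
        E = np.zeros(3)
        B = np.array([0.0, 0.01, 0.0])
        for _ in range(1000):
            p = boris_push(p, E, B, dt=1e-10)
        print("final momentum (kg*m/s):", p)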

  20. Computer-based modeling in exact sciences research - III ...

    African Journals Online (AJOL)

    Molecular modeling techniques have been of great applicability in the study of the biological sciences and other exact science fields like agriculture, mathematics, computer science and the like. In this write-up, a list of computer programs for predicting, for instance, the structure of proteins has been provided. Discussions on ...

  1. Structure and modeling of turbulence

    International Nuclear Information System (INIS)

    Novikov, E.A.

    1995-01-01

    The "vortex strings" scale l_s ~ L Re^(-3/10) (L - external scale, Re - Reynolds number) is suggested as a grid scale for large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for large-eddy simulations (LES).

  2. Computer modelling of defect structure and rare earth doping in LiCaAlF₆ and LiSrAlF₆

    CERN Document Server

    Amaral, J B; Valerio, M E G; Jackson, R A

    2003-01-01

    This paper describes a computational study of the mixed metal fluorides LiCaAlF₆ and LiSrAlF₆, which have potential technological applications when doped with a range of elements, especially those from the rare earth series. Potentials are derived to represent the structure and properties of the undoped materials, then defect properties are calculated, and finally solution energies for rare earth elements are calculated, enabling preferred dopant sites and charge compensation mechanisms to be predicted.

  3. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  4. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Based on the facts that a computer can be infected by both infected and exposed computers, and that some computers in the susceptible or exposed state can gain immunity through antivirus measures, a novel computer virus model is established and its dynamic behaviors are investigated. First, the basic reproduction number R0, which is a threshold for the spread of the computer virus on the internet, is determined. Second, the model has a virus-free equilibrium P0, at which the infected part of the computer population disappears and the virus dies out; P0 is globally asymptotically stable if R0 < 1. If R0 > 1, the model has a unique viral equilibrium P*, at which the computer virus persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
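
    The record describes an SEIR-type compartment model; the sketch below integrates a generic version with SciPy (the rates are illustrative assumptions, and the paper's immunization terms are omitted for brevity), printing the toy model's basic reproduction number R0 = beta*S(0)/gamma.

        import numpy as np
        from scipy.integrate import odeint

        beta, sigma, gamma = 0.5, 0.2, 0.1   # contact, activation, cure rates (assumed)

        def rhs(y, t):
            S, E, I, R = y   # susceptible, exposed (latent), infected, recovered
            return [-beta * S * I,
                    beta * S * I - sigma * E,
                    sigma * E - gamma * I,
                    gamma * I]

        y0 = [0.95, 0.0, 0.05, 0.0]
        t = np.linspace(0.0, 300.0, 3001)
        y = odeint(rhs, y0, t)
        print("R0 =", beta * y0[0] / gamma)
        print("peak infected fraction:", round(float(y[:, 2].max()), 3))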

  5. Fast computation of the inverse CMH model

    Science.gov (United States)

    Patel, Umesh D.; Della Torre, Edward

    2001-12-01

    A fast computational method based on a differential equation approach for the inverse Della Torre, Oti, Kádár (DOK) model has been extended to the inverse Complete Moving Hysteresis (CMH) model. A cobweb technique for calculating the inverse CMH model is also presented. The two techniques differ from the points of view of flexibility, accuracy, and computation time. Simulation results of the inverse computation for both methods are presented.

  6. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    leaving students. It is a probabilistic model. In the next part of this article, two more models - 'input/output model' used for production systems or economic studies and a. 'discrete event simulation model' are introduced. Aircraft Performance Model.

  7. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online that need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which leverages GRID models with a more flexible direct user model, as an example of a possible solution. In IceCube, a central datacenter at UW-Madison serves as the Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  8. The role of computation in modeling evolution.

    Science.gov (United States)

    Way, E C

    2001-01-01

    Artificial life uses computers and formal systems to model and explore the dynamics of evolution. A fundamental question for A-Life concerns the role of computation in these models, and how we are to interpret computer implementations that do not have actual contact with physical systems. In this paper I will discuss how models are seen to explain and predict in philosophy of science and how these ideas apply to A-Life. I will also explore how the notion of an epistemic cut, as defined by H. Pattee, can be realized in a computational model with an artificial physical world.

  9. Computer Simulation of Sexual Selection on Age-Structured Populations

    Science.gov (United States)

    Martins, S. G. F.; Penna, T. J. P.

    Using computer simulations of a bit-string model for age-structured populations, we found that sexual selection of older males is advantageous, from an evolutionary point of view. These results are in opposition to a recent proposal of females choosing younger males. Our simulations are based on findings from recent studies of polygynous bird species. Since secondary sex characters are found mostly in males, we could make use of asexual populations that can be implemented in a fast and efficient way.
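
    The bit-string model referred to is of the Penna type; below is a minimal asexual sketch under assumed parameters (genome length, mutation threshold, reproduction age, and a Verhulst crowding factor), not the authors' exact simulation.

        import random

        BITS, T, MUT, REPRO_AGE, NMAX = 32, 3, 2, 8, 2000   # assumed parameters

        def child(genome):
            """Copy a parent genome and add MUT fresh deleterious mutations."""
            for _ in range(MUT):
                genome |= 1 << random.randrange(BITS)
            return (0, genome)

        pop = [(0, 0)] * 500                         # (age, genome) pairs
        for _ in range(200):
            nxt = []
            for age, genome in pop:
                # bits below the current age mark mutations that have switched on
                active = bin(genome & ((1 << min(age, BITS)) - 1)).count("1")
                if active >= T or age >= BITS:
                    continue                         # death from mutation load / old age
                if random.random() < len(pop) / NMAX:
                    continue                         # Verhulst (crowding) death
                nxt.append((age + 1, genome))
                if age >= REPRO_AGE:
                    nxt.append(child(genome))
            pop = nxt
        print("survivors after 200 steps:", len(pop))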

  10. Continuum and computational modeling of flexoelectricity

    Science.gov (United States)

    Mao, Sheng

    Flexoelectricity refers to the linear coupling of strain gradient and electric polarization. Early studies of this subject mostly looked at liquid crystals and biomembranes. Recently, the advent of nanotechnology has revealed its importance also in solid structures, such as flexible electronics, thin films, energy harvesters, etc. The energy storage function of a flexoelectric solid depends not only on polarization and strain, but also on strain gradient. This is our basis for formulating a consistent model of flexoelectric solids under small deformation. We derive a higher-order Navier equation for linear isotropic flexoelectric materials which resembles that of Mindlin in gradient elasticity. Closed-form solutions can be obtained for problems such as beam bending, a pressurized tube, etc. Flexoelectric coupling can be enhanced in the vicinity of defects due to strong gradients and decays away in the far field. We quantify this expectation by computing elastic and electric fields near different types of defects in flexoelectric solids. For point defects, we recover some well-known results of non-local theories. For dislocations, we make connections with experimental results on NaCl, ice, etc. For cracks, we perform a crack-tip asymptotic analysis and the results share features from gradient elasticity and piezoelectricity. We compute the J integral and use it for determining fracture criteria. Conventional finite element methods formulated solely on displacement are inadequate to treat flexoelectric solids due to the higher-order governing equations. Therefore, we introduce a mixed formulation which uses displacement and displacement-gradient as separate variables, with their known relation constrained in a weighted integral sense. We derive a variational formulation for boundary value problems for piezo- and/or flexoelectric solids, and validate this computational framework against exact solutions. With this method, more complex problems, including a plate with an elliptical hole, can be treated.

  11. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.

  12. A computational model of the human hand 93-ERI-053

    Energy Technology Data Exchange (ETDEWEB)

    Hollerbach, K.; Axelrod, T.

    1996-03-01

    The objectives of the Computational Hand Modeling project were to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopedic modeling.

  13. Assessing the effects of catch and release regulations on a quality adfluvial brook trout population using a computer based age-structure model

    Science.gov (United States)

    Risley, Casey A.L.; Zydlewski, Joseph D.

    2011-01-01

    Assessing the Effects of Catch-and-Release Regulations on a Brook Trout Population Using an Age-Structured Model. North American Journal of Fisheries Management, Vol. 30, No. 6.

  14. Logic and algebraic structures in quantum computing

    CERN Document Server

    Eskandarian, Ali; Harizanov, Valentina S

    2016-01-01

    Arising from a special session held at the 2010 North American Annual Meeting of the Association for Symbolic Logic, this volume is an international cross-disciplinary collaboration with contributions from leading experts exploring connections across their respective fields. Themes range from philosophical examination of the foundations of physics and quantum logic, to exploitations of the methods and structures of operator theory, category theory, and knot theory in an effort to gain insight into the fundamental questions in quantum theory and logic. The book will appeal to researchers and students working in related fields, including logicians, mathematicians, computer scientists, and physicists. A brief introduction provides essential background on quantum mechanics and category theory, which, together with a thematic selection of articles, may also serve as the basic material for a graduate course or seminar.

  15. Computational models of neurophysiological correlates of tinnitus.

    Science.gov (United States)

    Schaette, Roland; Kempter, Richard

    2012-01-01

    The understanding of tinnitus has progressed considerably in the past decade, but the details of the mechanisms that give rise to this phantom perception of sound without a corresponding acoustic stimulus have not yet been pinpointed. It is now clear that tinnitus is generated in the brain, not in the ear, and that it is correlated with pathologically altered spontaneous activity of neurons in the central auditory system. Both increased spontaneous firing rates and increased neuronal synchrony have been identified as putative neuronal correlates of phantom sounds in animal models, and both phenomena can be triggered by damage to the cochlea. Various mechanisms could underlie the generation of such aberrant activity. At the cellular level, decreased synaptic inhibition and increased neuronal excitability, which may be related to homeostatic plasticity, could lead to an over-amplification of natural spontaneous activity. At the network level, lateral inhibition could amplify differences in spontaneous activity, and structural changes such as reorganization of tonotopic maps could lead to self-sustained activity in recurrently connected neurons. However, it is difficult to disentangle the contributions of different mechanisms in experiments, especially since not all changes observed in animal models of tinnitus are necessarily related to tinnitus. Computational modeling presents an opportunity for evaluating these mechanisms and their relation to tinnitus. Here we review the computational models for the generation of neurophysiological correlates of tinnitus that have been proposed so far, evaluate their predictions, and compare them to available data. We also assess the limits of their explanatory power, thus demonstrating where an understanding is still lacking and where further research may be needed. Identifying appropriate models is important for finding therapies, and we therefore also summarize the implications of the models for approaches to treat tinnitus.
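
    One cellular mechanism the review discusses, homeostatic plasticity, can be illustrated with a deliberately simple gain-adaptation toy (assumed rates and targets, not a specific published model): when hearing loss reduces the driven input, the gain increase that restores the mean firing rate also over-amplifies the unchanged spontaneous input.

        # Toy homeostatic gain adaptation: the neuron scales its gain g so that
        # its long-term mean rate g*(driven + spontaneous) matches a target rate.
        TARGET, TAU, SPONT = 30.0, 200.0, 5.0   # assumed target (Hz), time const, spont input

        def settle(driven, steps=20000, dt=0.1):
            g = 1.0
            for _ in range(steps):
                g += dt / TAU * (TARGET - g * (driven + SPONT))
                g = max(g, 0.0)
            return g

        for label, driven in [("healthy", 25.0), ("hearing loss", 10.0)]:
            g = settle(driven)
            print(f"{label}: gain = {g:.2f}, spontaneous rate = {g * SPONT:.1f} Hz")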

  16. Computational models of neurophysiological correlates of tinnitus

    Directory of Open Access Journals (Sweden)

    Roland eSchaette

    2012-05-01

    The understanding of tinnitus has progressed considerably in the past decade, but the details of the mechanisms that give rise to this phantom perception of sound without a corresponding acoustic stimulus have not been pinpointed yet. It is now clear that tinnitus is generated in the brain, not in the ear, and that it is correlated with pathologically altered spontaneous activity of neurons in the central auditory system. Both increased spontaneous firing rates and increased neuronal synchrony have been identified as putative neuronal correlates of phantom sounds in animal models, and both phenomena can be triggered by damage to the cochlea. Various mechanisms could underlie the generation of such aberrant activity. At the cellular level, decreased synaptic inhibition and increased neuronal excitability, which may be related to homeostatic plasticity, could lead to an over-amplification of natural spontaneous activity. At the network level, lateral inhibition could amplify differences in spontaneous activity, and structural changes such as reorganization of tonotopic maps could lead to self-sustained activity in recurrently connected neurons. It is difficult to disentangle the contributions of different mechanisms in experiments, especially since not all changes observed in animal models of tinnitus are necessarily related to tinnitus. Computational modelling presents an opportunity for evaluating these mechanisms and their relation to tinnitus. Here we review the computational models for the generation of neurophysiological correlates of tinnitus that have been proposed so far, evaluate their predictions and compare them to available data. We also evaluate the limits of their explanatory power, thus demonstrating where an understanding is still lacking and where further research may be needed. Identifying appropriate models is important for finding therapies, and we therefore also summarize the implications of the models for approaches to treat tinnitus.

  17. A computational domain decomposition approach for solving coupled flow-structure-thermal interaction problems

    OpenAIRE

    Eugenio Aulisa; Sandro Manservisi; Padmanabhan Seshaiyer

    2009-01-01

    Solving complex coupled processes involving fluid-structure-thermal interactions is a challenging problem in computational sciences and engineering. Numerous public-domain and commercial codes are currently available in the areas of Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD) and Computational Thermodynamics (CTD). Different groups specializing in modelling individual processes such as CSD, CFD and CTD often come together to solve a complex coupled application.

  18. ECONGAS - model structure

    International Nuclear Information System (INIS)

    1997-01-01

    This report documents a numerical simulation model of the natural gas market in Germany, France, the Netherlands and Belgium. It is part of a project called "Internationalization and structural change in the gas market", aiming to enhance the understanding of the factors behind the current and upcoming changes in the European gas market, especially the downstream part of the gas chain. The model takes European border prices of gas as given, and adds transmission and distribution costs, profit margins and gas taxes to calculate gas prices. The model includes demand sub-models for households, the chemical industry, other industry, the commercial sector and electricity generation. Demand responses to price changes are assumed to take time, and the long-run effects are significantly larger than the short-run effects. For the household sector and the electricity sector, the dynamics are modeled by distinguishing between energy use in the old and new capital stock. In addition to prices and the activity level (GDP), the model includes the extension of the gas network as a potentially important variable in explaining the development of gas demand. The properties of numerical simulation models are often described by dynamic multipliers, which describe the behaviour of important variables when key explanatory variables are changed. At the end, the report shows the results of a model experiment in which the costs of transmission and distribution were reduced. 6 refs., 9 figs., 1 tab
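
    The short-run versus long-run demand dynamics described here can be captured with a partial-adjustment scheme; the sketch below uses an assumed long-run price elasticity and adjustment speed (illustrative values, not ECONGAS parameters) to show a permanent 10% price increase working through demand only gradually.

        import numpy as np

        eps_lr = -0.8    # assumed long-run price elasticity of gas demand
        lam = 0.2        # assumed fraction of the remaining gap closed each year

        def demand_path(price_ratio=1.10, years=15, d0=100.0):
            """Demand after a permanent price increase, with sluggish adjustment."""
            d_target = d0 * price_ratio ** eps_lr   # new long-run demand level
            d, path = d0, []
            for _ in range(years):
                d += lam * (d_target - d)           # slow capital-stock turnover
                path.append(d)
            return np.array(path)

        path = demand_path()
        print("year-1 (short-run) response: %+.1f%%" % (path[0] - 100.0))
        print("year-15 (near long-run) response: %+.1f%%" % (path[-1] - 100.0))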

  19. ECONGAS - model structure

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report documents a numerical simulation model of the natural gas market in Germany, France, the Netherlands and Belgium. It is a part of a project called "Internationalization and structural change in the gas market" aiming to enhance the understanding of the factors behind the current and upcoming changes in the European gas market, especially the downstream part of the gas chain. The model takes European border prices of gas as given, adds transmission and distribution cost and profit margins as well as gas taxes to calculate gas prices. The model includes demand sub-models for households, chemical industry, other industry, the commercial sector and electricity generation. Demand responses to price changes are assumed to take time, and the long run effects are significantly larger than the short run effects. For the household sector and the electricity sector, the dynamics are modeled by distinguishing between energy use in the old and new capital stock. In addition to prices and the activity level (GDP), the model includes the extension of the gas network as a potentially important variable in explaining the development of gas demand. The properties of numerical simulation models are often described by dynamic multipliers, which describe the behaviour of important variables when key explanatory variables are changed. At the end, the report shows the results of a model experiment where the costs in transmission and distribution were reduced. 6 refs., 9 figs., 1 tab.

  20. Computation of Eigenmodes in Long and Complex Accelerating Structures by Means of Concatenation Strategies

    CERN Document Server

    Fligsen, T; Van Rienen, U

    2014-01-01

    The computation of eigenmodes for complex accelerating structures is a challenging and important task for the design and operation of particle accelerators. Discretizing long and complex structures to determine their eigenmodes leads to demanding computations typically performed on supercomputers. This contribution presents an application example of a method to compute eigenmodes, and other parameters derived from them, for long and complex structures using standard workstation computers. This is accomplished by decomposing the complex structure into several single segments. In the next step, the electromagnetic properties of the segments are described in terms of a compact state-space model. Subsequently, the state-space models of the single structures are concatenated to form the full structure. The results of direct calculations are compared with results obtained by the concatenation scheme in terms of computational time and accuracy.

  1. Applications of computer modeling to fusion research

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling.

  2. Applications of computer modeling to fusion research

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling

  3. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  4. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle is presented.

  5. Student Computer Use: Its Organizational Structure and Institutional Support.

    Science.gov (United States)

    Juska, Arunas; Paris, Arthur E.

    1993-01-01

    Examines the structure of undergraduate computing at a large private university, including patterns of use, impact of computer ownership and gender, and the bureaucratic structure in which usage is embedded. The profile of computer use uncovered in a survey is compared with reports offered by the institution and the trade press. (10 references)…

  6. Building a Structural Model: Parameterization and Structurality

    Directory of Open Access Journals (Sweden)

    Michel Mouchart

    2016-04-01

    A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to also be causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls arising when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used for drawing some practical implications of the proposed analysis.

  7. Modelling of data uncertainties on hybrid computers

    International Nuclear Information System (INIS)

    Schneider, Anke

    2016-06-01

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time significant advancements have taken place in the requirements for safety assessment as well as in computer hardware development. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/, whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the help of Monte Carlo simulations will not be

  8. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, for modelling salt- and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t had begun more than 20 years ago. Since that time significant advancements took place in the requirements for safety assessment as well as for computer hardware development. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures which requires basically the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/ whose development had begun in the early nineteen-nineties. However, UG had recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined to one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the

  9. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
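
    At the core of such simulations is the Landau-Lifshitz-Gilbert (LLG) equation; the sketch below integrates it for a single magnetization vector in a static applied field (illustrative damping, field, and time step; the report's phase-field models additionally couple many cells through exchange, anisotropy, and magnetostatic fields).

        import numpy as np

        GAMMA, ALPHA = 1.76e11, 0.1   # gyromagnetic ratio (rad/(s*T)), Gilbert damping

        def llg_rhs(m, H):
            """dm/dt for unit magnetization m in effective field H (tesla)."""
            pre = -GAMMA / (1.0 + ALPHA ** 2)
            return pre * (np.cross(m, H) + ALPHA * np.cross(m, np.cross(m, H)))

        m = np.array([1.0, 0.0, 0.1]); m /= np.linalg.norm(m)
        H = np.array([0.0, 0.0, 0.05])        # assumed applied field along z (T)
        dt = 1e-13
        for _ in range(100_000):              # damped precession toward H
            m += dt * llg_rhs(m, H)
            m /= np.linalg.norm(m)            # renormalize |m| = 1
        print("relaxed magnetization:", np.round(m, 3))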

  10. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Riboswitches, which are located within certain noncoding RNA regions, function as genetic "switches", regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  11. Computing Diverse Optimal Stable Models

    OpenAIRE

    Romero, Javier; Schaub, Torsten; Wanko, Philipp

    2016-01-01

    We introduce a comprehensive framework for computing diverse (or similar) solutions to logic programs with preferences. Our framework provides a wide spectrum of complete and incomplete methods for solving this task. Apart from proposing several new methods, it also accommodates existing ones and generalizes them to programs with preferences. Interestingly, this is accomplished by integrating and automating several basic ASP techniques - being of general interest even beyond diversification. ...

  12. Computer modeling of road bridge for simulation moving load

    Directory of Open Access Journals (Sweden)

    Miličić Ilija M.

    2016-01-01

    This paper presents the computational modelling of a single-span road truss bridge with the roadway on the upper chord. The calculation models were treated as planar and spatial girders made up of 1D finite elements, using the CAA applications Tower and Bridge Designer 2016 (2nd Edition). Computer simulations were carried out to compare the effects of moving loads according to the recommendations of two standards, SRPS and AASHTO. The bridge structure modelled in Bridge Designer 2016 (2nd Edition) was therefore modelled identically in the Tower environment. An important consideration when selecting a computer application is that Bridge Designer 2016 (2nd Edition) cannot treat the moving-load model prescribed by the national standard V600.

  13. Parallel algorithms and archtectures for computational structural mechanics

    Science.gov (United States)

    Patrick, Merrell; Ma, Shing; Mahajan, Umesh

    1989-01-01

    The determination of the fundamental (lowest) natural vibration frequencies and associated mode shapes is a key step used to uncover and correct potential failures or problem areas in most complex structures. However, the computation time taken by finite element codes to evaluate these natural frequencies is significant, often the most computationally intensive part of structural analysis calculations. There is a continuing need to reduce this computation time. This study addresses that need by developing methods for parallel computation.
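
    The underlying computation is the generalized eigenvalue problem K x = ω² M x for the assembled stiffness and mass matrices; the sketch below solves it for an assumed three-degree-of-freedom spring-mass chain rather than a real finite element model.

        import numpy as np
        from scipy.linalg import eigh

        k, m = 1000.0, 2.0                      # assumed stiffness (N/m), mass (kg)
        K = k * np.array([[ 2.0, -1.0,  0.0],
                          [-1.0,  2.0, -1.0],
                          [ 0.0, -1.0,  1.0]])  # fixed-free spring-mass chain
        M = m * np.eye(3)

        w2, modes = eigh(K, M)                  # eigenvalues are omega^2
        freqs_hz = np.sqrt(w2) / (2.0 * np.pi)
        print("natural frequencies (Hz):", np.round(freqs_hz, 2))
        print("fundamental mode shape:", np.round(modes[:, 0], 3))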

  14. Automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

    Science.gov (United States)

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of applications. Since the usefulness of a model for specific application is determined by its accuracy, model quality estimation is an essential component of protein structure prediction. Comparative protein modeling has become a routine approach in many areas of life science research since fully automated modeling systems allow also nonexperts to build reliable models. In this chapter, we describe practical approaches for automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

  15. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate students...

  16. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  17. Computer-Aided Design of RNA Origami Structures.

    Science.gov (United States)

    Sparvath, Steffen L; Geary, Cody W; Andersen, Ebbe S

    2017-01-01

    RNA nanostructures can be used as scaffolds to organize, combine, and control molecular functionalities, with great potential for applications in nanomedicine and synthetic biology. The single-stranded RNA origami method allows RNA nanostructures to be folded as they are transcribed by the RNA polymerase. RNA origami structures provide a stable framework that can be decorated with functional RNA elements such as riboswitches, ribozymes, interaction sites, and aptamers for binding small molecules or protein targets. The rich library of RNA structural and functional elements combined with the possibility to attach proteins through aptamer-based binding creates virtually limitless possibilities for constructing advanced RNA-based nanodevices.In this chapter we provide a detailed protocol for the single-stranded RNA origami design method using a simple 2-helix tall structure as an example. The first step involves 3D modeling of a double-crossover between two RNA double helices, followed by decoration with tertiary motifs. The second step deals with the construction of a 2D blueprint describing the secondary structure and sequence constraints that serves as the input for computer programs. In the third step, computer programs are used to design RNA sequences that are compatible with the structure, and the resulting outputs are evaluated and converted into DNA sequences to order.

  18. A Comparative Study of Multi-material Data Structures for Computational Physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Garimella, Rao Veerabhadra [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-31

    The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or a few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic models of performance through a small test program of the representative cases.
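
    As one illustration of the kind of layout being compared (an assumed sketch, not the report's code), the structure below stores pure cells as a bare material id and routes the few mixed cells through CSR-style side arrays of material ids and volume fractions, keeping storage proportional to cells plus mixed-material entries.

        import numpy as np

        class MultiMatState:
            """Cell-centric compact storage for sparse multi-material state."""
            def __init__(self, ncells):
                self.cell_mat = np.zeros(ncells, dtype=np.int32)  # id, or -(k+1) for mixed slot k
                self.mix_start = [0]   # CSR offsets into the mixed-cell arrays
                self.mix_mat = []      # material ids of mixed-cell components
                self.mix_vf = []       # matching volume fractions

            def set_mixed(self, cell, mats, vfs):
                self.cell_mat[cell] = -len(self.mix_start)        # encode mixed slot
                self.mix_mat.extend(mats)
                self.mix_vf.extend(vfs)
                self.mix_start.append(len(self.mix_mat))

            def volume_fractions(self, cell):
                tag = int(self.cell_mat[cell])
                if tag >= 0:
                    return {tag: 1.0}                             # pure cell
                lo, hi = self.mix_start[-tag - 1], self.mix_start[-tag]
                return dict(zip(self.mix_mat[lo:hi], self.mix_vf[lo:hi]))

        state = MultiMatState(1000)                 # mostly pure cells of material 0
        state.set_mixed(500, mats=[0, 3], vfs=[0.7, 0.3])
        print(state.volume_fractions(499), state.volume_fractions(500))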

  19. COGMIR: A computer model for knowledge integration

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.X.

    1988-01-01

    This dissertation explores some aspects of knowledge integration, namely, the accumulation of scientific knowledge and the performance of analogical reasoning on the acquired knowledge. Knowledge to be integrated is conveyed by paragraph-like pieces referred to as documents. By incorporating some results from cognitive science, the Deutsch-Kraft model of information retrieval is extended to a model for knowledge engineering, which integrates acquired knowledge and performs intelligent retrieval. The resulting computer model is termed COGMIR, which stands for a COGnitive Model for Intelligent Retrieval. A scheme, named query invoked memory reorganization, is used in COGMIR for knowledge integration. Unlike some other schemes, which realize knowledge integration through subjective understanding by representing new knowledge in terms of existing knowledge, the proposed scheme suggests recording, at storage time, only the possible connections of knowledge acquired from different documents. The actual binding of the knowledge acquired from different documents is deferred to query time. There is only one way to store knowledge and numerous ways to utilize it. Each document can be represented as a whole as well as by its meaning. In addition, since facts are constructed from the documents, document retrieval and fact retrieval are treated in a unified way. When the requested knowledge is not available, query invoked memory reorganization can generate suggestions based on available knowledge through analogical reasoning. This is done by revising the algorithms developed for document retrieval and fact retrieval, and by incorporating Gentner's structure mapping theory. Analogical reasoning is treated as a natural extension of intelligent retrieval, so that two previously separate research areas are combined. A case study is provided. All the components are implemented as list structures similar to relational databases.

  20. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Most systems involve parameters and variables, which are random variables due to uncertainties. Probabilistic methods are powerful in modelling such systems. In this second part, we describe probabilistic models and Monte Carlo simulation along with 'classical' matrix methods and differential equations as most real ...

  1. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    A familiar example of a feedback loop is the business model in which part of the output or profit is fed back as input or additional capital - for instance, a company may choose to reinvest 10% of the profit for expansion of the business. Such simple models, like ..., would help scientists, engineers and managers towards better ...

  2. A Computational Analysis Model for Open-ended Cognitions

    Science.gov (United States)

    Morita, Junya; Miwa, Kazuhisa

    In this paper, we propose a novel usage for computational cognitive models. In cognitive science, computational models have played a critical role as theories of human cognition. Many computational models have successfully simulated the results of controlled psychological experiments. However, there have been only a few attempts to apply the models to complex realistic phenomena; we call such a situation an "open-ended situation". In this study, MAC/FAC ("many are called, but few are chosen"), proposed by [Forbus 95], which models the two stages of analogical reasoning, was applied to our open-ended psychological experiment. In our experiment, subjects were presented with a cue story and retrieved cases that they had learned in their everyday life. Following this, they rated the inferential soundness (goodness as analogy) of each retrieved case. For each retrieved case, we computed two kinds of similarity scores (content vectors/structural evaluation scores) using the algorithms of MAC/FAC. As a result, the computed content vectors explained the overall retrieval of cases well, whereas the structural evaluation scores had a strong relation to the rated scores. These results support MAC/FAC's theoretical assumption - different similarities are involved at the two stages of analogical reasoning. Our study is an attempt to use a computational model as an analysis device for open-ended human cognition.
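
    The MAC-stage "content vector" computation can be sketched as follows (invented vocabulary and stories; the real system derives its vectors from structured predicate-calculus representations): each memory item is summarized by predicate counts, and a normalized dot product gives the cheap first-stage retrieval score that precedes the expensive structural (FAC) comparison.

        import numpy as np

        VOCAB = ["cause", "flow", "pressure", "heat", "container"]  # assumed predicates

        def content_vector(predicates):
            return np.array([predicates.count(p) for p in VOCAB], dtype=float)

        def mac_score(cue, memory):
            a, b = content_vector(cue), content_vector(memory)
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        cue = ["cause", "flow", "pressure", "container"]       # e.g. a water-flow story
        memories = {
            "heat flow":   ["cause", "flow", "heat", "container"],
            "electricity": ["cause", "flow", "pressure"],
            "baking":      ["heat", "container"],
        }
        for name, preds in sorted(memories.items(), key=lambda kv: -mac_score(cue, kv[1])):
            print(name, round(mac_score(cue, preds), 3))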

  3. Computer modeling of human decision making

    Science.gov (United States)

    Gevarter, William B.

    1991-01-01

    Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included, as well as models which include motivation. Both models which have associated computer programs and those that do not are considered. Since flow diagrams that assist in constructing computer simulations of such models were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information which can aid in the construction of more realistic future simulations of human decision making.

  4. Computational and Modeling Strategies for Cell Motility

    Science.gov (United States)

    Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory

    A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback

  5. Computer Modelling of Chromosome Territories

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    1999-01-01

    Despite the successful linear sequencing of the human genome its three-dimensional structure is widely unknown. However, the regulation of genes - their transcription and replication - has been shown to be closely connected to the three-dimensional organization of the genome and the cell ...

  6. Computational modelling of SCC flow

    DEFF Research Database (Denmark)

    Geiker, Mette Rica; Thrane, Lars Nyholm; Szabo, Peter

    2005-01-01

    To benefit from the full potential of self-compacting concrete (SCC), prediction tools are needed for the form filling of SCC. Such tools should take into account the properties of the concrete, the shape and size of the structural element, the position of rebars, and the casting technique. Examples...

  7. A new epidemic model of computer viruses

    Science.gov (United States)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-06-01

    This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincaré-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to suppress the prevalence of the virus.
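
    The record does not reproduce the paper's equations, so the following is only a generic illustration of the modelling style (rates are assumed, not the paper's): a two-compartment susceptible/infected model with turnover of connected computers, integrated until it settles at its endemic equilibrium.

      import numpy as np
      from scipy.integrate import odeint

      mu, beta, gamma = 0.05, 0.5, 0.1   # connect/disconnect, infection, cure rates (assumed)

      def rhs(y, t):
          S, I = y
          return [mu + gamma * I - beta * S * I - mu * S,
                  beta * S * I - gamma * I - mu * I]

      t = np.linspace(0.0, 200.0, 2001)
      S, I = odeint(rhs, [0.99, 0.01], t).T
      # analytically, S* = (gamma + mu)/beta = 0.3, so I* = 0.7 here
      print("endemic infected fraction ~", round(I[-1], 3))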

  8. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
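
    The flavour of such a domain model can be suggested with a toy data structure: mesh entities identified by (rank, id), typed relations between them, and an owning processor for distributed-memory parallelism. This is an illustration of the concepts only, not the actual Sierra Toolkit API.

      from dataclasses import dataclass, field

      NODE_RANK, ELEM_RANK = 0, 3

      @dataclass(frozen=True)
      class EntityKey:
          rank: int
          id: int

      @dataclass
      class Entity:
          key: EntityKey
          owner: int = 0                                  # owning MPI rank
          relations: list = field(default_factory=list)   # (ordinal, EntityKey) pairs

      mesh = {}
      for nid in (1, 2, 3, 4):
          k = EntityKey(NODE_RANK, nid)
          mesh[k] = Entity(k)
      elem = Entity(EntityKey(ELEM_RANK, 1),
                    relations=[(i, EntityKey(NODE_RANK, n))
                               for i, n in enumerate((1, 2, 3, 4))])
      mesh[elem.key] = elem
      print(len(mesh), "entities; element 1 connects nodes",
            [r[1].id for r in elem.relations])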

  9. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  10. Computer-Based Modeling Environments

    Science.gov (United States)

    1989-01-01

  11. Computer modeling of flow induced in-reactor vibrations

    International Nuclear Information System (INIS)

    Turula, P.; Mulcahy, T.M.

    1977-01-01

    An assessment of the reliability of finite element method computer models, as applied to the computation of flow induced vibration response of components used in nuclear reactors, is presented. The prototype under consideration was the Fast Flux Test Facility reactor being constructed for US-ERDA. Data were available from an extensive test program which used a scale model simulating the hydraulic and structural characteristics of the prototype components, subjected to scaled prototypic flow conditions as well as to laboratory shaker excitations. Corresponding analytical solutions of the component vibration problems were obtained using the NASTRAN computer code. Modal analyses and response analyses were performed. The effect of the surrounding fluid was accounted for. Several possible forcing function definitions were considered. Results indicate that modal computations agree well with experimental data. Response amplitude comparisons are good only under conditions favorable to a clear definition of the structural and hydraulic properties affecting the component motion. 20 refs
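
    The modal computations mentioned above reduce to the generalized eigenproblem K*phi = omega^2 * M*phi. A toy three-degree-of-freedom spring-mass chain (values invented, far simpler than the NASTRAN models in the study) shows the calculation.

      import numpy as np
      from scipy.linalg import eigh

      k, m = 1.0e6, 10.0                       # spring stiffness [N/m], mass [kg]
      K = k * np.array([[ 2.0, -1.0,  0.0],
                        [-1.0,  2.0, -1.0],
                        [ 0.0, -1.0,  1.0]])
      M = m * np.eye(3)
      w2, phi = eigh(K, M)                     # eigenvalues are omega^2
      print("natural frequencies [Hz]:", np.sqrt(w2) / (2.0 * np.pi))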

  12. Structural disorder model

    International Nuclear Information System (INIS)

    Dixit, P.K.; Vaid, B.A.; Sharma, K.C.

    1986-01-01

    The structural disorder model, recently proposed to explain the thermodynamic properties near first-order transitions, is generalized to include the pressure-induced transitions in tetrahedrally coordinated tin and A^N B^(8-N) compounds (with N = 2, 3). For Sn the calculated values of the change in thermodynamic quantities during the transition are found to be closer to the experimental values. For A^N B^(8-N) compounds, the transition is explained in a satisfactory manner in terms of partial ionic bonds and covalent bonds. The change in compressibility near the transition is found to be in agreement with that obtained from experiments. (author)

  13. Giga-voxel computational morphogenesis for structural design

    Science.gov (United States)

    Aage, Niels; Andreassen, Erik; Lazarov, Boyan S.; Sigmund, Ole

    2017-10-01

    In the design of industrial products ranging from hearing aids to automobiles and aeroplanes, material is distributed so as to maximize the performance and minimize the cost. Historically, human intuition and insight have driven the evolution of mechanical design, recently assisted by computer-aided design approaches. The computer-aided approach known as topology optimization enables unrestricted design freedom and shows great promise with regard to weight savings, but its applicability has so far been limited to the design of single components or simple structures, owing to the resolution limits of current optimization methods. Here we report a computational morphogenesis tool, implemented on a supercomputer, that produces designs with giga-voxel resolution—more than two orders of magnitude higher than previously reported. Such resolution provides insights into the optimal distribution of material within a structure that were hitherto unachievable owing to the challenges of scaling up existing modelling and optimization frameworks. As an example, we apply the tool to the design of the internal structure of a full-scale aeroplane wing. The optimized full-wing design has unprecedented structural detail at length scales ranging from tens of metres to millimetres and, intriguingly, shows remarkable similarity to naturally occurring bone structures in, for example, bird beaks. We estimate that our optimized design corresponds to a reduction in mass of 2–5 per cent compared to currently used aeroplane wing designs, which translates into a reduction in fuel consumption of about 40–200 tonnes per year per aeroplane. Our morphogenesis process is generally applicable, not only to mechanical design, but also to flow systems, antennas, nano-optics and micro-systems.
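
    Whatever the resolution, such topology optimization repeats one inner step: update element densities from compliance sensitivities under a volume constraint. The sketch below shows only that optimality-criteria update with faked sensitivities; a real code obtains them from a finite element solve, and the giga-voxel work distributes all of this over a supercomputer.

      import numpy as np

      def oc_update(x, dc, volfrac, move=0.2):
          # bisect the volume-constraint multiplier lmid
          l1, l2 = 0.0, 1.0e9
          while (l2 - l1) / (l1 + l2 + 1e-30) > 1e-4:
              lmid = 0.5 * (l1 + l2)
              xnew = np.clip(x * np.sqrt(np.maximum(-dc, 0.0) / lmid),
                             np.maximum(x - move, 0.0), np.minimum(x + move, 1.0))
              if xnew.mean() > volfrac:
                  l1 = lmid
              else:
                  l2 = lmid
          return xnew

      x  = np.full(100, 0.4)                  # densities of 100 "voxels"
      dc = -np.linspace(2.0, 0.1, 100)        # fake compliance sensitivities
      x = oc_update(x, dc, volfrac=0.4)
      print("volume fraction:", round(float(x.mean()), 3),
            " max density:", round(float(x.max()), 2))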

  14. Diazole-based powdered cocrystal featuring a helical hydrogen-bonded network: structure determination from PXRD, solid-state NMR and computer modeling.

    Science.gov (United States)

    Sardo, Mariana; Santos, Sérgio M; Babaryk, Artem A; López, Concepción; Alkorta, Ibon; Elguero, José; Claramunt, Rosa M; Mafra, Luís

    2015-02-01

    We present the structure of a new equimolar 1:1 cocrystal formed by 3,5-dimethyl-1H-pyrazole (dmpz) and 4,5-dimethyl-1H-imidazole (dmim), determined by means of powder X-ray diffraction data combined with solid-state NMR that provided insight into topological details of hydrogen bonding connectivities and weak interactions such as CH···π contacts. The use of various 1D/2D ¹³C, ¹⁵N and ¹H high-resolution solid-state NMR techniques provided structural insight on local length scales, revealing internuclear proximities and relative orientations between the dmim and dmpz molecular building blocks of the studied cocrystal. Molecular modeling and DFT calculations were also employed to generate meaningful structures. DFT refinement was able to decrease the figure of merit R(F²) from ~11% (PXRD only) to 5.4%. An attempt was made to rationalize the role of NH···N and CH···π contacts in stabilizing the reported cocrystal. For this purpose four imidazole derivatives with distinct placement of methyl substituents were reacted with dmpz to understand the effect of methylation in blocking or enabling certain intermolecular contacts. Only one imidazole derivative (dmim) was able to incorporate into the dmpz trimeric motif, thus resulting in a cocrystal which contains both hydrophobic (methyl groups) and hydrophilic components that self-assemble to form an atypical 1D network of helicoidal hydrogen-bonded pattern, featuring structural similarities with alpha-helix arrangements in proteins. The 1:1 dmpz···dmim compound I is the first example of a cocrystal formed by two different azoles. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  16. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  17. Computer Modeling of Direct Metal Laser Sintering

    Science.gov (United States)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
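
    A back-of-envelope version of the heat-transfer core (the actual work used MSC SINDA with embedded FORTRAN) is explicit 1D conduction with the laser applied as a surface-flux boundary condition; all material values below are placeholders, and layer addition and laser motion are omitted.

      import numpy as np

      k, rho, cp = 20.0, 7800.0, 500.0      # conductivity, density, heat capacity (assumed)
      alpha = k / (rho * cp)
      dx = 50e-6                            # node spacing [m]
      dt = 0.4 * dx**2 / alpha              # below the explicit stability limit of 0.5
      q_laser = 5.0e6                       # absorbed laser flux while on [W/m^2]
      T = np.full(40, 300.0)                # initial temperature [K]
      peak = T[0]

      for step in range(2000):
          Tn = T.copy()
          Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          flux = q_laser if step < 500 else 0.0            # laser on, then off
          Tn[0] = T[0] + dt / (rho * cp * dx) * (flux - k * (T[0] - T[1]) / dx)
          Tn[-1] = Tn[-2]                                  # insulated far side
          T = Tn
          peak = max(peak, T[0])

      print("peak surface temperature [K]:", round(peak))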

  18. Integrated materials–structural models

    DEFF Research Database (Denmark)

    Stang, Henrik; Geiker, Mette Rica

    2008-01-01

    Reliable service life models for load carrying structures are significant elements in the evaluation of the performance and sustainability of existing and new structures. Furthermore, reliable service life models are prerequisites for the evaluation of the sustainability of maintenance strategies ... A combination of structural modelling and materials concepts will be operational both in identifying important research issues and in answering the 'real' needs of society. Integrated materials-structural models will allow synergy to develop between materials and structural research. On one side the structural modelling should define a framework in which materials research results eventually should fit in, and on the other side the materials research should define needs and capabilities in structural modelling. Integrated materials-structural models of a general nature are almost non-existent in the field of cement based ...

  19. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides the embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.
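
    For readers unfamiliar with the game itself, a bare-bones Minority Game in standard form (not in UAREI notation, and not one of the paper's variants) looks as follows: each agent holds a few random strategy tables over the last m outcomes, plays its best-scoring strategy, and agents on the minority side win.

      import random

      random.seed(0)
      N, m, S, T = 101, 3, 2, 500
      history = random.randrange(2 ** m)                    # encoded last m outcomes
      agents = [[{h: random.choice((0, 1)) for h in range(2 ** m)} for _ in range(S)]
                for _ in range(N)]
      scores = [[0] * S for _ in range(N)]
      wins = 0
      for _ in range(T):
          acts = [ag[max(range(S), key=lambda s: sc[s])][history]
                  for ag, sc in zip(agents, scores)]
          minority = int(sum(acts) < N / 2)                 # side chosen by fewer agents
          wins += sum(a == minority for a in acts)
          for sc, ag in zip(scores, agents):                # update virtual scores
              for s in range(S):
                  sc[s] += ag[s][history] == minority
          history = ((history << 1) | minority) % (2 ** m)
      print("mean winners per round:", wins / T, "of", N)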

  1. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Modelling Deterministic Systems. N K Srinivasan graduated from Indian Institute of Science and obtained his Doctorate from Columbia University, New York. He has taught in several universities, and later did system analysis, wargaming and simulation for defence. His other areas of interest are reliability engineering ...

  2. Computational Modeling of Culture's Consequences

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, T.

    2010-01-01

    This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture,

  3. Computational aspects of premixing modelling

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, D.F. [Sydney Univ., NSW (Australia), Dept. of Chemical Engineering]; Witt, P.J.

    1998-01-01

    In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests, a MAGICO test and to investigate various sensitivities of the solution. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)
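
    The differencing issue described above can be reproduced in one dimension with pure advection of a step profile: central differencing of the convective term yields the qualitatively wrong oscillatory solution, while first-order upwind stays bounded (though diffusive); hybrid schemes switch between such limits. The grid and Courant number below are arbitrary.

      import numpy as np

      n, c = 100, 0.4                       # cells, Courant number u*dt/dx
      u0 = np.where(np.arange(n) < 20, 1.0, 0.0)
      cen, up = u0.copy(), u0.copy()
      for _ in range(60):
          cen[1:-1] -= c * (cen[2:] - cen[:-2]) / 2.0      # central convection
          up[1:] -= c * (up[1:] - up[:-1])                 # upwind convection
      print("central min/max:", round(float(cen.min()), 2), round(float(cen.max()), 2))
      print("upwind  min/max:", round(float(up.min()), 2),  round(float(up.max()), 2))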

  4. Model to Implement Virtual Computing Labs via Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Washington Luna Encalada

    2017-07-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, e-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution's physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a "social cloud", which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning ...

  5. Computing the stresses and deformations of the human eye components due to a high explosive detonation using fluid-structure interaction model.

    Science.gov (United States)

    Karimi, Alireza; Razaghi, Reza; Navidbakhsh, Mahdi; Sera, Toshihiro; Kudo, Susumu

    2016-05-01

    Despite the fact that the eye comprises only a very small fraction of the human body surface area, eye injuries due to detonation have recently increased dramatically. Although many efforts have been devoted to measuring injury of the globe, there is still a lack of knowledge on the injury mechanism due to Primary Blast Wave (PBW). The goal of this study was to determine the stresses and deformations of the human eye components, including the cornea, aqueous, iris, ciliary body, lens, vitreous, retina, sclera, optic nerve, and muscles, attributed to PBW induced by trinitrotoluene (TNT) explosion via a Lagrangian-Eulerian computational coupling model. Magnetic Resonance Imaging (MRI) was employed to establish a Finite Element (FE) model of the human eye according to a normal human eye. The solid components of the eye were modelled as a Lagrangian mesh, while the explosive TNT, air domain, and aqueous were modelled using an Arbitrary Lagrangian-Eulerian (ALE) mesh. Nonlinear dynamic FE simulations were accomplished using the explicit FE code LS-DYNA. In order to simulate the blast wave generation, propagation, and interaction with the eye, the ALE formulation with the Jones-Wilkins-Lee (JWL) equation defining the explosive material was employed. The results revealed a peak stress of 135.70 kPa brought about by the detonation upsurge on the cornea at a distance of 25 cm. The highest von Mises stress was observed on the sclera (267.3 kPa), whereas the lowest was seen on the vitreous body (0.002 kPa). The results also showed a relatively high resultant displacement for the macula as well as a high variation of the radius of curvature for the cornea and lens, which can result in macular holes, optic nerve damage and, consequently, vision loss. These results may have implications not only for understanding the stresses and strains in the human eye components but also for giving an outlook on how PBW triggers damage to the eye. Copyright © 2016 Elsevier Ltd

  6. Computational modeling of epiphany learning.

    Science.gov (United States)

    Chen, Wei James; Krajbich, Ian

    2017-05-02

    Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
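
    A toy version of the sequential-sampling idea (parameters illustrative, not fitted values from the paper): noisy evidence that the optimal strategy is correct accumulates over rounds, and the agent commits, the epiphany, once the accumulator crosses a threshold.

      import random

      random.seed(3)
      drift, noise, threshold = 0.15, 1.0, 5.0

      def rounds_to_epiphany():
          evidence, t = 0.0, 0
          while evidence < threshold:
              evidence += drift + random.gauss(0.0, noise)   # feedback from one round
              t += 1
              if t > 10_000:                                 # never commits
                  return None
          return t

      times = [rounds_to_epiphany() for _ in range(1000)]
      committed = sorted(t for t in times if t is not None)
      print("commit rate:", len(committed) / 1000,
            " median rounds to epiphany:", committed[len(committed) // 2])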

  7. 3D Computational Modeling of Proteins Using Sparse Paramagnetic NMR Data.

    Science.gov (United States)

    Pilla, Kala Bharath; Otting, Gottfried; Huber, Thomas

    2017-01-01

    Computational modeling of proteins using evolutionary or de novo approaches offers rapid structural characterization, but often suffers from low success rates in generating high quality models comparable to the accuracy of structures observed in X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. A computational/experimental hybrid approach incorporating sparse experimental restraints in computational modeling algorithms drastically improves reliability and accuracy of 3D models. This chapter discusses the use of structural information obtained from various paramagnetic NMR measurements and demonstrates computational algorithms implementing pseudocontact shifts as restraints to determine the structure of proteins at atomic resolution.
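
    The pseudocontact shift (PCS) used as a restraint has a closed analytical form: in the principal frame of the magnetic susceptibility anisotropy tensor, for a nucleus at (x, y, z) relative to the metal ion, PCS = 1/(12*pi*r^5) * (Dchi_ax*(2z^2 - x^2 - y^2) + 1.5*Dchi_rh*(x^2 - y^2)), in ppm after scaling by 10^6 when SI units (m, m^3) are used. The values below are typical magnitudes, not data from a specific lanthanide tag.

      from math import pi

      def pcs_ppm(x, y, z, dchi_ax, dchi_rh):
          # coordinates in m, Dchi components in m^3
          r2 = x * x + y * y + z * z
          r5 = r2 ** 2.5
          return 1e6 / (12.0 * pi * r5) * (dchi_ax * (2.0 * z * z - x * x - y * y)
                                           + 1.5 * dchi_rh * (x * x - y * y))

      # nucleus 10 Angstrom along z from the metal, axial anisotropy 10e-32 m^3
      print(round(pcs_ppm(0.0, 0.0, 10e-10, 10e-32, 2e-32), 3), "ppm")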

  8. 3D-DART: a DNA structure modelling server

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2009-01-01

    There is a growing interest in structural studies of DNA by both experimental and computational approaches. Often, 3D-structural models of DNA are required, for instance, to serve as templates for homology modeling, as starting structures for macro-molecular docking or as scaffold for NMR structure

  9. A first course in structural equation modeling

    CERN Document Server

    Raykov, Tenko

    2012-01-01

    In this book, authors Tenko Raykov and George A. Marcoulides introduce students to the basics of structural equation modeling (SEM) through a conceptual, nonmathematical approach. For ease of understanding, the few mathematical formulas presented are used in a conceptual or illustrative nature, rather than a computational one. Featuring examples from EQS, LISREL, and Mplus, A First Course in Structural Equation Modeling is an excellent beginner's guide to learning how to set up input files to fit the most commonly used types of structural equation models with these programs. The basic ideas and methods for conducting SEM are independent of any particular software. Highlights of the Second Edition include: review of latent change (growth) analysis models at an introductory level; coverage of the popular Mplus program; updated examples of LISREL and EQS; a CD that contains all of the text's LISREL, EQS, and Mplus examples. A First Course in Structural Equation Modeling is intended as an introductory book for students...

  10. Structural biology computing: Lessons for the biomedical research sciences.

    Science.gov (United States)

    Morin, Andrew; Sliz, Piotr

    2013-11-01

    The field of structural biology, whose aim is to elucidate the molecular and atomic structures of biological macromolecules, has long been at the forefront of biomedical sciences in adopting and developing computational research methods. Operating at the intersection between biophysics, biochemistry, and molecular biology, structural biology's growth into a foundational framework on which many concepts and findings of molecular biology are interpreted has depended largely on parallel advancements in computational tools and techniques. Without these computing advances, modern structural biology would likely have remained an exclusive pursuit practiced by few, and not become the widely practiced, foundational field it is today. As other areas of biomedical research increasingly embrace research computing techniques, the successes, failures and lessons of structural biology computing can serve as a useful guide to progress in other biomedically related research fields. Copyright © 2013 Wiley Periodicals, Inc.

  11. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward, in which each design sub-process model is discussed. Finally, the design methods of ISO are presented.

  12. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  13. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  14. Giga-voxel computational morphogenesis for structural design

    DEFF Research Database (Denmark)

    Aage, Niels; Andreassen, Erik; Lazarov, Boyan Stefanov

    2017-01-01

    In the design of industrial products ranging from hearing aids to automobiles and aeroplanes, material is distributed so as to maximize the performance and minimize the cost. Historically, human intuition and insight have driven the evolution of mechanical design, recently assisted by computer ... of material within a structure that were hitherto unachievable owing to the challenges of scaling up existing modelling and optimization frameworks. As an example, we apply the tool to the design of the internal structure of a full-scale aeroplane wing. The optimized full-wing design has unprecedented ... aeroplane wing designs, which translates into a reduction in fuel consumption of about 40–200 tonnes per year per aeroplane. Our morphogenesis process is generally applicable, not only to mechanical design, but also to flow systems, antennas, nano-optics and micro-systems ...

  15. Computational architecture for integrated controls and structures design

    Science.gov (United States)

    Belvin, W. Keith; Park, K. C.

    1989-01-01

    To facilitate the development of control structure interaction (CSI) design methodology, a computational architecture for interdisciplinary design of active structures is presented. The emphasis of the computational procedure is to exploit existing sparse matrix structural analysis techniques, in-core data transfer with control synthesis programs, and versatility in the optimization methodology to avoid unnecessary structural or control calculations. The architecture is designed such that all required structure, control and optimization analyses are performed within one program. Hence, the optimization strategy is not unduly constrained by cold starts of existing structural analysis and control synthesis packages.

  16. Models of neuromodulation for computational psychiatry.

    Science.gov (United States)

    Iglesias, Sandra; Tomiello, Sara; Schneebeli, Maya; Stephan, Klaas E

    2017-05-01

    Psychiatry faces fundamental challenges: based on a syndrome-based nosology, it presently lacks clinical tests to infer on the disease processes that cause the symptoms of individual patients, and must resort to trial-and-error treatment strategies. These challenges have fueled the recent emergence of a novel field, computational psychiatry, that strives for mathematical models of disease processes at physiological and computational (information processing) levels. This review is motivated by one particular goal of computational psychiatry: the development of 'computational assays' that can be applied to behavioral or neuroimaging data from individual patients and support differential diagnosis and guide patient-specific treatment. Because the majority of available pharmacotherapeutic approaches in psychiatry target neuromodulatory transmitters, models that infer (patho)physiological and (patho)computational actions of different neuromodulatory transmitters are of central interest for computational psychiatry. This article reviews the (many) outstanding questions on the computational roles of neuromodulators (dopamine, acetylcholine, serotonin, and noradrenaline), outlines available evidence, and discusses promises and pitfalls in translating these findings to clinical applications. WIREs Cogn Sci 2017, 8:e1420. doi: 10.1002/wcs.1420. For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  17. EWE: A computer model for ultrasonic inspection

    Science.gov (United States)

    Douglas, S. R.; Chaplin, K. R.

    1991-11-01

    The computer program EWE simulates the propagation of elastic waves in solids and liquids. It was applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues.
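
    The first-order formulation mentioned in the abstract evolves particle velocity and stress together rather than displacement alone. A 1D staggered-grid sketch (material values arbitrary; cracks, slots and absorbing boundaries omitted, unlike in EWE itself):

      import numpy as np

      n, dx = 400, 1e-3
      rho, E = 7800.0, 2.0e11                 # steel-like density and modulus (assumed)
      c = (E / rho) ** 0.5
      dt = 0.9 * dx / c                       # CFL-limited time step
      v = np.zeros(n)                         # particle velocity
      s = np.zeros(n - 1)                     # stress on the staggered grid
      for step in range(300):
          v[1:-1] += dt / (rho * dx) * (s[1:] - s[:-1])
          v[0] += np.exp(-((step * dt - 1e-5) / 3e-6) ** 2)   # input pulse at x = 0
          s += dt * E / dx * (v[1:] - v[:-1])
      print("pulse peak near cell", int(np.argmax(np.abs(v))))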

  18. Ewe: a computer model for ultrasonic inspection

    International Nuclear Information System (INIS)

    Douglas, S.R.; Chaplin, K.R.

    1991-11-01

    The computer program EWE simulates the propagation of elastic waves in solids and liquids. It has been applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues

  19. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    Science.gov (United States)

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  20. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular. .
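
    The book's case study is u' = -a*u with u(0) = I, and the theta-rule below unifies Forward Euler (theta = 0), Backward Euler (theta = 1) and Crank-Nicolson (theta = 0.5) in a single update; this mirrors how the text organizes the schemes, though the code here is our own minimal sketch.

      from math import exp

      a, I, dt, T = 2.0, 1.0, 0.1, 2.0

      def solve(theta):
          # (u_new - u)/dt = -a*(theta*u_new + (1 - theta)*u)
          u, t, out = I, 0.0, [I]
          while t < T - 1e-12:
              u = u * (1.0 - (1.0 - theta) * a * dt) / (1.0 + theta * a * dt)
              t += dt
              out.append(u)
          return out

      for theta in (0.0, 0.5, 1.0):
          print(f"theta={theta}: u(T)={solve(theta)[-1]:.5f}  exact={exp(-a*T):.5f}")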

  1. Structural dynamic modifications via models

    Indian Academy of Sciences (India)

    of structural dynamic optimization techniques. A review of structural optimization in vibratory environments is given by Rao (1989). 2. SDM techniques. SDM methods may be broadly divided into two groups: those which employ a model of the structure and those that use dynamic test data directly. The model used by the ...

  2. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University]; Hencey, Brondon M. [Cornell University]

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  3. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Background: Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. The Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results: The workshop, "ESF Exploratory Workshop on Computational Disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion: During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  4. A parallel computational model for GATE simulations.

    Science.gov (United States)

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K.; Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support all contributed to this development. We will describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring will be included

  6. A Simple Gauss-Newton Procedure for Covariance Structure Analysis with High-Level Computer Languages.

    Science.gov (United States)

    Cudeck, Robert; And Others

    1993-01-01

    An implementation of the Gauss-Newton algorithm for the analysis of covariance structure that is specifically adapted for high-level computer languages is reviewed. This simple method for estimating structural equation models is useful for a variety of standard models, as is illustrated. (SLD)
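
    The Gauss-Newton step is ordinary linear least squares on the Jacobian: theta <- theta - (J'J)^(-1) J'r, where r is the residual vector. A generic implementation, shown here on a simple curve fit rather than on the article's covariance-structure discrepancy function (which has the same shape):

      import numpy as np

      def gauss_newton(residual, jacobian, theta, iters=20):
          for _ in range(iters):
              r, J = residual(theta), jacobian(theta)
              step, *_ = np.linalg.lstsq(J, r, rcond=None)   # solves J @ step ~ r
              theta = theta - step
          return theta

      # fit y = b0 * exp(b1 * x) to noisy synthetic data
      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 50)
      y = 2.0 * np.exp(-1.5 * x) + rng.normal(0.0, 0.01, 50)
      res = lambda th: th[0] * np.exp(th[1] * x) - y
      jac = lambda th: np.column_stack([np.exp(th[1] * x),
                                        th[0] * x * np.exp(th[1] * x)])
      print(gauss_newton(res, jac, np.array([1.0, -1.0])))   # ~ [2.0, -1.5]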

  7. Probabilistic modeling of timber structures

    DEFF Research Database (Denmark)

    Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2007-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publ...

  8. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks.I first describe how a neurocomputat...

  9. Structural Composites Corrosive Management by Computational Simulation

    Science.gov (United States)

    Chamis, Christos C.; Minnetyan, Levon

    2006-01-01

    A simulation of corrosive management on polymer composites durability is presented. The corrosive environment is assumed to manage the polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature, and moisture, which vary parabolically for voids and linearly for temperature and moisture through the laminate thickness. The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure, and laminate theories. This accounts for starting the simulation from constitutive material properties and up to the laminate scale, which exposes the laminate to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply managed degradation degrades the laminate to the last one or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.

  10. The Structure of the Computer Industry

    Science.gov (United States)

    1992-03-01

    "... alleging that their single-in-line memory modules, known in the industry as SIMMs, violated patents it (Wang) was granted in 1987." (The Wall Street Journal, August ...) DEC, the nation's second largest computer maker, ... again focused on competition in the desktop market of the future (Wall Street Journal Staff, December 3, 1991, B4) ... between PCs and workstations and keeping Microsoft's application software market booming (Wall Street Journal Staff, December 3, 1991, B4). Gates is ...

  11. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    "The authors systematically develop a state-of-the-art analysis and modeling of time series. ... this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book." - Hsun-Hsien Chang, Computing Reviews, March 2012. "My favorite chapters were on dynamic linear models and vector AR and vector ARMA models." - William Seaver, Technometrics, August 2011. "... a very modern entry to the field of time-series modelling, with a rich reference list of the current lit..."

  12. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, PhD students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.

  13. Standard problems for structural computer codes

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.

    1985-01-01

    BNL is investigating the ranges of validity of the analytical methods used to predict the behavior of nuclear safety related structures under accidental and extreme environmental loadings. During FY 85, the investigations concentrated on special problems that can significantly influence the outcome of the soil-structure interaction evaluation process. Specifically, the limitations and applicability of the standard interaction methods when dealing with lift-off, layering and water table effects were investigated. This paper describes the work and the results obtained during FY 85 from the studies on lift-off, layering and water-table effects in soil-structure interaction

  14. Synthesis of computational structures for analog signal processing

    CERN Document Server

    Popa, Cosmin Radu

    2011-01-01

    Presents the most important classes of computational structures for analog signal processing, including differential or multiplier structures, squaring or square-rooting circuits, exponential or Euclidean distance structures and active resistor circuits. Introduces the original concept of the multifunctional circuit, an active structure that is able to implement, starting from the same circuit core, a multitude of continuous mathematical functions. Covers mathematical analysis, design and implementation of a multitude of function generator structures.

  15. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and Refrigeration system performance models in these simulations tools model equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-state and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented

  16. HYPERCOMPOSITIONAL STRUCTURES FROM THE COMPUTER THEORY

    Directory of Open Access Journals (Sweden)

    Geronimos G. Massouros

    1999-02-01

    This paper presents the several types of hypercompositional structures that have been introduced and used for the approach and solution of problems in the theory of languages and automata.

  17. Analysis of a Cloud-Computing-Based Incident Management Model

    Directory of Open Access Journals (Sweden)

    Anggi Sukamto

    2015-05-01

    Information technology support deployed by an organization requires management so that its use meets the goals of adopting the technology. One IT service management framework that organizations can adopt is the Information Technology Infrastructure Library (ITIL). Service support is part of the ITIL process. In general, service support activities are carried out using technology that can be accessed over the internet. This situation points to the concept of cloud computing, which allows an institution or company to manage resources over the internet. The focus of this research is to analyze the processes and the actors involved in service support, in particular in the incident management process, and to identify which actors could potentially be handed over to cloud computing services. Based on the analysis, the proposed cloud-based incident management model can be applied in organizations that already use computer technology to support their operational activities. Keywords: Cloud computing, ITIL, Incident Management, Service Support, Service Desk.

  18. Computational modelling of cellular level metabolism

    International Nuclear Information System (INIS)

    Calvetti, D; Heino, J; Somersalo, E

    2008-01-01

    The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations and possibly the rates of change of some of the concentrations based on data which are often scarce, noisy and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges which have to be met when analyzing the steady and stationary states of a multicompartment model for cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented

  19. Computation of Hyperbolic Structures in Knot Theory

    OpenAIRE

    Weeks, Jeffrey R.

    2003-01-01

    This chapter from the upcoming Handbook of Knot Theory (eds. Menasco and Thistlethwaite) shows how to construct hyperbolic structures on link complements and perform hyperbolic Dehn filling. Along with a new elementary exposition of the standard ideas from Thurston's work, the article includes never-before-published explanations of SnapPea's algorithms for triangulating a link complement efficiently and for converging quickly to the hyperbolic structure while avoiding singularities in the par...

  20. Dynamic term structure models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Meldrum, Andrew

    pricing factors using the sequential regression approach. Our findings suggest that the two models largely provide the same in-sample fit, but loadings from ordinary and risk-adjusted Campbell-Shiller regressions are generally best matched by the shadow rate models. We also find that the shadow rate...... models perform better than the QTSMs when forecasting bond yields out of sample....

  1. Modeling protein structures: construction and their applications.

    Science.gov (United States)

    Ring, C S; Cohen, F E

    1993-06-01

    Although no general solution to the protein folding problem exists, the three-dimensional structures of proteins are being successfully predicted when experimentally derived constraints are used in conjunction with heuristic methods. In the case of interleukin-4, mutagenesis data and CD spectroscopy were instrumental in the accurate assignment of secondary structure. In addition, the tertiary structure was highly constrained by six cysteines separated by many residues that formed three disulfide bridges. Although the correct structure was a member of a short list of plausible structures, the "best" structure was the topological enantiomer of the experimentally determined conformation. For many proteases, other experimentally derived structures can be used as templates to identify the secondary structure elements. In a procedure called modeling by homology, the structure of a known protein is used as a scaffold to predict the structure of another related protein. This method has been used to model a serine and a cysteine protease that are important in the schistosome and malarial life cycles, respectively. The model structures were then used to identify putative small molecule enzyme inhibitors computationally. Experiments confirm that some of these nonpeptidic compounds are active at concentrations of less than 10 microM.

  2. DFT computations of the lattice constant, stable atomic structure and ...

    African Journals Online (AJOL)

    This paper presents the most stable atomic structure and lattice constant of fullerenes (C60). The FHI-aims DFT code was used to predict the stable structure and the computational lattice constant of C60. These were compared with known experimental structures and lattice constants of C60. The results obtained showed that ...

  3. Modeling Structural Brain Connectivity

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø

    The human brain consists of a gigantic complex network of interconnected neurons. Together all these connections determine who we are, how we react and how we interpret the world. Knowledge about how the brain is connected can further our understanding of the brain’s structural organization, help...... improve diagnosis, and potentially allow better treatment of a wide range of neurological disorders. Tractography based on diffusion magnetic resonance imaging is a unique tool to estimate this “structural connectivity” of the brain non-invasively and in vivo. During the last decade, brain connectivity...... has increasingly been analyzed using graph theoretic measures adopted from network science and this characterization of the brain’s structural connectivity has been shown to be useful for the classification of populations, such as healthy and diseased subjects. The structural connectivity of the brain...

  4. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    Science.gov (United States)

    Byun, Chansup; Guruswamy, Guru P.; Kutler, Paul (Technical Monitor)

    1994-01-01

    In recent years, significant advances have been made for parallel computers in both hardware and software. Parallel computers have now become viable tools in computational mechanics. Many application codes developed on conventional computers have been modified to benefit from parallel computers. Significant speedups in some areas have been achieved by parallel computations. For single-discipline use of both fluid dynamics and structural dynamics, computations have been made on wing-body configurations using parallel computers. However, only a limited amount of work has been completed in combining these two disciplines for multidisciplinary applications. The prime reason is the increased level of complication associated with a multidisciplinary approach. In this work, procedures to compute aeroelasticity on parallel computers using direct coupling of fluid and structural equations will be investigated for wing-body configurations. The parallel computer selected for computations is an Intel iPSC/860 computer, which is a distributed-memory, multiple-instruction, multiple-data (MIMD) computer with 128 processors. In this study, the computational efficiency issues of parallel integration of both fluid and structural equations will be investigated in detail. The fluid and structural domains will be modeled using finite-difference and finite-element approaches, respectively. Results from the parallel computer will be compared with those from conventional computers using a single processor. This study will provide an efficient computational tool for the aeroelastic analysis of wing-body structures on MIMD-type parallel computers.
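
    The data dependency that such coupled procedures must respect can be seen in a serial toy version of alternating fluid/structure time stepping: a one-degree-of-freedom "wing" driven by a placeholder aerodynamic load standing in for the finite-difference flow solver. This is a schematic sketch, not the paper's direct-coupling scheme; on a machine like the iPSC/860 the two solves would run on disjoint processor sets and exchange interface loads and deflections every step.

        # Toy staggered fluid-structure stepping: m x'' + c x' + k x = F(x, x')
        m, c, k, dt = 1.0, 0.05, 4.0, 0.01
        x, v = 0.01, 0.0                      # initial displacement, velocity

        def fluid_solve(x, v):
            # stand-in for a CFD step; returns an aerodynamic force (invented law)
            q, dcl_dx = 0.8, -2.0
            return q * (dcl_dx * x - 0.1 * v)

        for step in range(1000):
            F = fluid_solve(x, v)             # 1) advance fluid on current structure state
            a = (F - c * v - k * x) / m       # 2) advance structure with the new load
            v += a * dt                       # semi-implicit Euler update
            x += v * dt

        print("displacement after 10 s:", round(x, 5))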

  5. Oscillating water column structural model

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Guild [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jepsen, Richard Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Margaret Ellen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB), are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.

  6. Description of mathematical models and computer programs

    International Nuclear Information System (INIS)

    1977-01-01

    The paper gives a description of mathematical models and computer programs for analysing possible strategies for spent fuel management, with emphasis on economic analysis. The computer programs developed describe the material flows, facility construction schedules, capital investment schedules and operating costs for the facilities used in managing the spent fuel. The computer programs use a combination of simulation and optimization procedures for the economic analyses. Many of the fuel cycle steps (such as spent fuel discharges, storage at the reactor, and transport to the RFCC) are described in physical and economic terms through simulation modeling, while others (such as reprocessing plant size and commissioning schedules, interim storage facility commissioning schedules etc.) are subjected to economic optimization procedures to determine the approximate lowest-cost plans from among the available feasible alternatives.
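
    The economic core of such programs is discounted cash-flow accounting over the facility schedules. A minimal sketch of that accounting step, with invented costs and discount rate:

        # Present value of facility cost schedules (all figures invented).
        def present_value(cash_flows, rate):
            """cash_flows: {year: cost in M$}; rate: annual discount rate."""
            return sum(cost / (1.0 + rate) ** year for year, cost in cash_flows.items())

        storage = {0: 50.0, 5: 10.0, 10: 10.0}   # capital plus periodic costs
        reprocessing = {3: 200.0, 8: 30.0}
        total = present_value(storage, 0.05) + present_value(reprocessing, 0.05)
        print(f"discounted strategy cost: {total:.1f} M$")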

  7. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. These proceedings contain refereed papers contributed by the participants of AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  8. Experimental and computational study of thaumasite structure

    Energy Technology Data Exchange (ETDEWEB)

    Scholtzová, Eva, E-mail: Eva.Scholtzova@savba.sk [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, 845 36 Bratislava (Slovakia); Kucková, Lenka; Kožíšek, Jozef [Department of Physical Chemistry, Institute of Physical Chemistry and Chemical Physics, Faculty of Chemical and Food Technology, Slovak University of Technology in Bratislava, Radlinského 9, 812 37 Bratislava (Slovakia); Pálková, Helena [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, 845 36 Bratislava (Slovakia); Tunega, Daniel [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, 845 36 Bratislava (Slovakia); Institute for Soil Science, University of Natural Resources and Life Sciences, Peter-Jordanstrasse 82, A-1190 Wien (Austria)

    2014-05-01

    The structure of thaumasite has been studied experimentally by means of single crystal X-ray diffraction and FTIR methods, and theoretically using the density functional theory (DFT) method. Very good agreement was achieved between calculated and experimental structural parameters. In addition, the calculations offered a refinement of the positions of the hydrogen atoms. A detailed analysis of the hydrogen bonds existing in the thaumasite structure has been performed, and several types of hydrogen bonds have been classified. The water molecules coordinating the Ca²⁺ cation act as proton donors in moderate O-H···O hydrogen bonds formed with the CO₃²⁻ and SO₄²⁻ anions. Multiple O-H···O hydrogen bonds exist among the water molecules themselves. Finally, water molecules form relatively weak hydrogen bonds with the OH groups from the coordination sphere of the Si(OH)₆²⁻ anion. Further, the calculated vibrational spectrum allowed a complete assignment of all vibrational modes, which is not available from the experimental spectrum that has a complex structure with overlapped bands, especially below 1500 cm⁻¹. Highlights: • The thaumasite structure was studied experimentally and using the DFT method. • We used the DFT method for the refinement of the positions of hydrogen atoms. • A detailed analysis of the hydrogen bonds was done. • A complete assignment of all bands to particular types of vibrations was done.

  9. Computer methods for transient fluid-structure analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Belytschko, T.; Liu, W.K.

    1985-01-01

    Fluid-structure interaction problems in nuclear engineering are categorized according to the dominant physical phenomena and the appropriate computational methods. Linear fluid models that are considered include acoustic fluids, incompressible fluids undergoing small disturbances, and small amplitude sloshing. Methods available in general-purpose codes for these linear fluid problems are described. For nonlinear fluid problems, the major features of alternative computational treatments are reviewed; some special-purpose and multipurpose computer codes applicable to these problems are then described. For illustration, some examples of nuclear reactor problems that entail coupled fluid-structure analysis are described along with computational results.

  10. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

    Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  11. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  12. Model Checking - Automated Verification of Computational Systems

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 14; Issue 7. Model Checking - Automated Verification of Computational Systems. Madhavan Mukund. General Article Volume 14 Issue 7 July 2009 pp 667-681. Fulltext. Click here to view fulltext PDF. Permanent link:

  13. A Stochastic Dynamic Model of Computer Viruses

    Directory of Open Access Journals (Sweden)

    Chunming Zhang

    2012-01-01

    Full Text Available A stochastic computer virus spread model is proposed and its dynamic behavior is fully investigated. Specifically, we prove the existence and uniqueness of positive solutions, and the stability of the virus-free equilibrium and viral equilibrium, by constructing Lyapunov functions and applying Itô's formula. Some numerical simulations are finally given to illustrate our main results.
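
    Readers who want to experiment numerically can integrate a model of this kind with the Euler-Maruyama scheme. The drift, the multiplicative noise term, and all parameter values below are illustrative stand-ins for the paper's system, not its exact equations.

        import numpy as np

        # Euler-Maruyama simulation of a stochastic SIS-type virus model:
        # dI = [beta I (N - I) - gamma I] dt + sigma I dW   (illustrative form)
        rng = np.random.default_rng(1)
        N, beta, gamma, sigma = 1000.0, 0.0005, 0.2, 0.05
        dt, steps, I = 0.01, 20000, 10.0

        for _ in range(steps):
            drift = beta * I * (N - I) - gamma * I
            I += drift * dt + sigma * I * np.sqrt(dt) * rng.normal()
            I = min(max(I, 0.0), N)           # keep the state in [0, N]

        print("infected computers at t = 200:", round(I))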

  14. Computational modelling for dry-powder inhalers

    NARCIS (Netherlands)

    Kröger, Ralf; Woolhouse, Robert; Becker, Michael; Wachtel, Herbert; de Boer, Anne; Horner, Marc

    2012-01-01

    Computational fluid dynamics (CFD) is a simulation tool used for modelling powder flow through inhalers to allow optimisation both of device design and drug powder. Here, Ralf Kröger, Consulting Senior CFD Engineer, ANSYS Germany GmbH; Marc Horner, Lead Technical Services Engineer, Healthcare,

  15. A Framework for Hybrid Computational Models

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman

    2003-01-01

    Roč. 2, č. 4 (2003), s. 868-873 ISSN 1109-2750 R&D Projects: GA ČR(CZ) GA526/03/Z042; GA ČR(CZ) GA201/01/1192 Institutional research plan: CEZ:AV0Z1030915 Keywords : multi-agent systems * hybrid computational models Subject RIV: BA - General Mathematics

  16. Towards a Computational Model of Sketching

    National Research Council Canada - National Science Library

    Forbus, Kenneth D; Ferguson, Ronald W; Usher, Jeffrey M

    2000-01-01

    .... They then describe four dimensions of sketching -- visual understanding, conceptual understanding, language understanding, and drawing -- that can be used to characterize the competence of existing systems and identify open problems. Three research challenges are posed to serve as milestones towards a computational model of sketching that can explain and replicate human abilities in this area.

  17. A Computational Model of Fraction Arithmetic

    Science.gov (United States)

    Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…

  18. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  19. Images as a basis for computer modelling

    Science.gov (United States)

    Beaufils, D.; LeTouzé, J.-C.; Blondel, F.-M.

    1994-03-01

    New computer technologies, such as the graphics data tablet, video digitization and numerical methods, can be used for measurement and mathematical modelling in physics. Two programs dealing with Newtonian mechanics, and some related scientific activities for A-level students, are described.

  20. Computer Modelling of Photochemical Smog Formation

    Science.gov (United States)

    Huebert, Barry J.

    1974-01-01

    Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)

  1. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  2. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science, outstanding results are yielded by advanced simulation methods based on state-of-the-art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extraction from big data, better understanding and prediction of social behaviour, and modelling of health and environmental changes.

  3. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    Science.gov (United States)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high-order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure is proposed in which to cast perceptual organization research, to clarify both the nomenclature and the relationships among the many contributions. Thirdly, the perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  4. COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION

    Directory of Open Access Journals (Sweden)

    Zdenko Lončarić

    2009-12-01

    Full Text Available Evaluation of manures, composts and growing media quality should include enough properties to enable an optimal use from productivity and environmental points of view. The aim of this paper is to describe the basic structure of an organic fertilizer (and growing media) evaluation model, to present the model example by comparison of different manures, as well as an example of using a plant growth experiment for calculating the impact of pH and EC of growing media on lettuce plant growth. The basic structure of the model includes selection of quality indicators, interpretation of indicator values, and integration of interpreted values into new indexes. The first step includes data input and selection of available data as basic or additional indicators, depending on possible use as fertilizer or growing media. The second part of the model uses the inputs for calculation of derived quality indicators. The third step integrates values into three new indexes: a fertilizer index, a growing media index, and an environmental index. All three indexes are calculated on the basis of three different groups of indicators: basic value indicators, additional value indicators and limiting factors. The possible range of index values is 0-10, where 0-3 means low, 3-7 medium and 7-10 high quality. Comparing fresh and composted manures, higher fertilizer and environmental indexes were determined for composted manures, and the highest fertilizer index was determined for composted pig manure (9.6) whereas the lowest was for fresh cattle manure (3.2). Composted manures had a high environmental index (6.0-10) for conventional agriculture, but some had no value (environmental index = 0) for organic agriculture because of too high zinc, copper or cadmium concentrations. Growing media indexes were determined according to their impact on lettuce growth. Growing media with different pH and EC resulted in very significant impacts on height, dry matter mass and leaf area of lettuce seedlings. The highest lettuce
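
    A schematic of the three-group index integration described above (the weighting scheme and the limiting-factor rule here are assumptions for illustration, not the authors' published coefficients):

        def quality_index(basic, additional, limiting):
            """Integrate interpreted indicator scores (each already on 0-10)
            into one 0-10 index; a limiting factor can only lower the result."""
            base = (0.7 * sum(basic) / len(basic)
                    + 0.3 * sum(additional) / len(additional))
            return min([base] + limiting)

        def label(ix):
            return "low" if ix < 3 else "medium" if ix < 7 else "high"

        # hypothetical composted manure: good nutrients, zinc above the organic limit
        fert = quality_index(basic=[9, 10, 9], additional=[8, 9], limiting=[10])
        env = quality_index(basic=[8, 9, 7], additional=[9, 8], limiting=[0])
        print(f"fertilizer index {fert:.1f} ({label(fert)}), "
              f"environmental index {env:.1f} ({label(env)})")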

  5. Structural Post-optimisation of a computationally designed Plywood Gridshell

    DEFF Research Database (Denmark)

    Lafuente Hernández, Elisa; Tamke, Martin; Gengnagel, Christoph

    2012-01-01

    Computational design is being commonly used for the exploration of new geometries and systems in architecture. Complex parametric definitions allow not only spatial shaping but also the integration of material simulation and afterwards robotic fabrication. Nevertheless a structurally-efficient design...

  6. Computational modeling of synthetic microbial biofilms.

    Science.gov (United States)

    Rudge, Timothy J; Steiner, Paul J; Phillips, Andrew; Haseloff, Jim

    2012-08-17

    Microbial biofilms are complex, self-organized communities of bacteria, which employ physiological cooperation and spatial organization to increase both their metabolic efficiency and their resistance to changes in their local environment. These properties make biofilms an attractive target for engineering, particularly for the production of chemicals such as pharmaceutical ingredients or biofuels, with the potential to significantly improve yields and lower maintenance costs. Biofilms are also a major cause of persistent infection, and a better understanding of their organization could lead to new strategies for their disruption. Despite this potential, the design of synthetic biofilms remains a major challenge, due to the complex interplay between transcriptional regulation, intercellular signaling, and cell biophysics. Computational modeling could help to address this challenge by predicting the behavior of synthetic biofilms prior to their construction; however, multiscale modeling has so far not been achieved for realistic cell numbers. This paper presents a computational method for modeling synthetic microbial biofilms, which combines three-dimensional biophysical models of individual cells with models of genetic regulation and intercellular signaling. The method is implemented as a software tool (CellModeller), which uses parallel Graphics Processing Unit architectures to scale to more than 30,000 cells, typical of a 100 μm diameter colony, in 30 min of computation time.

  7. Probabilistic Modeling of Timber Structures

    DEFF Research Database (Denmark)

    Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2005-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present pro...... probabilistic model for these basic properties is presented and possible refinements are given related to updating of the probabilistic model given new information, modeling of the spatial variation of strength properties and the duration of load effects.......The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present...

  8. A Fuzzy Petri Nets Model for Computing With Words

    OpenAIRE

    Cao, Yongzhi; Chen, Guoqing

    2009-01-01

    Motivated by Zadeh's paradigm of computing with words rather than numbers, several formal models of computing with words have recently been proposed. These models are based on automata and thus are not well-suited for concurrent computing. In this paper, we incorporate the well-known model of concurrent computing, Petri nets, together with fuzzy set theory and thereby establish a concurrency model of computing with words--fuzzy Petri nets for computing with words (FPNCWs). The new feature of ...

  9. Linking Experimental Characterization and Computational Modeling in Microstructural Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, Melik Cumhar [Univ. of Pittsburgh, PA (United States)

    2002-06-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface-dominated material properties, and finally, to verify experimental results with computer simulations. In order to accomplish this objective, we studied the grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an Aluminum film (120 μm thick) with a columnar grain structure from the Electron Backscattered Diffraction (EBSD) measurements. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar Aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity between grain growth experiments and anisotropic three-dimensional simulations.

  10. Linking Experimental Characterization and Computational Modeling in Microstructural Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, Melik Cumhur [Univ. of California, Berkeley, CA (United States)

    2002-06-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface-dominated material properties, and finally, to verify experimental results with computer simulations. In order to accomplish this objective, we studied the grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an Aluminum film (120 μm thick) with a columnar grain structure from the Electron Backscattered Diffraction (EBSD) measurements. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar Aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity

  11. SPAR Model Structural Efficiencies

    Energy Technology Data Exchange (ETDEWEB)

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC's Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • development of a standard methodology and implementation of support system initiating events • treatment of loss of offsite power • development of a standard approach for emergency core cooling following containment failure. Some of the related issues were not fully resolved. This project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other higher-priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are • SPAR model transparency • common cause failure modeling deficiencies and approaches • AC and DC modeling deficiencies and approaches • instrumentation and control system modeling deficiencies and approaches

  12. Interactive computer graphics and its role in control system design of large space structures

    Science.gov (United States)

    Reddy, A. S. S. R.

    1985-01-01

    This paper attempts to show the relevance of interactive computer graphics in the design of control systems to maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model (modeling the dynamics, modal analysis, and the control system design methodology), are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.

  13. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  14. Computing specified generators of structured matrix inverses

    OpenAIRE

    Jeannerod , Claude-Pierre; Mouilleron , Christophe

    2010-01-01

    The asymptotically fastest known divide-and-conquer methods for inverting dense structured matrices are essentially variations or extensions of the Morf/Bitmead-Anderson algorithm. Most of them must deal with the growth in length of intermediate generators, and this is done by incorporating various generator compression techniques into the algorithms. One exception is an algorithm by Cardinal, which in the particular case of Cauchy-like matrices avoids such growth by f...

  15. Computer Model Of Fragmentation Of Atomic Nuclei

    Science.gov (United States)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.

    1995-01-01

    High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.

  16. COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION

    OpenAIRE

    Lončarić, Zdenko; Vukobratović, Marija; Ragaly, Peter; Filep, Tibor; Popović, Brigita; Karalić, Krunoslav; Vukobratović, Želimir

    2009-01-01

    Evaluation of manures, composts and growing media quality should include enough properties to enable an optimal use from productivity and environmental points of view. The aim of this paper is to describe basic structure of organic fertilizer (and growing media) evaluation model to present the model example by comparison of different manures as well as example of using plant growth experiment for calculating impact of pH and EC of growing media on lettuce plant growth. The basic structure of ...

  17. Queuing theory models for computer networks

    Science.gov (United States)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. The impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed using them, because the models do not require fine detail about the network traffic rates, traffic patterns, or the hardware used to implement the networks. A sample use of the models applied to a realistic problem is included in appendix A; appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
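
    The models in question are of the simple open-network kind; the core formula is the M/M/1 mean response time T = 1/(mu - lambda) per link. A sketch with invented link capacities and loads:

        # Mean response time of messages crossing a chain of network links,
        # each modeled as an independent M/M/1 queue: T = 1 / (mu - lam).
        def mm1_response(service_rate, arrival_rate):
            if arrival_rate >= service_rate:
                raise ValueError("link saturated: utilization >= 1")
            return 1.0 / (service_rate - arrival_rate)

        # invented path: LAN -> gateway -> backbone (rates in messages/second)
        links = [(1000.0, 400.0), (500.0, 350.0), (2000.0, 900.0)]
        total = sum(mm1_response(mu, lam) for mu, lam in links)
        print(f"mean end-to-end response: {total * 1000:.2f} ms")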

  18. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    Full Text Available The medical domain has become one of the most important areas of research owing to the richness of the huge amounts of medical information about the symptoms of diseases and how to distinguish between them in order to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts make treatment decisions. This paper introduces four hybrid Rough-Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithm and Rough Mereology Theory. A comparative analysis of the various knowledge discovery models, which use different knowledge discovery techniques for data pre-processing, reduction, and data mining, supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose was to enhance the frame of KDD processes for supervised learning using the Granular Computing methodology.

  19. Fast loop modeling for protein structures

    Science.gov (United States)

    Zhang, Jiong; Nguyen, Son; Shang, Yi; Xu, Dong; Kosztin, Ioan

    2015-03-01

    X-ray crystallography is the main method for determining 3D protein structures. In many cases, however, flexible loop regions of proteins cannot be resolved by this approach. This leads to incomplete structures in the Protein Data Bank, preventing further computational study and analysis of these proteins. For instance, all-atom molecular dynamics (MD) simulation studies of structure-function relationships require complete protein structures. To address this shortcoming, we have developed and implemented an efficient computational method for building missing protein loops. The method is database driven and uses deep learning and multi-dimensional scaling algorithms. We have implemented the method as a simple stand-alone program, which can also be used as a plugin in existing molecular modeling software, e.g., VMD. The quality and stability of the generated structures are assessed and tested via energy scoring functions and by equilibrium MD simulations. The proposed method can also be used in template-based protein structure prediction. Work supported by the National Institutes of Health [R01 GM100701]. Computer time was provided by the University of Missouri Bioinformatics Consortium.
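
    The multi-dimensional scaling ingredient mentioned here is easy to illustrate: given a matrix of pairwise distances for a set of atoms, classical MDS recovers 3D coordinates up to rotation and translation. The distance matrix below is synthetic, and this is a generic textbook version of MDS rather than the authors' pipeline.

        import numpy as np

        def classical_mds(D, dim=3):
            """Recover coordinates from a matrix of pairwise Euclidean distances."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
            B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
            w, V = np.linalg.eigh(B)
            idx = np.argsort(w)[::-1][:dim]          # top eigenpairs
            return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

        # synthetic test: six points on a helix, rebuilt from distances alone
        t = np.linspace(0, 2 * np.pi, 6)
        X = np.c_[np.cos(t), np.sin(t), 0.3 * t]
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        Xr = classical_mds(D)
        Dr = np.linalg.norm(Xr[:, None] - Xr[None, :], axis=-1)
        print("max distance error:", np.abs(D - Dr).max())   # zero up to round-off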

  20. Computational Aerodynamic Modeling of Small Quadcopter Vehicles

    Science.gov (United States)

    Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.

    2017-01-01

    High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.

  1. Evolutionary triplet models of structured RNA.

    Directory of Open Access Journals (Sweden)

    Robert K Bradley

    2009-08-01

    Full Text Available The reconstruction and synthesis of ancestral RNAs is a feasible goal for paleogenetics. This will require new bioinformatics methods, including a robust statistical framework for reconstructing histories of substitutions, indels and structural changes. We describe a "transducer composition" algorithm for extending pairwise probabilistic models of RNA structural evolution to models of multiple sequences related by a phylogenetic tree. This algorithm draws on formal models of computational linguistics as well as the 1985 protosequence algorithm of David Sankoff. The output of the composition algorithm is a multiple-sequence stochastic context-free grammar. We describe dynamic programming algorithms, which are robust to null cycles and empty bifurcations, for parsing this grammar. Example applications include structural alignment of non-coding RNAs, propagation of structural information from an experimentally-characterized sequence to its homologs, and inference of the ancestral structure of a set of diverged RNAs. We implemented the above algorithms for a simple model of pairwise RNA structural evolution; in particular, the algorithms for maximum likelihood (ML) alignment of three known RNA structures and a known phylogeny, and inference of the common ancestral structure. We compared this ML algorithm to a variety of related, but simpler, techniques, including ML alignment algorithms for simpler models that omitted various aspects of the full model, and also a posterior-decoding alignment algorithm for one of the simpler models. In our tests, incorporation of basepair structure was the most important factor for accurate alignment inference; appropriate use of posterior-decoding was next; and fine details of the model were least important. Posterior-decoding heuristics can be substantially faster than exact phylogenetic inference, so this motivates the use of sum-over-pairs heuristics where possible (and approximate sum-over-pairs. For more exact
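
    The full transducer-composition machinery is beyond a snippet, but the flavor of dynamic programming over nested base-paired structures can be conveyed by the far simpler single-sequence Nussinov base-pair maximization algorithm, shown below as a stand-in (it is not the paper's model).

        # Nussinov base-pair maximization: the simplest dynamic program over
        # nested RNA secondary structures.
        def nussinov(seq, min_loop=3):
            pair = {("A", "U"), ("U", "A"), ("G", "C"),
                    ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = N[i][j - 1]                     # j left unpaired
                    for k in range(i, j - min_loop):       # j pairs with k
                        if (seq[k], seq[j]) in pair:
                            left = N[i][k - 1] if k > i else 0
                            best = max(best, left + 1 + N[k + 1][j - 1])
                    N[i][j] = best
            return N[0][n - 1]

        print(nussinov("GGGAAAUCC"))   # -> 3 base pairs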

  2. Analysis of a Model for Computer Virus Transmission

    OpenAIRE

    Qin, Peng

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers to the network and the removing of old computers from the network are considered. Meanwhile, the computers are equipped with antivirus software on the computer network. The computer virus model is established. Through the analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our t...

  3. Molecular Sieve Bench Testing and Computer Modeling

    Science.gov (United States)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.

    1995-01-01

    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable because changes in these parameters influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain the test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with linear driving force for the 5X sorbent and pore diffusion for silica gel are then applied to test data. A more complex model, a non-Darcian (two-dimensional) model, has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
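
    A compact illustration of two of the ingredients named here, the finite-difference discretization and the linear-driving-force (LDF) uptake: a one-dimensional isothermal plug-flow column with a linear isotherm. All parameters are invented; the flight models add energy balances, axial dispersion, and real isotherms.

        import numpy as np

        # 1-D isothermal plug-flow adsorption column, explicit finite differences.
        # LDF uptake dq/dt = k (q* - q), linear isotherm q* = K c.
        nz, L, u, eps = 50, 0.5, 0.1, 0.4     # cells, length (m), velocity (m/s), voidage
        k, K = 0.01, 100.0                    # LDF coefficient (1/s), isotherm slope
        dz = L / nz
        dt = 0.5 * dz / u                     # respect the CFL limit
        c, q = np.zeros(nz), np.zeros(nz)     # gas and adsorbed-phase concentrations
        c_feed = 1.0

        for step in range(16000):             # 800 s of simulated time
            dqdt = k * (K * c - q)                                    # LDF uptake
            adv = -u * np.diff(np.concatenate(([c_feed], c))) / dz    # upwind advection
            c += dt * (adv - (1 - eps) / eps * dqdt)
            q += dt * dqdt

        print("outlet breakthrough fraction:", round(c[-1] / c_feed, 3))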

  4. Magnetohydrodynamic modeling of coronal structure and expansion

    Science.gov (United States)

    Suess, S. T.

    1983-01-01

    The presence of a magnetic field in the corona adds structure to the solar wind and almost certainly plays an important role in the energetics of the flow. Analytical and numerical modeling of gas-magnetic field interactions, as used to compute steady, global flow, is discussed. The approach used in, and results from, a recent global model (Steinolfson, Suess and Wu, 1982) are discussed. Ideas on the most effective ways to improve the physical content and numerical efficiency of these models are outlined. Solutions of the MHD equations are discussed only in order to find steady-state flows, even though this often entails solving time-dependent equations.

  5. Structure of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Langacker, Paul [Pennsylvania Univ., PA (United States). Dept. of Physics

    1996-07-01

    This lecture presents the structure of the standard model, addressing the following aspects: the standard model Lagrangian, spontaneous symmetry breaking, gauge interactions (covering charged currents, quantum electrodynamics, the neutral current and gauge self-interactions), and problems with the standard model, such as the gauge, fermion, Higgs and hierarchy, strong CP and graviton problems.

  6. Computation of the Structure Factor of Some Transition Liquid Metals

    African Journals Online (AJOL)

    Applying the solution of the Percus-Yevick equation to a one-component hard-sphere system and using a recently developed potential for transition liquid metals, the structure factor of transition liquid metals was computed. The peak height and peak position of the structure factor of the liquid metals were studied.
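
    For the hard-sphere reference system, the Percus-Yevick equation has a closed-form solution, and the resulting structure factor can be evaluated directly (the Ashcroft-Lekner form). A sketch, with an arbitrarily chosen packing fraction:

        import numpy as np

        def S_py_hard_sphere(x, eta):
            """Percus-Yevick hard-sphere structure factor S(q);
            x = q*sigma (reduced wavevector), eta = packing fraction."""
            a = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
            b = -6 * eta * (1 + eta / 2) ** 2 / (1 - eta) ** 4
            g = eta * a / 2
            sx, cx = np.sin(x), np.cos(x)
            I = (a * (sx - x * cx) / x**3
                 + b * (2 * x * sx + (2 - x**2) * cx - 2) / x**4
                 + g * (-x**4 * cx + 4 * ((3 * x**2 - 6) * cx
                                          + (x**3 - 6 * x) * sx + 6)) / x**6)
            return 1.0 / (1.0 + 24 * eta * I)

        x = np.linspace(0.5, 15.0, 6)   # avoid x = 0 in this naive evaluation
        print(S_py_hard_sphere(x, eta=0.45).round(3))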

  7. Computer-aided visualization of database structural relationships

    International Nuclear Information System (INIS)

    Cahn, D.F.

    1980-04-01

    Interactive computer graphic displays can be extremely useful in augmenting understandability of data structures. In complexly interrelated domains such as bibliographic thesauri and energy information systems, node and link displays represent one such tool. This paper presents examples of data structure representations found useful in these domains and discusses some of their generalizable components. 2 figures

  8. NASA CST aids U.S. industry. [computational structures technology

    Science.gov (United States)

    Housner, Jerry M.; Pinson, Larry D.

    1993-01-01

    The effect of NASA's Computational Structures Technology (CST) research on aerospace vehicle design and operation is discussed. The application of this research to a proposed version of a high-speed civil transport, to composite structures in aerospace, to the study of crack growth, and to resolving field problems is addressed.

  9. A Multiscale Computational Model for Predicting Damage Evolution in Viscoelastic Composites Subjected to Impact Loading

    National Research Council Canada - National Science Library

    Reddy, J. N

    2005-01-01

    ... structures subjected to ballistic impact. The model is three dimensional and computational in nature, utilizing the finite element method, and this model is being implemented to the explicit code DYNA3D...

  10. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    Science.gov (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  11. Computing the Ediz eccentric connectivity index of discrete dynamic structures

    Science.gov (United States)

    Wu, Hualong; Kamran Siddiqui, Muhammad; Zhao, Bo; Gan, Jianhou; Gao, Wei

    2017-06-01

    From earlier studies in the physical and chemical sciences, it is known that the physico-chemical characteristics of chemical compounds are intimately connected with their molecular structures. As a theoretical basis, this provides a new way of thinking: analyzing the molecular structure of a compound in order to understand its physical and chemical properties. In this article, we study the physico-chemical properties of certain molecular structures by computing the Ediz eccentric connectivity index from a mathematical standpoint. The results rely mainly on distance and degree computation techniques and mathematical derivation, and the conclusions have guiding significance in physical engineering.
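
    For a given molecular graph the index is elementary to compute: the Ediz eccentric connectivity index is E(G) = sum over vertices v of S(v)/ecc(v), where S(v) is the degree sum over the neighbours of v and ecc(v) its eccentricity. A sketch using networkx, on a small path graph standing in for a molecular skeleton:

        import networkx as nx

        def ediz_eccentric_connectivity(G):
            """Ediz eccentric connectivity index: sum_v S(v) / ecc(v),
            with S(v) the degree sum of the neighbours of v."""
            ecc = nx.eccentricity(G)
            return sum(sum(G.degree(u) for u in G.neighbors(v)) / ecc[v] for v in G)

        G = nx.path_graph(5)                   # a 5-vertex path graph
        print(ediz_eccentric_connectivity(G))  # -> 5.0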

  12. Generative models for chemical structures.

    Science.gov (United States)

    White, David; Wilson, Richard C

    2010-07-26

    We apply recently developed techniques for pattern recognition to construct a generative model for chemical structure. This approach can be viewed as ligand-based de novo design. We construct a statistical model describing the structural variations present in a set of molecules which may be sampled to generate new structurally similar examples. We prevent the possibility of generating chemically invalid molecules, according to our implicit hydrogen model, by projecting samples onto the nearest chemically valid molecule. By populating the input set with molecules that are active against a target, we show how new molecules may be generated that will likely also be active against the target.

  13. Computational modelling of acetabular cup migration

    Czech Academy of Sciences Publication Activity Database

    Jírová, Jitka; Micka, Michal; Jíra, Josef; Sosna, A.; Pokorný, D.

    2003-01-01

    Roč. 5, Supplement (2003), s. 218-223 ISSN 1509-409X. [International Conference Biomechanics 2003. Poznaň, Polsko, 24.09.2003-26.09.2003] R&D Projects: GA ČR GA103/00/0831; GA ČR GA106/01/0535 Institutional research plan: CEZ:AV0Z2071913; CEZ:MSM 21200025 Keywords : Orthopaedics * pelvis * computational modelling Subject RIV: FI - Traumatology, Orthopedics

  14. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, exit, connecting elbows and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height, to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are used in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computational meshes, and the solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  15. An approximate fractional Gaussian noise model with O(n) computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood-based approach is $\mathcal{O}(n^2)$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to $\mathcal{O}(n^3)$. This paper presents an approximate fGn model of $\mathcal{O}(n)$ computational cost, both with direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
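    The fGn autocorrelation at lag $k$ is $\rho(k) = \tfrac{1}{2}(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})$. A rough numerical sketch of the mixture idea described above, fitting nonnegative weights for a fixed set of AR(1) coefficients by least squares; the paper's actual procedure also chooses the coefficients themselves, so the grid below is an assumption:

```python
import numpy as np
from scipy.optimize import nnls

H = 0.8          # Hurst parameter
K = 200          # number of lags to match
phis = np.array([0.30, 0.75, 0.95, 0.99])  # fixed AR(1) coefficients (assumption)

k = np.arange(K + 1)
rho_fgn = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))

# Columns: autocorrelation functions phi**k of the AR(1) components.
A = np.stack([p**k for p in phis], axis=1)
w, _ = nnls(A, rho_fgn)   # nonnegative mixture weights
w /= w.sum()              # normalize so the mixture has unit correlation at lag 0
```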

  16. Bayesian Computational Sensor Networks for Aircraft Structural Health Monitoring

    Science.gov (United States)

    2016-02-02

    AFRL-AFOSR-VA-TR-2016-0094: Bayesian Computational Sensor Networks for Aircraft Structural Health Monitoring. Thomas Henderson, University of Utah, Salt Lake City, UT 84112. Grant Number: FA9550-12-1-0291; AFOSR PI: Dr. Frederica Darema; 25 January 2016. Executive Summary: ... samples provided by a sensor network. This approach was applied to the aircraft structural health monitoring problem.

  17. Mechanical Modelling and Computational Issues in Civil Engineering

    OpenAIRE

    FREMOND, M; MACERI, F

    2005-01-01

    In this edited book various novel approaches to problems of modern civil engineering are demonstrated. Experts associated within the Lagrange Laboratory present recent research results in civil engineering dealing both with modelling and computational aspects. Many modern topics are covered, such as monumental dams, soil mechanics and geotechnics, granular media, contact and friction problems, damage and fracture, new structural materials, and vibration damping - presenting the state of the art...

  18. The deterministic computational modelling of radioactivity

    International Nuclear Information System (INIS)

    Damasceno, Ralf M.; Barros, Ricardo C.

    2009-01-01

    This paper describes a computational application (software) that models simple radioactive decay, decay to stable nuclei, and directly coupled chain decay with an upper limit of thirteen radioactive decays. It includes an internal data bank with the decay constants of the various known decays, which considerably facilitates use of the program by people who are not connected to the nuclear area and have no specialized knowledge of it. The paper presents numerical results for typical model problems.
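    As a minimal sketch of the kind of calculation such a program performs, the Bateman solution for a two-member chain 1 -> 2 -> stable gives N2(t) = N1(0) * lam1/(lam2 - lam1) * (exp(-lam1*t) - exp(-lam2*t)); the isotope pair and half-lives below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def bateman_pair(N1_0, lam1, lam2, t):
    """Populations of a two-member decay chain 1 -> 2 -> stable, N2(0) = 0."""
    N1 = N1_0 * np.exp(-lam1 * t)
    N2 = N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    return N1, N2

# Illustrative pair: Mo-99 (half-life ~66 h) -> Tc-99m (half-life ~6 h)
lam = lambda t_half: np.log(2) / t_half
t = np.linspace(0.0, 72.0, 145)                    # hours
N1, N2 = bateman_pair(1.0, lam(66.0), lam(6.0), t)
```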

  19. Computational Design Modelling : Proceedings of the Design Modelling Symposium

    CERN Document Server

    Kilian, Axel; Palz, Norbert; Scheurer, Fabian

    2012-01-01

    This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts discuss the computational processes within this field, aiming to develop a broader and less exotic building practice that bears more subtle but powerful traces of the complex tool set and approaches developed and studied over recent years. The outcome is new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by both responsibility towards processes and the consequences they initiate.

  20. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; for a single response, this compares with a factor of about 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
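    The economics described above (one adjoint solve yields derivatives with respect to thousands of parameters) can be illustrated on a toy linear model J = g·u with A u = B p: solving the single adjoint system Aᵀλ = g gives dJ/dp = Bᵀλ for every parameter at once. A numpy sketch under these simplifying assumptions; ADGEN itself operates on FORTRAN source, which this does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 3000                      # state size, number of parameters

A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))
B = rng.normal(size=(n, m))          # b(p) = B p, so db/dp = B
p = rng.normal(size=m)

u = np.linalg.solve(A, B @ p)        # one forward solve
g = np.ones(n)                       # response J = g @ u, so dJ/du = g

lam_adj = np.linalg.solve(A.T, g)    # one adjoint solve...
dJ_dp = B.T @ lam_adj                # ...gives all 3000 derivatives at once

# Check one component against a finite-difference perturbation.
eps = 1e-6
p2 = p.copy()
p2[0] += eps
fd = (g @ np.linalg.solve(A, B @ p2) - g @ u) / eps
assert np.isclose(dJ_dp[0], fd, rtol=1e-4)
```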

  1. ADGEN: ADjoint GENerator for computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; for a single response, this compares with a factor of about 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.

  2. Interpreting USANS intensities from computer generated fractal structures

    International Nuclear Information System (INIS)

    Bertram, W.K.

    2003-01-01

    Full text: Recent developments in the technique of high resolution Ultra Small Angle Neutron Scattering (USANS) have made this an important tool for investigating the microstructure of a wide variety of materials, in particular those that exhibit scale invariance over a range of scale lengths. The USANS spectrum from a material may show scale invariance that is indicative of a fractal structure in the material but it may also merely reflect the random nature of the sizes and shapes of the scattering entities that make up the material. USANS often allows us to measure the coherent elastic scattering cross sections well into the Guinier region. By analysing the measured scattering intensities using fractal derived models, values are obtained for certain parameters from which certain properties of the material may be obtained. In particular, the porosity can be obtained provided the average volume of the constituents of the material can be calculated. One of the parameters in the analysis is the correlation length, which may be interpreted as the scale length beyond which the material ceases to be fractal. However the relation between this parameter and an average particle size is not at all clear. To throw some light on this, we have used computer simulations to generate a number of fractal-like structures to obtain size distributions and porosities. USANS intensities were calculated from these structures and fitted using a standard fractal model to obtain values for the correlation lengths. The relation between porosity, average particle size and correlation length was investigated. Results are presented that show that the porosity of a fractal system is best calculated using the correlation length parameter to estimate the average particle volume

  3. Protein 3D structure computed from evolutionary sequence variation.

    Directory of Open Access Journals (Sweden)

    Debora S Marks

    Full Text Available The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7-4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein structures.
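    For orientation, the simplest pairwise signal one can extract from a multiple sequence alignment is the mutual information between two columns; the maximum entropy model described above goes further by separating direct couplings from such raw, partly transitive correlations. A toy sketch (the alignment below is fabricated purely for illustration):

```python
import numpy as np
from collections import Counter

def column_mi(msa, i, j):
    """Mutual information (in nats) between columns i and j of an alignment."""
    n = len(msa)
    pairs = Counter(zip(msa[:, i], msa[:, j]))
    fi = Counter(msa[:, i])
    fj = Counter(msa[:, j])
    mi = 0.0
    for (a, b), c in pairs.items():
        p_ab = c / n
        mi += p_ab * np.log(p_ab * n * n / (fi[a] * fj[b]))  # p_ab / (p_a * p_b)
    return mi

msa = np.array([list(s) for s in ["ACDA", "ACDA", "AKEA", "AKEA", "GKEG"]])
print(column_mi(msa, 1, 2))  # columns 1 and 2 co-vary strongly
```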

  4. Computational acoustic modeling of cetacean vocalizations

    Science.gov (United States)

    Gurevich, Michael Dixon

    A framework for computational acoustic modeling of hypothetical vocal production mechanisms in cetaceans is presented. As a specific example, a model of a proposed source in the larynx of odontocetes is developed. Whales and dolphins generate a broad range of vocal sounds, but the exact mechanisms they use are not conclusively understood. In the fifty years since it has become widely accepted that whales can and do make sound, how they do so has remained particularly confounding. Cetaceans' highly divergent respiratory anatomy, along with the difficulty of internal observation during vocalization have contributed to this uncertainty. A variety of acoustical, morphological, ethological and physiological evidence has led to conflicting and often disputed theories of the locations and mechanisms of cetaceans' sound sources. Computational acoustic modeling has been used to create real-time parametric models of musical instruments and the human voice. These techniques can be applied to cetacean vocalizations to help better understand the nature and function of these sounds. Extensive studies of odontocete laryngeal morphology have revealed vocal folds that are consistently similar to a known but poorly understood acoustic source, the ribbon reed. A parametric computational model of the ribbon reed is developed, based on simplified geometrical, mechanical and fluid models drawn from the human voice literature. The physical parameters of the ribbon reed model are then adapted to those of the odontocete larynx. With reasonable estimates of real physical parameters, both the ribbon reed and odontocete larynx models produce sounds that are perceptually similar to their real-world counterparts, and both respond realistically under varying control conditions. Comparisons of acoustic features of the real-world and synthetic systems show a number of consistencies. While this does not on its own prove that either model is conclusively an accurate description of the source, it

  5. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    Science.gov (United States)

    Journal Article Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predicted in the rabbit.

  6. Survey of Antenna Design Computer Models

    Science.gov (United States)

    1992-12-24

    A. Taflove and K. R. Umashankar, "FDTD Analysis of Electromagnetic Wave Radiation from Systems Containing Horn Antennas," IEEE Trans. on Antennas and Propagation, is cited. [Figure: sample grid structure of a waveguide slot antenna. Figure 4: grid structure for finite element modeling of a waveguide slot antenna and surrounding air.] Finite elements are frequently shaped like triangles or rectangles in 2D.

  7. Temporal structures in shell models

    DEFF Research Database (Denmark)

    Okkels, F.

    2001-01-01

    The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...
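    A sketch of the Gledzer-Ohkitani-Yamada (GOY) shell model in one common parameterization (treated here as an assumption; the paper's exact coefficients, forcing and integration scheme may differ): du_n/dt = i k_n (u*_{n+1} u*_{n+2} - (eps/lam) u*_{n-1} u*_{n+1} - ((1-eps)/lam^2) u*_{n-1} u*_{n-2}) - nu k_n^2 u_n + f_n, with k_n = k0 * lam^n.

```python
import numpy as np

N, lam, eps, nu = 22, 2.0, 0.5, 1e-7      # shells, shell spacing, interaction, viscosity
k = 0.0625 * lam ** np.arange(1, N + 1)   # wavenumbers k_n
u = 1e-3 * (1 + 1j) * k ** (-1.0 / 3.0)   # rough Kolmogorov-like initial condition
f = np.zeros(N, complex)
f[3] = 5e-3 * (1 + 1j)                    # constant forcing on shell 4 (assumption)

def rhs(u):
    uc = np.zeros(N + 4, complex)
    uc[2:N + 2] = np.conj(u)              # two ghost shells of zeros on each side
    n = np.arange(N)
    nonlin = 1j * k * (uc[n + 3] * uc[n + 4]
                       - (eps / lam) * uc[n + 1] * uc[n + 3]
                       - ((1 - eps) / lam**2) * uc[n + 1] * uc[n])
    return nonlin - nu * k**2 * u + f

dt = 1e-4
for _ in range(20000):                    # explicit midpoint (RK2) time stepping
    u = u + dt * rhs(u + 0.5 * dt * rhs(u))
```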

  8. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  9. Computer models for optimizing radiation therapy

    International Nuclear Information System (INIS)

    Duechting, W.

    1998-01-01

    The aim of this contribution is to outline how methods of system analysis, control theory and modelling can be applied to simulate normal and malignant cell growth and to optimize cancer treatment, such as radiation therapy. Based on biological observations and cell kinetic data, several types of models have been developed describing the growth of tumor spheroids and the cell renewal of normal tissue. The irradiation model is represented by the so-called linear-quadratic model describing the survival fraction as a function of the dose. Based thereon, numerous simulation runs for different treatment schemes can be performed. Thus, it is possible to study the radiation effect on tumor and normal tissue separately. Finally, this method enables a computer-assisted recommendation for an optimal patient-specific treatment schedule prior to clinical therapy. (orig.) [de]
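    The linear-quadratic model named above gives the surviving fraction after a dose D as S(D) = exp(-alpha*D - beta*D^2); over n equal fractions the per-fraction survivals multiply. A small sketch with illustrative parameter values (alpha/beta = 10 Gy is a commonly quoted ballpark for early-responding tissue, not a value from this paper):

```python
import numpy as np

def surviving_fraction(D, alpha=0.3, beta=0.03):
    """Linear-quadratic model: S(D) = exp(-alpha*D - beta*D**2).

    alpha (1/Gy) and beta (1/Gy**2) are illustrative tissue parameters."""
    return np.exp(-alpha * D - beta * D**2)

# Fractionated scheme: n fractions of d Gy each multiply the survival.
n, d = 30, 2.0
print(surviving_fraction(d) ** n)
```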

  10. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  11. Toxicity Assessment of Atrazine and Related Triazine Compounds in the Microtox Assay, and Computational Modeling for Their Structure-Activity Relationship

    Directory of Open Access Journals (Sweden)

    Jerzy Leszczynski

    2000-10-01

    Full Text Available The triazines are a group of chemically similar herbicides including atrazine, cyanazine, and propazine, primarily used to control broadleaf weeds. About 64 to 80 million lbs of atrazine alone are used each year in the United States, making it one of the two most widely used pesticides in the country. All triazines are somewhat persistent in water and mobile in soil. They are among the most frequently detected pesticides in groundwater. They are considered as possible human carcinogens (Group C) based on an increase in mammary gland tumors in female laboratory animals. In this research, we performed the Microtox Assay to investigate the acute toxicity of a significant number of triazines including atrazine, atraton, ametryne, bladex, prometryne, and propazine, and some of their degradation products including atrazine desethyl, atrazine deisopropyl, and didealkylated triazine. Tests were carried out as described by Azur Environmental [1]. The procedure measured the relative acute toxicity of triazines, producing data for the calculation of triazine concentrations effecting 50% reduction in bioluminescence (EC50s). Quantitative structure-activity relationships (QSAR) were examined based on the molecular properties obtained from quantum mechanical predictions performed for each compound. Toxicity tests yielded EC50 values of 39.87, 273.20, 226.80, 36.96, 81.86, 82.68, 12.74, 11.80, and 78.50 mg/L for atrazine, propazine, prometryne, atraton, atrazine desethyl, atrazine deisopropyl, didealkylated triazine, ametryne, and bladex, respectively, indicating that ametryne was the most toxic chemical while propazine was the least toxic. QSAR evaluation resulted in a coefficient of determination (r²) of 0.86, indicating a good value of toxicity prediction based on the chemical structures/properties of tested triazines.

  12. Parallel computation of geometry control in adaptive truss structures

    Science.gov (United States)

    Ramesh, A. V.; Utku, S.; Wada, B. K.

    1992-01-01

    The fast computation of geometry control in adaptive truss structures involves two distinct parts: the efficient integration of the inverse kinematic differential equations that govern the geometry control, and the fast computation of the Jacobian, which appears on the right-hand side of the inverse kinematic equations. This paper presents an efficient parallel implementation of the Jacobian computation on an MIMD machine. Large speedup from the parallel implementation is obtained, which reduces the Jacobian computation to an O(M²/n) procedure on an n-processor machine, where M is the number of members in the adaptive truss. The parallel algorithm given here is a good candidate for on-line geometry control of adaptive structures using attached processors.

  13. Aeroelastic modelling without the need for excessive computing power

    Energy Technology Data Exchange (ETDEWEB)

    Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Technology, Dept. of Electronic and Electrical Engineering, Loughborough (United Kingdom)

    1996-09-01

    The aeroelastic model presented here was developed specifically to represent a wind turbine manufactured by Northern Power Systems which features a passive pitch control mechanism. It was considered that this particular turbine, which also has low solidity flexible blades, and is free yawing, would provide a stringent test of modelling approaches. It was believed that blade element aerodynamic modelling would not be adequate to properly describe the combination of yawed flow, dynamic inflow and unsteady aerodynamics; consequently a wake modelling approach was adopted. In order to keep computation time limited, a highly simplified, semi-free wake approach (developed in previous work) was used. A similarly simple structural model was adopted, with up to only six degrees of freedom in total. In order to take account of blade (flapwise) flexibility a simple finite element sub-model is used. Good quality data from the turbine has recently been collected and it is hoped to undertake model validation in the near future. (au)

  14. Computer Models in Biomechanics From Nano to Macro

    CERN Document Server

    Kuhl, Ellen

    2013-01-01

    This book contains a collection of papers that were presented at the IUTAM Symposium on “Computer Models in Biomechanics: From Nano to Macro” held at Stanford University, California, USA, from August 29 to September 2, 2011. It contains state-of-the-art papers on: - Protein and Cell Mechanics: coarse-grained model for unfolded proteins, collagen-proteoglycan structural interactions in the cornea, simulations of cell behavior on substrates - Muscle Mechanics: modeling approaches for Ca2+–regulated smooth muscle contraction, smooth muscle modeling using continuum thermodynamical frameworks, cross-bridge model describing the mechanoenergetics of actomyosin interaction, multiscale skeletal muscle modeling - Cardiovascular Mechanics: multiscale modeling of arterial adaptations by incorporating molecular mechanisms, cardiovascular tissue damage, dissection properties of aortic aneurysms, intracranial aneurysms, electromechanics of the heart, hemodynamic alterations associated with arterial remodeling followin...

  15. Computational Models for Calcium-Mediated Astrocyte Functions

    Directory of Open Access Journals (Sweden)

    Tiina Manninen

    2018-04-01

    Full Text Available The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted to find that hundreds of models have been developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online, which makes it difficult to reproduce the simulation results and further develop

  16. Computational analysis of RNA structures with chemical probing data.

    Science.gov (United States)

    Ge, Ping; Zhang, Shaojie

    2015-06-01

    RNAs play various roles, not only as the genetic codes to synthesize proteins, but also as the direct participants of biological functions determined by their underlying high-order structures. Although many computational methods have been proposed for analyzing RNA structures, their accuracy and efficiency are limited, especially when applied to the large RNAs and the genome-wide data sets. Recently, advances in parallel sequencing and high-throughput chemical probing technologies have prompted the development of numerous new algorithms, which can incorporate the auxiliary structural information obtained from those experiments. Their potential has been revealed by the secondary structure prediction of ribosomal RNAs and the genome-wide ncRNA function annotation. In this review, the existing probing-directed computational methods for RNA secondary and tertiary structure analysis are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  18. Computer modeling for optimal placement of gloveboxes

    International Nuclear Information System (INIS)

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  19. Computer simulation of the formation of tweed and modulated structures in decomposition reactions

    International Nuclear Information System (INIS)

    Chen, S.; Morris, J.W. Jr.; Khachaturyan, A.G.

    1979-03-01

    A model of coarsening in a heterogeneous cubic alloy with cubic or tetragonal precipitates is proposed. According to the model the coarsening is controlled by the relaxation of the elastic strain energy. The computer simulation of coarsening demonstrates good agreement with electron microscopic observation of the structure and diffraction pattern

  20. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-03-01

    Full Text Available With high speed and accuracy the parallel manipulators have wide application in industry, but there still exist many difficulties in the actual control process because of time-varying dynamics and coupling. Unfortunately, present-day commercial controllers cannot provide satisfactory performance since they implement only single-axis linear control. Therefore, aimed at a novel 2-DOF (Degree of Freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced, according to equivalent torques between the mechanical structure and the PM (Permanent Magnet) servomotor. Second, the computed torque controller is described in detail for the above proposed model. Finally, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify the favourable tracking ability and robustness.
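    A generic sketch of the computed torque law, tau = M(q)(ddq_d + Kd*de + Kp*e) + C(q, dq) + g(q); the dynamic-model callables and gain values below are placeholders, and the paper's scheme additionally folds the motor-mechanism coupling into the model terms.

```python
import numpy as np

def computed_torque(q, dq, q_d, dq_d, ddq_d, M, C, g, Kp, Kd):
    """Computed torque law: tau = M(q)(ddq_d + Kd*de + Kp*e) + C(q,dq) + g(q).

    M, C, g are callables supplied by an (assumed) dynamic model of the mechanism."""
    e, de = q_d - q, dq_d - dq
    return M(q) @ (ddq_d + Kd @ de + Kp @ e) + C(q, dq) + g(q)

# Toy 2-DOF example: constant inertia, no Coriolis or gravity terms.
M = lambda q: np.diag([1.2, 0.8])
C = lambda q, dq: np.zeros(2)
g = lambda q: np.zeros(2)
Kp, Kd = np.diag([100.0, 100.0]), np.diag([20.0, 20.0])
tau = computed_torque(np.zeros(2), np.zeros(2),
                      np.array([0.1, -0.2]), np.zeros(2), np.zeros(2),
                      M, C, g, Kp, Kd)
```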

  1. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-11-01

    Full Text Available With high speed and accuracy the parallel manipulators have wide application in industry, but there still exist many difficulties in the actual control process because of time-varying dynamics and coupling. Unfortunately, present-day commercial controllers cannot provide satisfactory performance since they implement only single-axis linear control. Therefore, aimed at a novel 2-DOF (Degree of Freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced, according to equivalent torques between the mechanical structure and the PM (Permanent Magnet) servomotor. Second, the computed torque controller is described in detail for the above proposed model. Finally, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify the favourable tracking ability and robustness.

  2. Computer models track atmospheric radionuclides worldwide

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    The big sponge is what initiates call ARAC-the Atmospheric Release Advisory Capability-and it is vital to the clean-up after a nuclear accident. But this sobriquet doesn't refer to a propensity for mopping up radiation. It alludes to ARAC's ability to soak up data on weather conditions, regional geography, and the release of radionuclides into the atmosphere at thousands of sites around the globe. ARAC is a contingent of about 30 physicists, meteorologists, electronic engineers, computer scientists, and technicians who work at the Department of Energy's (DOE) Lawrence Livermore National Laboratory across the bay from San Francisco. The ARAC staff employs computer models to estimate the extent of surface contamination as well as radiation doses to population centers after hypothetical or real nuclear accidents. ARAC works fast. Within 15 minutes of an accident, it can produce a contour map estimating levels of radiation exposure within a 20-km radius of the accident site

  3. Computational modelling in materials at the University of the North

    CSIR Research Space (South Africa)

    Ngoepe, PE

    2005-09-01

    Full Text Available We review computational modelling studies in materials resulting from the National Research Foundation... and titanium metal. A series of ilmenite-structured MeTiO3 compounds (Me = Fe, Mg, Zn, Mn) has been studied using DFT-PWP methods. Their structural and electronic properties have been explored in terms of Me and pressure [10]. However, the DFT methods alone could...

  4. Computer modeling of Cannabinoid receptor type 1

    Directory of Open Access Journals (Sweden)

    Sapundzhi Fatima

    2018-01-01

    Full Text Available Cannabinoid receptors are an important class of receptors as they are involved in various physiological processes such as appetite, pain-sensation, mood, and memory. It is important to design receptor-selective ligands in order to treat a particular disorder. The aim of the present study is to model the structure of the cannabinoid receptor CB1 and to perform docking between the obtained models and known ligands. Two models of CB1 were prepared with two different methods (the Modeller interface of Chimera, and MOE) and used for docking with GOLD 5.2. A high correlation was established between the inhibitory constant Ki of CB1 cannabinoid ligands and the ChemScore scoring function of GOLD, and this holds for both models. This suggests that the CB1 receptor models obtained could be used for docking studies and in further investigation and design of new potential, selective and active cannabinoids with the desired effects.

  5. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    OpenAIRE

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2012-01-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric da...

  6. Computer models in the design of FXR

    Energy Technology Data Exchange (ETDEWEB)

    Vogtlin, G.; Kuenning, R.

    1980-01-01

    Lawrence Livermore National Laboratory is developing a 15 to 20 MeV electron accelerator with a beam current goal of 4 kA. This accelerator will be used for flash radiography and has a requirement of high reliability. Components being developed include spark gaps, Marx generators, water Blumleins and oil insulation systems. A SCEPTRE model was developed that takes into consideration the non-linearity of the ferrite and the time dependency of the emission from a field emitter cathode. This model was used to predict an optimum charge time to obtain maximum magnetic flux change from the ferrite. This model and its application will be discussed. JASON was used extensively to determine optimum locations and shapes of supports and insulators. It was also used to determine stress within bubbles adjacent to walls in oil. Computer results will be shown and bubble breakdown will be related to bubble size.

  7. Computer models in the design of FXR

    International Nuclear Information System (INIS)

    Vogtlin, G.; Kuenning, R.

    1980-01-01

    Lawrence Livermore National Laboratory is developing a 15 to 20 MeV electron accelerator with a beam current goal of 4 kA. This accelerator will be used for flash radiography and has a requirement of high reliability. Components being developed include spark gaps, Marx generators, water Blumleins and oil insulation systems. A SCEPTRE model was developed that takes into consideration the non-linearity of the ferrite and the time dependency of the emission from a field emitter cathode. This model was used to predict an optimum charge time to obtain maximum magnetic flux change from the ferrite. This model and its application will be discussed. JASON was used extensively to determine optimum locations and shapes of supports and insulators. It was also used to determine stress within bubbles adjacent to walls in oil. Computer results will be shown and bubble breakdown will be related to bubble size

  8. Computer-aided modeling framework – a generic modeling template for catalytic membrane fixed bed reactors

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2013-01-01

    This work focuses on the development of a computer-aided modeling framework. The framework is a knowledge-based system that is built on a generic modeling language and structured based on workflows for different general modeling tasks. The overall objective of this work is to support the model developer... A template for catalytic membrane fixed bed models is developed. The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene.

  9. Computational Tools To Model Halogen Bonds in Medicinal Chemistry.

    Science.gov (United States)

    Ford, Melissa Coates; Ho, P Shing

    2016-03-10

    The use of halogens in therapeutics dates back to the earliest days of medicine when seaweed was used as a source of iodine to treat goiters. The incorporation of halogens to improve the potency of drugs is now fairly standard in medicinal chemistry. In the past decade, halogens have been recognized as direct participants in defining the affinity of inhibitors through a noncovalent interaction called the halogen bond or X-bond. Incorporating X-bonding into structure-based drug design requires computational models for the anisotropic distribution of charge and the nonspherical shape of halogens, which lead to their highly directional geometries and stabilizing energies. We review here current successes and challenges in developing computational methods to introduce X-bonding into lead compound discovery and optimization during drug development. This fast-growing field will push further development of more accurate and efficient computational tools to accelerate the exploitation of halogens in medicinal chemistry.

  10. Heart Modeling, Computational Physiology and the IUPS Physiome Project

    Science.gov (United States)

    Hunter, Peter J.

    The Physiome Project of the International Union of Physiological Sciences (IUPS) is attempting to provide a comprehensive framework for modelling the human body using computational methods which can incorporate the biochemistry, biophysics and anatomy of cells, tissues and organs. A major goal of the project is to use computational modelling to analyse integrative biological function in terms of underlying structure and molecular mechanisms. To support that goal the project is developing XML markup languages (CellML & FieldML) for encoding models, and software tools for creating, visualizing and executing these models. It is also establishing web-accessible physiological databases dealing with model-related data at the cell, tissue, organ and organ system levels. Two major developments in current medicine are, on the one hand, the much publicised genomics (and soon proteomics) revolution and, on the other, the revolution in medical imaging in which the physiological function of the human body can be studied with a plethora of imaging devices such as MRI, CT, PET, ultrasound, electrical mapping, etc. The challenge for the Physiome Project is to link these two developments for an individual - to use complementary genomic and medical imaging data, together with computational modelling tailored to the anatomy, physiology and genetics of that individual, for patient-specific diagnosis and treatment.

  11. Nonlinear Kalman Filtering in Affine Term Structure Models

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Dorion, Christian; Jacobs, Kris

    When the relationship between security prices and state variables in dynamic term structure models is nonlinear, existing studies usually linearize this relationship because nonlinear filtering is computationally demanding. We conduct an extensive investigation of this linearization and analyze ... in fixed income pricing with nonlinear relationships between the state vector and the observations, such as the estimation of term structure models using coupon bonds and the estimation of quadratic term structure models.

  12. Handbook of structural equation modeling

    CERN Document Server

    Hoyle, Rick H

    2012-01-01

    The first comprehensive structural equation modeling (SEM) handbook, this accessible volume presents both the mechanics of SEM and specific SEM strategies and applications. The editor, contributors, and editorial advisory board are leading methodologists who have organized the book to move from simpler material to more statistically complex modeling approaches. Sections cover the foundations of SEM; statistical underpinnings, from assumptions to model modifications; steps in implementation, from data preparation through writing the SEM report; and basic and advanced applications, inclu

  13. Computational model of a whole tree combustor

    Energy Technology Data Exchange (ETDEWEB)

    Bryden, K.M.; Ragland, K.W. [Univ. of Wisconsin, Madison, WI (United States)

    1993-12-31

    A preliminary computational model has been developed for the whole tree combustor and compared to test results. In the simulation model presented, hardwood logs 15 cm in diameter are burned in a 4 m deep fuel bed. Solid and gas temperature, solid and gas velocity, and CO, CO₂, H₂O, HC and O₂ profiles are calculated. This deep, fixed bed combustor obtains high energy release rates per unit area due to the high inlet air velocity and extended reaction zone. The lowest portion of the overall bed is an oxidizing region and the remainder of the bed acts as a gasification and drying region. The overfire air region completes the combustion. Approximately 40% of the energy is released in the lower oxidizing region. The wood consumption rate obtained from the computational model is 4,110 kg/m²-hr, which matches well the consumption rate of 3,770 kg/m²-hr observed during the peak test period of the Aurora, MN test. The predicted heat release rate is 16 MW/m² (5.0×10⁶ Btu/hr-ft²).

  14. Dual-code quantum computation model

    Science.gov (United States)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.

  15. Nonlinear structural mechanics theory, dynamical phenomena and modeling

    CERN Document Server

    Lacarbonara, Walter

    2013-01-01

    Nonlinear Structural Mechanics: Theory, Dynamical Phenomena and Modeling offers a concise, coherent presentation of the theoretical framework of nonlinear structural mechanics, computational methods, applications, parametric investigations of nonlinear phenomena and their mechanical interpretation towards design. The theoretical and computational tools that enable the formulation, solution, and interpretation of nonlinear structures are presented in a systematic fashion so as to gradually attain an increasing level of complexity of structural behaviors, under the prevailing assumptions on the geometry of deformation, the constitutive aspects and the loading scenarios. Readers will find a treatment of the foundations of nonlinear structural mechanics towards advanced reduced models, unified with modern computational tools in the framework of the prominent nonlinear structural dynamic phenomena while tackling both the mathematical and applied sciences. Nonlinear Structural Mechanics: Theory, Dynamical Phenomena...

  16. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  17. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t

  18. A computer graphics program system for protein structure representation.

    Science.gov (United States)

    Ross, A M; Golub, E E

    1988-01-01

    We have developed a computer graphics program system for the schematic representation of several protein secondary structure analysis algorithms. The programs calculate the probability of occurrence of alpha-helix, beta-sheet and beta-turns by the method of Chou and Fasman and assign unique predicted structure to each residue using a novel conflict resolution algorithm based on maximum likelihood. A detailed structure map containing secondary structure, hydrophobicity, sequence identity, sequence numbering and the location of putative N-linked glycosylation sites is then produced. In addition, helical wheel diagrams and hydrophobic moment calculations can be performed to further analyze the properties of selected regions of the sequence. As they require only structure specification as input, the graphics programs can easily be adapted for use with other secondary structure prediction schemes. The use of these programs to analyze protein structure-function relationships is described and evaluated. PMID:2832829

  19. Dynamical Models for Computer Viruses Propagation

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Full Text Available Nowadays, digital computer systems and networks are the main engineering tools, being used in planning, design, operation, and control of all sizes of building, transportation, machinery, business, and life-maintaining devices. Consequently, computer viruses became one of the most important sources of uncertainty, contributing to decreased reliability of vital activities. A lot of antivirus programs have been developed, but they are limited to detecting and removing infections, based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, a trial on modeling computer virus propagation dynamics relates it to other notable events occurring in the network, permitting preventive policies to be established in network management. Data from three different viruses are collected on the Internet and two different identification techniques, autoregressive and Fourier analyses, are applied, showing that it is possible to forecast the dynamics of a new virus propagation by using the data collected from other viruses that formerly infected the network.

  20. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.

  1. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  2. Modeling the Structural Response of Reinforced Glass Beams using an SLA Scheme

    NARCIS (Netherlands)

    Louter, P.C.; Graaf, Anne van de; Rots, J.G.; Bos, Freek; Louter, Pieter Christiaan; Veer, Fred

    2010-01-01

    This paper investigates whether a novel computational sequentially linear analysis (SLA) technique, which is especially developed for modeling brittle material response, is applicable for modeling the structural response of metal reinforced glass beams. To do so, computational SLA results are

  3. Reinforcement Toolbox, a Parametric Reinforcement Modelling Tool for Curved Surface Structures

    NARCIS (Netherlands)

    Lauppe, J.; Rolvink, A.; Coenders, J.L.

    2013-01-01

    This paper presents a computational strategy and parametric modelling toolbox which aim at enhancing the design- and production process of reinforcement in freeform curved surface structures. The computational strategy encompasses the necessary steps of raising an architectural curved surface model

  4. Computer Modelling of Hafnium Doping in Lithium Niobate

    Directory of Open Access Journals (Sweden)

    Romel M. Araujo

    2018-03-01

    Lithium niobate (LiNbO3) is an important technological material with good electro-optic, acousto-optic, elasto-optic, piezoelectric and nonlinear properties. Doping LiNbO3 with hafnium (Hf) has been shown to improve the resistance of the material to optical damage. Computer modelling provides a useful means of determining the properties of doped and undoped LiNbO3, including its defect chemistry, and the effect of doping on the structure. In this paper, Hf-doped LiNbO3 has been modelled, and the final defect configurations are found to be consistent with experimental results.

  5. Computational Flow Modeling of Human Upper Airway Breathing

    Science.gov (United States)

    Mylavarapu, Goutham

    Computational modeling of biological systems has gained a lot of interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Images (MRI) or Computed Tomography (CT) scans, with better and more accurate detail than traditional cadaver cast models. Computational studies using these individualized geometrical models have the advantages of non-invasiveness, ease, minimal patient interaction, and improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields, including velocities, flow rates, airway wall pressure, shear stresses, and turbulence in an airway. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcomes of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to patients. In the current work, three different computational approaches, Computational Fluid Dynamics (CFD), Flow-Structure Interaction (FSI) and Particle Flow simulations, were used to investigate flow in airway geometries. The CFD approach assumes the airway wall is rigid and is relatively easy to simulate, compared to the more challenging FSI approach, where interactions of airway wall deformations with the flow are also accounted for. The CFD methodology using different turbulence models is validated against experimental measurements in an airway phantom. Two CFD case studies are demonstrated: one quantifying a pre- and post-operative airway, and another performing virtual surgery to determine the best possible surgery in a constricted airway. The unsteady

  6. Computing a new family of shape descriptors for protein structures

    DEFF Research Database (Denmark)

    Røgen, Peter; Sinclair, Robert

    2003-01-01

    The large-scale 3D structure of a protein can be represented by the polygonal curve through the carbon-alpha atoms of the protein backbone. We introduce an algorithm for computing the average number of times that a given configuration of crossings on such polygonal curves is seen, the average being taken over all directions in space. Hereby, we introduce a new family of global geometric measures of protein structures, which we compare with the so-called generalized Gauss integrals.
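
    The quantity at the heart of this construction can be approximated numerically; the sketch below estimates the average crossing number of a polygonal curve by Monte Carlo sampling of projection directions (a stand-in for the paper's exact averaging, for illustration only):

        import numpy as np

        def crossings_2d(pts):
            """Count crossings between non-adjacent segments of a 2D polyline."""
            def ccw(a, b, c):
                return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
            n, total = len(pts) - 1, 0
            for i in range(n):
                for j in range(i + 2, n):
                    a, b, c, d = pts[i], pts[i+1], pts[j], pts[j+1]
                    if ccw(a,b,c)*ccw(a,b,d) < 0 and ccw(c,d,a)*ccw(c,d,b) < 0:
                        total += 1
            return total

        def average_crossing_number(curve, samples=500, seed=0):
            rng = np.random.default_rng(seed)
            acc = 0
            for _ in range(samples):
                v = rng.normal(size=3); v /= np.linalg.norm(v)   # random direction
                u = np.cross(v, [1.0, 0.0, 0.0])
                if np.linalg.norm(u) < 1e-8:
                    u = np.cross(v, [0.0, 1.0, 0.0])
                u /= np.linalg.norm(u); w = np.cross(v, u)
                acc += crossings_2d(curve @ np.column_stack([u, w]))  # project, count
            return acc / samples

        helix = np.array([[np.cos(t), np.sin(t), 0.2*t] for t in np.linspace(0, 12, 60)])
        print(average_crossing_number(helix))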

  7. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. The computer virus model is established. Through analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
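
    As a hedged sketch of this class of model (the equations and parameters below are illustrative, not the paper's): an SIR-type virus model with computer inflow/outflow rate mu has a disease-free equilibrium and, when R0 > 1, an endemic one:

        # ds/dt = mu - beta*s*i - mu*s,  di/dt = beta*s*i - (gamma + mu)*i
        beta, gamma, mu = 0.3, 0.1, 0.02      # infection, cure, turnover rates
        R0 = beta / (gamma + mu)              # basic reproduction number
        print("R0 =", R0)                     # endemic equilibrium exists if R0 > 1

        s_star = 1.0 / R0                     # endemic equilibrium fractions
        i_star = mu * (R0 - 1.0) / beta
        print("endemic equilibrium:", s_star, i_star)

        # Forward-Euler simulation converging toward that equilibrium.
        s, i, dt = 0.99, 0.01, 0.1
        for _ in range(20000):
            ds = mu - beta*s*i - mu*s
            di = beta*s*i - (gamma + mu)*i
            s, i = s + dt*ds, i + dt*di
        print("simulated state:    ", s, i)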

  8. 3-D computational model of poly (lactic acid)/halloysite nanocomposites: Predicting elastic properties and stress analysis

    DEFF Research Database (Denmark)

    De Silva, R. T.; Pasbakhsh, Pooria; Goh, K. L.

    2014-01-01

    A real-structure based 3-D micromechanical computational model of poly (lactic acid) nanocomposites reinforced by randomly oriented halloysite nanotubes (HNTs) was developed and compared with an idealized model (conventional model) and experimental results. The developed idealized model consists ...

  9. Getting computer models to communicate; Faire communiquer les modeles numeriques

    Energy Technology Data Exchange (ETDEWEB)

    Caremoli, Ch. [Electricite de France (EDF), 75 - Paris (France). Dept. Mecanique et Modeles Numeriques; Erhard, P. [Electricite de France (EDF), 75 - Paris (France). Dept. Physique des Reacteurs

    1999-07-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)

  10. Advances in computational dynamics of particles, materials and structures a unified approach

    CERN Document Server

    Har, Jason

    2012-01-01

    Computational methods for the modeling and simulation of the dynamic response and behavior of particles, materials and structural systems have had a profound influence on science, engineering and technology. Complex science and engineering applications dealing with complicated structural geometries and materials that would be very difficult to treat using analytical methods have been successfully simulated using computational tools. With the incorporation of quantum, molecular and biological mechanics into new models, these methods are poised to play an even bigger role in the future. Ad

  11. An investigation into the organisation and structural design of multi-computer process-control systems

    International Nuclear Information System (INIS)

    Gertenbach, W.P.

    1981-12-01

    A multi-computer system for the collection of data and control of distributed processes has been developed. The structure and organisation of this system, together with a study of the general theory of systems and of modularity, were used as a basis for an investigation into the organisation and structured design of multi-computer process-control systems. A multi-dimensional model of multi-computer process-control systems was developed. In this model a strict separation was made between organisational properties of multi-computer process-control systems and implementation-dependent properties. The model was based on the principles of hierarchical analysis and modularity. Several notions of hierarchy were found necessary to fully describe the organisation of multi-computer systems. A new concept, that of interconnection abstraction, was identified. This concept is an extrapolation of implementation techniques in the hardware implementation area to the software implementation area. A synthesis procedure which relies heavily on the above-described analysis of multi-computer process-control systems is proposed. The above-mentioned model, and a set of performance factors which depend on a set of identified design criteria, were used to constrain the set of possible solutions in the multi-computer process-control system synthesis procedure.

  12. Computational Approaches for Revealing the Structure of Membrane Transporters: Case Study on Bilitranslocase

    Directory of Open Access Journals (Sweden)

    Katja Venko

    2017-01-01

    The structural and functional details of transmembrane proteins are vastly underexplored, mostly due to experimental difficulties regarding their solubility and stability. Currently, the majority of transmembrane protein structures are still unknown, and this presents a huge experimental and computational challenge. Nowadays, thanks to X-ray crystallography and NMR spectroscopy, over 3000 structures of membrane proteins have been solved, among them only a few hundred unique ones. Due to the vast biological and pharmaceutical interest in the elucidation of the structure and the functional mechanisms of transmembrane proteins, several computational methods have been developed to overcome the experimental gap. If combined with experimental data, the computational information enables rapid, low-cost and successful predictions of the molecular structure of unsolved proteins. The reliability of the predictions depends on the availability and accuracy of experimental data associated with structural information. In this review, the following methods are proposed for in silico structure elucidation: sequence-dependent predictions of transmembrane regions, predictions of transmembrane helix–helix interactions, helix arrangements in membrane models, and testing their stability with molecular dynamics simulations. We also demonstrate the usage of the computational methods listed above by proposing a model for the molecular structure of the transmembrane protein bilitranslocase. Bilitranslocase is a bilirubin membrane transporter, which shares similar tissue distribution and functional properties with some members of the Organic Anion Transporter family and is the only member classified in the Bilirubin Transporter Family. Given its unique properties, bilitranslocase is a potentially interesting drug target.
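
    The first step listed above, sequence-dependent prediction of transmembrane regions, is often done with a hydropathy sliding window; the sketch below uses the Kyte-Doolittle scale (the window length and threshold are common textbook choices, not the review's prescription):

        KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
              'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
              'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9,
              'R': -4.5}

        def tm_segments(seq, window=19, threshold=1.6):
            """Return (start, end) indices of candidate transmembrane helices."""
            scores = [sum(KD[r] for r in seq[i:i + window]) / window
                      for i in range(len(seq) - window + 1)]
            segs, start = [], None
            for i, s in enumerate(scores):
                if s >= threshold and start is None:
                    start = i
                elif s < threshold and start is not None:
                    segs.append((start, i + window - 1)); start = None
            if start is not None:
                segs.append((start, len(seq) - 1))
            return segs

        seq = "MSDKLLVAA" + "ILVVLLLIGFLAVGILFAV" + "RRDDSSKEQ"  # toy sequence
        print(tm_segments(seq))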

  13. Probabilistic models for structured sparsity

    DEFF Research Database (Denmark)

    Andersen, Michael Riis

    Sparsity has become an increasingly popular choice of regularization in machine learning and statistics. The sparsity assumption for a matrix X means that most of the entries in X are equal to exactly zero. Structured sparsity is a generalization of sparsity and assumes that the set of locations of the non-zero coefficients in X contains structure that can be exploited. This thesis deals with probabilistic models for structured sparsity for regularization of ill-posed problems. The aim of the thesis is two-fold: to construct sparsity-promoting prior distributions for structured sparsity ... of each time series is decomposed into a non-negative linear combination of elements from a dictionary of shared covariance matrix components. A variational Bayes algorithm is derived for approximate posterior inference. The proposed model is validated using a functional magnetic resonance imaging (fMRI) dataset ...

  14. Modeling Reality: How Computers Mirror Life

    International Nuclear Information System (INIS)

    Inoue, J-I

    2005-01-01

    Modeling Reality: How Computers Mirror Life covers a wide range of modern subjects in complex systems, suitable not only for undergraduate students who want to learn about modelling 'reality' by using computer simulations, but also for researchers who want to learn something about subjects outside of their majors and need a simple guide. Readers are not required to have specialized training before they start the book. Each chapter is organized so as to train the reader to grasp the essential idea of simulating phenomena and guide him/her towards more advanced areas. The topics presented in this textbook fall into two categories. The first is at graduate level, namely probability, statistics, information theory, graph theory, and the Turing machine, which are standard topics in the course of information science and information engineering departments. The second addresses more advanced topics, namely cellular automata, deterministic chaos, fractals, game theory, neural networks, and genetic algorithms. Several topics included here (neural networks, game theory, information processing, etc) are now some of the main subjects of statistical mechanics, and many papers related to these interdisciplinary fields are published in Journal of Physics A: Mathematical and General, so readers of this journal will be familiar with the subject areas of this book. However, each area is restricted to an elementary level and if readers wish to know more about the topics they are interested in, they will need more advanced books. For example, on neural networks, the text deals with the back-propagation algorithm for perceptron learning. Nowadays, however, this is a rather old topic, so the reader might well choose, for example, Introduction to the Theory of Neural Computation by J Hertz et al (Perseus books, 1991) or Statistical Physics of Spin Glasses and Information Processing by H Nishimori (Oxford University Press, 2001) for further reading. Nevertheless, this book is worthwhile

  15. A COMPUTATIONAL MODEL OF MOTOR NEURON DEGENERATION

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.

    2014-01-01

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365

  16. Computational models of intergroup competition and warfare.

    Energy Technology Data Exchange (ETDEWEB)

    Letendre, Kenneth (University of New Mexico); Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  17. Computed Structure of Grain Boundaries Compared with TEM Observations

    NARCIS (Netherlands)

    Hosson, J.Th.M. De; Heringa, J.R.; Schapink, F.W.; Evans, J.H.; Veen, A. van

    1984-01-01

    Employing computer simulation techniques several studies of the relaxation of atoms in coincidence type grain boundaries have been performed in recent years. Often it is difficult to obtain a clear representation of the relaxed boundary structure, especially in the case of small atomic

  19. TRUST MANAGEMENT MODEL FOR CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Somesh Kumar Prajapati

    2013-04-01

    Software as a Service (SaaS) is a new software development and deployment paradigm over the cloud, offering Information Technology services dynamically, on an "on-demand" basis, over the internet. Trust is one of the fundamental security concepts in storing and delivering such services. In general, trust factors are integrated into existing security frameworks in order to add a security level to entity collaborations through the trust relationship. However, deploying trust factors in a secured cloud environment is a more complex engineering task due to the existence of heterogeneous types of service providers and consumers. In this paper, a formal trust management model is introduced to manage trust and its properties for SaaS in a cloud computing environment. The model can formally represent direct trust, recommended trust, reputation, and so on. For the analysis of trust properties in the cloud environment, the proposed approach estimates the trust value and uncertainty of each peer by computing a decay function, the number of positive interactions, a reputation factor, and a satisfaction level from the collected information.
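
    A hedged sketch of the kind of computation described, combining a decay function, positive-interaction count, reputation and satisfaction into a trust value and an uncertainty; the weights and the functional form are illustrative assumptions, not the paper's exact formulas:

        import math

        def trust_value(pos, neg, reputation, satisfaction,
                        age_days, half_life=30.0, w=(0.5, 0.3, 0.2)):
            """Illustrative trust score in [0, 1] plus an uncertainty term."""
            decay = math.exp(-math.log(2) * age_days / half_life)   # decay function
            direct = pos / (pos + neg) if pos + neg else 0.5        # direct evidence
            score = w[0]*direct + w[1]*reputation + w[2]*satisfaction
            uncertainty = 1.0 / (1.0 + pos + neg)                   # sparse evidence
            return decay * score, uncertainty

        print(trust_value(pos=40, neg=2, reputation=0.8,
                          satisfaction=0.9, age_days=10))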

  20. Direct modeling for computational fluid dynamics

    Science.gov (United States)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  1. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  2. Computational Modeling of Photonic Crystal Microcavity Single-Photon Emitters

    Science.gov (United States)

    Saulnier, Nicole A.

    Conventional cryptography is based on algorithms that are mathematically complex and difficult to solve, such as factoring large numbers. The advent of a quantum computer would render these schemes useless. As scientists work to develop a quantum computer, cryptographers are developing new schemes for unconditionally secure cryptography. Quantum key distribution has emerged as one of the potential replacements of classical cryptography. It relies on the fact that measurement of a quantum bit changes the state of the bit and undetected eavesdropping is impossible. Single polarized photons can be used as the quantum bits, such that a quantum system would in some ways mirror the classical communication scheme. The quantum key distribution system would include components that create, transmit and detect single polarized photons. The focus of this work is on the development of an efficient single-photon source. This source comprises a single quantum dot inside a photonic crystal microcavity. To better understand the physics behind the device, a computational model is developed. The model uses Finite-Difference Time-Domain methods to analyze the electromagnetic field distribution in photonic crystal microcavities. It uses an 8-band k·p perturbation theory to compute the energy band structure of the epitaxially grown quantum dots. We discuss a method that combines the results of these two calculations for determining the spontaneous emission lifetime of a quantum dot in bulk material or in a microcavity. The computational models developed in this thesis are used to identify and characterize microcavities for potential use in a single-photon source. The computational tools developed are also used to investigate novel photonic crystal microcavities that incorporate 1D distributed Bragg reflectors for vertical confinement. It is found that the spontaneous emission enhancement in the quasi-3D cavities can be significantly greater than in traditional suspended slab
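
    The FDTD ingredient can be conveyed by a one-dimensional toy (the thesis itself analyzes full 3D photonic crystal cavities); the sketch below advances the standard staggered E/H updates with a Gaussian hard source, in normalized units:

        import numpy as np

        nz, steps = 400, 1000
        ez, hy = np.zeros(nz), np.zeros(nz)

        for n in range(steps):
            hy[:-1] += 0.5 * (ez[1:] - ez[:-1])          # H update (Courant no. 0.5)
            ez[1:]  += 0.5 * (hy[1:] - hy[:-1])          # E update
            ez[50] += np.exp(-((n - 30) / 10.0) ** 2)    # Gaussian pulse source

        print("peak |Ez| =", np.abs(ez).max())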

  3. A computational model of consciousness for artificial emotional agents

    Directory of Open Access Journals (Sweden)

    Kotov A. A.

    2017-09-01

    Background. The structure of consciousness has long been a cornerstone problem in the cognitive sciences. Recently it took on applied significance in the design of computer agents and mobile robots. This problem can thus be examined from perspectives of philosophy, neuropsychology, and computer modeling. Objective. In the present paper, we address the problem of the computational model of consciousness by designing computer agents aimed at simulating "speech understanding" and irony. Further, we look for a "minimal architecture" that is able to mimic the effects of consciousness in computing systems. Method. For the base architecture, we used a software agent, which was programmed to operate with scripts (productions or inferences), to process incoming texts (or events) by extracting their semantic representations, and to select relevant reactions. Results. It is shown that the agent can simulate speech irony by replacing a direct aggressive behavior with a positive sarcastic utterance. This is achieved by balancing between several scripts available to the agent. We suggest that the extension of this scheme may serve as a minimal architecture of consciousness, wherein the agent distinguishes its own representations and potential cognitive representations of other agents. Within this architecture, there are two stages of processing. First, the agent activates several scripts by placing their if-statements or actions (inferences) within a processing scope. Second, the agent differentiates the scripts depending on their activation by another script. This multilevel scheme allows the agent to simulate imaginary situations, its own imaginary actions, and imaginary actions of other agents, i.e. the agent demonstrates features considered essential for conscious agents in the philosophy of mind and cognitive psychology. Conclusion. Our computer systems for understanding speech and simulation of irony can serve as a basis for further
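
    A toy reading of the two-stage scheme, with two scripts activated by a hostile input and a meta-script suppressing the direct aggressive reaction in favor of the ironic one; the scripts and weights are invented for illustration:

        scripts = {
            "aggression": {"if": "insult", "reaction": "insult back",
                           "activation": 0.0},
            "irony":      {"if": "insult", "reaction": "Oh, what a charming remark!",
                           "activation": 0.0},
        }

        def process(event):
            # Stage 1: place matching scripts within the processing scope.
            for s in scripts.values():
                if s["if"] == event:
                    s["activation"] += 1.0
            # Stage 2: a meta-script re-weights scripts (suppress open aggression).
            scripts["aggression"]["activation"] *= 0.3
            best = max(scripts.values(), key=lambda s: s["activation"])
            return best["reaction"]

        print(process("insult"))   # -> the positive sarcastic utterance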

  4. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier van; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War; International audience; This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer-based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The...
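
    The four stages lend themselves to a small state-machine sketch; the allowed transitions below are an illustrative assumption, not taken from the paper:

        from enum import Enum, auto

        class Stage(Enum):
            TARGET_IDENTIFICATION = auto()
            RECONNAISSANCE = auto()
            ATTACK = auto()
            POST_ATTACK_RECONNAISSANCE = auto()

        ALLOWED = {
            Stage.TARGET_IDENTIFICATION: {Stage.RECONNAISSANCE},
            Stage.RECONNAISSANCE: {Stage.ATTACK, Stage.TARGET_IDENTIFICATION},
            Stage.ATTACK: {Stage.POST_ATTACK_RECONNAISSANCE},
            Stage.POST_ATTACK_RECONNAISSANCE: set(),
        }

        def classify(events):
            """Map a timeline of observed stage transitions onto the model."""
            stage = Stage.TARGET_IDENTIFICATION
            timeline = [stage]
            for nxt in events:
                if nxt in ALLOWED[stage]:
                    stage = nxt
                    timeline.append(stage)
            return timeline

        print(classify([Stage.RECONNAISSANCE, Stage.ATTACK,
                        Stage.POST_ATTACK_RECONNAISSANCE]))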

  5. Generalized Swept Mid-structure for Polygonal Models

    KAUST Repository

    Martin, Tobias

    2012-05-01

    We introduce a novel mid-structure called the generalized swept mid-structure (GSM) of a closed polygonal shape, and a framework to compute it. The GSM contains both curve and surface elements and has consistent sheet-by-sheet topology, versus triangle-by-triangle topology produced by other mid-structure methods. To obtain this structure, a harmonic function, defined on the volume that is enclosed by the surface, is used to decompose the volume into a set of slices. A technique for computing the 1D mid-structures of these slices is introduced. The mid-structures of adjacent slices are then iteratively matched through a boundary similarity computation and triangulated to form the GSM. This structure respects the topology of the input surface model and is a hybrid mid-structure representation. The construction and topology of the GSM allows for local and global simplification, used in further applications such as parameterization, volumetric mesh generation and medical applications.

  6. Computer models of vocal tract evolution: an overview and critique

    NARCIS (Netherlands)

    de Boer, B.; Fitch, W. T.

    2010-01-01

    Human speech has been investigated with computer models since the invention of digital computers, and models of the evolution of speech first appeared in the late 1960s and early 1970s. Speech science and computer models have a long shared history because speech is a physical signal and can be

  7. Generalized added masses computation for fluid structure interaction

    International Nuclear Information System (INIS)

    Lazzeri, L.; Cecconi, S.; Scala, M.

    1983-01-01

    The aim of this paper is to describe a method to simulate the dynamic effect of a fluid between two structures by means of an added mass and an added stiffness. The method is based on a potential theory which assumes the fluid is inviscid and incompressible (the case of compressibility is discussed); a solution of the corresponding field equation is given as a superposition of elementary solutions (i.e. applicable to elementary boundary conditions). Consequently, the pressure and displacements of the fluid on the boundary are given as a function of the series coefficients; the ''work lost'' (i.e. the work done by the pressures on the difference between actual and estimated displacements) is minimized, and in this way the expansion coefficients are related to the displacements on the boundaries. Virtual work procedures are then used to compute added masses. The particular case of a free surface (with gravity effects) is discussed, and it is shown how the effect can be modelled by means of an added stiffness term. Some examples relative to vibrations in reservoirs are given and discussed. (orig.)

  8. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  9. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models at various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework for supporting this multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to realize the proposed concept. The simulation results show that the software framework can increase the speedup of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing simulations of multi-scale structural analysis.

  10. Secure data structures based on multi-party computation

    DEFF Research Database (Denmark)

    Toft, Tomas

    2011-01-01

    This work considers data structures based on multi-party computation (MPC) primitives: structuring secret (e.g. secret-shared and potentially unknown) data such that it can both be queried and updated efficiently. Implementing an oblivious RAM (ORAM) using MPC allows any existing data structure to ... It has been shown that any information-theoretically secure ORAM with n memory locations requires at least log n random bits per read/write to hide the access pattern. In contrast, the present construction achieves security with a completely deterministic access pattern.

  11. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking

    Science.gov (United States)

    Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour

  12. Using Computational and Mechanical Models to Study Animal Locomotion

    Science.gov (United States)

    Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas

    2012-01-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026

  13. Using computational and mechanical models to study animal locomotion.

    Science.gov (United States)

    Miller, Laura A; Goldman, Daniel I; Hedrick, Tyson L; Tytell, Eric D; Wang, Z Jane; Yen, Jeannette; Alben, Silas

    2012-11-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms' performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: "Integrating living and physical systems."

  14. Computational modeling of the sugar-lectin interaction.

    Science.gov (United States)

    Neumann, Dirk; Lehr, Claus-Michael; Lenhof, Hans-Peter; Kohlbacher, Oliver

    2004-03-03

    In the last few years numerous experimental studies have shed light onto the details of the lectin-carbohydrate interaction. X-ray crystallography and NMR spectroscopy have been used to elucidate the structures of lectins, sugars, and their complexes. In addition, an increasing number of experimental methods has been employed to determine the thermodynamic and kinetic parameters of the binding process. Based on this experimental data, computational methods have been developed to model and predict these interactions. A plethora of techniques from Molecular Modeling and Computational Chemistry have been applied to the problem and current models achieve high-quality predictions. These successes are based on both new theoretical approaches and reliable experimental data. The aim of the present article is to outline the most relevant computational and experimental methods applied in the field of lectin-carbohydrate interaction and to give an overview of the current state of the art in the modeling of these interactions with a focus on plant lectins.

  15. A New Perspective for the Calibration of Computational Predictor Models.

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
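
    The IPM formulation for a scalar linear model can be written as a small linear program: minimize the average spread between a lower and an upper line subject to both lines containing every observation. The sketch below is illustrative, not the authors' code:

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(3)
        x = rng.uniform(0, 1, 40)
        y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 40)

        # Variables z = [al, bl, au, bu]; minimize the mean spread
        # (au - al) * mean(x) + (bu - bl).
        c = np.array([-x.mean(), -1.0, x.mean(), 1.0])
        zeros, ones = np.zeros_like(x), np.ones_like(x)
        A_ub = np.vstack([np.column_stack([x, ones, zeros, zeros]),    # al*x+bl <= y
                          np.column_stack([zeros, zeros, -x, -ones])]) # au*x+bu >= y
        b_ub = np.concatenate([y, -y])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
        al, bl, au, bu = res.x
        print("lower line: y =", al, "* x +", bl)
        print("upper line: y =", au, "* x +", bu)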

  16. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general educational and worldview functions of computer science call for additional research on the…

  17. Parallel Computing for Terrestrial Ecosystem Carbon Modeling

    International Nuclear Information System (INIS)

    Wang, Dali; Post, Wilfred M.; Ricciuto, Daniel M.; Berry, Michael

    2011-01-01

    Terrestrial ecosystems are a primary component of research on global environmental change. Observational and modeling research on terrestrial ecosystems at the global scale, however, has lagged behind their counterparts for oceanic and atmospheric systems, largely because of the unique challenges associated with the tremendous diversity and complexity of terrestrial ecosystems. There are 8 major types of terrestrial ecosystem: tropical rain forest, savannas, deserts, temperate grassland, deciduous forest, coniferous forest, tundra, and chaparral. The carbon cycle is an important mechanism in the coupling of terrestrial ecosystems with climate through biological fluxes of CO2. The influence of terrestrial ecosystems on atmospheric CO2 can be modeled via several means at different timescales. Important processes include plant dynamics, change in land use, as well as ecosystem biogeography. Over the past several decades, many terrestrial ecosystem carbon models (TECMs; see the 'Model developments' section) have been developed to understand the interactions between terrestrial carbon storage and CO2 concentration in the atmosphere, as well as the consequences of these interactions. Early TECMs generally adopted simple box-flow exchange models, in which photosynthetic CO2 uptake and respiratory CO2 release are simulated in an empirical manner with a small number of vegetation and soil carbon pools. Demands on the kinds and amount of information required from global TECMs have grown. Recently, along with the rapid development of parallel computing, spatially explicit TECMs with detailed process-based representations of carbon dynamics have become attractive, because those models can readily incorporate a variety of additional ecosystem processes (such as dispersal, establishment, growth, mortality, etc.) and environmental factors (such as landscape position, pest populations, disturbances, resource manipulations, etc.), and provide information to frame policy options for climate change
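
    The early box-flow style of TECM mentioned above reduces to a couple of pool equations; the sketch below integrates a two-pool (vegetation, soil) model with invented rate constants, for illustration only:

        npp = 60.0           # carbon input, Pg C / yr (illustrative)
        k_veg = 1.0 / 10.0   # vegetation turnover rate, 1/yr
        k_soil = 1.0 / 40.0  # soil decomposition rate, 1/yr

        veg, soil, dt = 100.0, 1000.0, 0.1
        for _ in range(int(500 / dt)):      # integrate 500 years, forward Euler
            litter = k_veg * veg            # vegetation -> soil flux
            resp = k_soil * soil            # soil -> atmosphere flux (CO2)
            veg += dt * (npp - litter)
            soil += dt * (litter - resp)

        print("steady state: veg =", veg, "Pg C, soil =", soil, "Pg C")
        # analytic equilibria for comparison: npp/k_veg = 600, npp/k_soil = 2400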

  18. Prediction of the behavior of pedestrian bridges using computer models

    Directory of Open Access Journals (Sweden)

    Jonathan José Cala Monroy

    2017-07-01

    Introduction: This article presents a brief introduction to the issues related to low-frequency vibrations, identifying human walking as the relevant source affecting footbridge structures and causing inconvenience to pedestrian traffic. Objective: The main objective of this paper is to explain the most common methods used by engineers to evaluate vibrations and their effects, as well as their limitations; furthermore, a computer modeling technique was developed in order to bring the analysis closer to the real phenomenon of vibrations in pedestrian bridges. Methodology: The work was divided into two main phases: the first was a conceptual literature review of floor vibrations, focusing on the use of Design Guide No. 11 of the American Institute of Steel Construction; the second dealt with developing a computer model, including the definition of variables, the elaboration of a dynamic model of the structure, the calibration of the model, the evaluation of the parameters under study, and the analysis of results and conclusions. Results: Accelerations were obtained for different frequencies and different degrees of damping, showing that the chosen sample was potentially susceptible in the four-to-eight Hz range; hence, when resonance took place, the structure presented a peak acceleration above the comfort threshold recommended for pedestrian bridges. Conclusions: With appropriate modeling techniques and finite elements, convenient and reliable results can be obtained to guide the design process of structures such as pedestrian bridges.
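
    The resonance check at the core of such evaluations can be sketched with a damped single-degree-of-freedom model under a harmonic walking force; the modal mass, force amplitude and comfort limit below are illustrative assumptions, not Design Guide 11 figures:

        import math

        m, fn, zeta = 20000.0, 2.0, 0.01   # modal mass (kg), natural freq (Hz), damping
        F0 = 280.0                         # harmonic walking-force amplitude (N)
        limit = 0.5                        # assumed comfort limit (m/s^2)

        for f in [1.6, 1.8, 2.0, 2.2]:     # pacing frequencies (Hz)
            r = f / fn                     # frequency ratio
            k = m * (2 * math.pi * fn) ** 2                 # modal stiffness
            X = (F0 / k) / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)
            a = (2 * math.pi * f) ** 2 * X                  # peak acceleration
            print(f"f = {f} Hz -> a = {a:.3f} m/s^2",
                  "(exceeds limit)" if a > limit else "")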

  19. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human-system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and, finally, errors of commission in nuclear power plants. Quantitative or prescriptive models that predict an operator's situation assessment in a given situation, i.e., the results of situation assessment, provide many benefits such as HSI design solutions, human performance data, and human reliability estimates. Unfortunately, few computational situation assessment models for NPP operators have been proposed, and those insufficiently embed human cognitive characteristics. Thus we proposed a new computational situation assessment model of nuclear power plant operators. The proposed model, incorporating significant cognitive factors, uses a Bayesian belief network (BBN) as its model architecture. It is believed that communication between nuclear power plant operators affects their situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As a result, the proposed model succeeded in representing the operators' behavior; this paper shows the details.
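
    The BBN-style inference underlying such a model can be miniaturized to a two-state belief update over indicator readings; all probabilities below are invented for illustration:

        states = ["normal", "LOCA"]               # hypothesised plant states
        prior = {"normal": 0.95, "LOCA": 0.05}
        # P(indication | state) for one pressure alarm, illustrative numbers:
        likelihood = {("alarm", "normal"): 0.02, ("alarm", "LOCA"): 0.90,
                      ("no_alarm", "normal"): 0.98, ("no_alarm", "LOCA"): 0.10}

        def update(belief, obs):
            """One step of Bayesian belief revision from an indicator reading."""
            post = {s: likelihood[(obs, s)] * belief[s] for s in states}
            z = sum(post.values())
            return {s: p / z for s, p in post.items()}

        belief = prior
        for obs in ["alarm", "alarm"]:            # two consecutive alarms observed
            belief = update(belief, obs)
            print(obs, "->", belief)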

  20. Computational Approach for Quantifying Structural Disorder in Biomolecular Lattices

    Science.gov (United States)

    Bratton, Clayton; Reiser, Karen; Knoesen, Andre; Yankelevich, Diego; Wang, Mingshi; Rocha-Mendoza, Israel

    2009-11-01

    We have developed a novel computational approach for quantifying structural disorder in biomolecular lattices with nonlinear susceptibility, based on analysis of the polarization-modulated second-harmonic signal. Transient, regional disorder at the level of molecular organization is identified using novel signal-processing algorithms sufficiently compact for near real-time analysis with a desktop computer. Global disorder and regional disorder within the biostructure are assessed and scored using multiple methodologies. Experimental results suggest our signal-processing method represents a robust, scalable tool that allows us to detect both regional and global alterations in signal characteristics of biostructures with a high degree of discrimination.

  1. Control mechanism of double-rotator-structure ternary optical computer

    Science.gov (United States)

    Kai, SONG; Liping, YAN

    2017-03-01

    The double-rotator-structure ternary optical processor (DRSTOP) has two characteristics, namely, giant data-bit parallel computing and a reconfigurable processor, which allow it to handle thousands of data bits in parallel and to run much faster than electronic computers and other optical computing systems developed so far. In order to put DRSTOP into practical application, this paper established a series of methods, namely, a task classification method, a data-bit allocation method, a control information generation method, a control information formatting and sending method, a decoded-results obtaining method, and so on. These methods form the control mechanism of DRSTOP. This control mechanism makes DRSTOP an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the contradiction between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designed a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible and efficient.

  2. The Computational Properties of a Simplified Cortical Column Model.

    Science.gov (United States)

    Cain, Nicholas; Iyer, Ramakrishnan; Koch, Christof; Mihalas, Stefan

    2016-09-01

    The mammalian neocortex has a repetitious, laminar structure and performs functions integral to higher cognitive processes, including sensory perception, memory, and coordinated motor output. What computations does this circuitry subserve that link these unique structural elements to their function? Potjans and Diesmann (2014) parameterized a four-layer, two cell type (i.e. excitatory and inhibitory) model of a cortical column with homogeneous populations and cell type dependent connection probabilities. We implement a version of their model using a displacement integro-partial differential equation (DiPDE) population density model. This approach, exact in the limit of large homogeneous populations, provides a fast numerical method to solve equations describing the full probability density distribution of neuronal membrane potentials. It lends itself to quickly analyzing the mean response properties of population-scale firing rate dynamics. We use this strategy to examine the input-output relationship of the Potjans and Diesmann cortical column model to understand its computational properties. When inputs are constrained to jointly and equally target excitatory and inhibitory neurons, we find a large linear regime where the effect of a multi-layer input signal can be reduced to a linear combination of component signals. One of these, a simple subtractive operation, can act as an error signal passed between hierarchical processing stages.

  3. Computational Models for Analysis of Illicit Activities

    DEFF Research Database (Denmark)

    Nizamani, Sarwat

    ... result in a fully evolved network. This method of network evolution can help intelligence security analysts to understand the structure of the network. For suspicious email detection and email author identification, a cluster-based text classification model has been proposed. The model outperformed traditional models for both of the tasks. Apart from these globally organized crimes and cybercrimes, there are specific world issues which affect geographic locations and take the form of bursts of public violence. These kinds of issues have received little attention from academics. These issues have ... to describe the phenomenon of contagious public outrage, which eventually leads to the spread of violence following a disclosure of some unpopular political decisions and/or activity. The results shed new light on terror activity and provide some hint on how to curb the spreading of violence within ...

  4. Evaluation of Marine Corps Manpower Computer Simulation Model

    Science.gov (United States)

    2016-12-01

    ...overall end strength are maintained. To assist their mission, an agent-based computer simulation model was developed in the Java computer language. This thesis investigates that ... a simulation software that models business practices to assist that business in its “ability to analyze and make decisions on how to improve (their

  5. Model to Implement Virtual Computing Labs via Cloud Computing Services

    OpenAIRE

    Washington Luna Encalada; José Luis Castillo Sequera

    2017-01-01

    In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, E-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the...

  6. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

    MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned. That volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  7. A computational model for dynamic vision

    Science.gov (United States)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore, the paradigm is ideal for an active vision system in which camera vergence changes dynamically. Its basis is the retinotopically indexed, object-centered encoding of early visual information. Specifically, the relative distances of objects to a set of referents are encoded in image-registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  8. Accurate modeling of parallel scientific computations

    Science.gov (United States)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
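
    A minimal sketch, with invented per-cell costs (not the paper's model), of the kind of partition performance prediction described: a step takes as long as the most heavily loaded partition, plus a communication charge per partition boundary; a work-balanced split beats an equal-cell-count split.

```python
# Minimal sketch (invented numbers, not the paper's model) of predicting the
# per-step execution time of a partitioned 1-D grid.
import numpy as np

def predict_step_time(cell_costs, split_points, comm_cost=5.0):
    parts = np.split(cell_costs, split_points)        # contiguous partitions
    compute = max(part.sum() for part in parts)       # slowest processor wins
    return compute + comm_cost * len(split_points)    # boundary exchanges

rng = np.random.default_rng(0)
costs = rng.uniform(0.5, 2.0, size=1000)              # irregular workload
naive = predict_step_time(costs, [250, 500, 750])     # equal cell counts
cuts = np.searchsorted(costs.cumsum(), costs.sum() * np.array([.25, .5, .75]))
balanced = predict_step_time(costs, list(cuts))       # equal work instead
print(f"naive split: {naive:.1f}, load-balanced split: {balanced:.1f}")
```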

  9. Computational Model for Internal Relative Humidity Distributions in Concrete

    Directory of Open Access Journals (Sweden)

    Wondwosen Ali

    2014-01-01

    Full Text Available A computational model is developed for predicting nonuniform internal relative humidity distribution in concrete. Internal relative humidity distribution is known to have a direct effect on nonuniform drying shrinkage strains. These nonuniform drying shrinkage strains result in the buildup of internal stresses, which may lead to cracking of concrete. This may be particularly true at early ages of concrete, since the concrete is relatively weak while the difference in internal relative humidity is probably high. The results obtained from this model can be used by structural and construction engineers to predict critical drying shrinkage stresses induced by differential internal humidity distribution. The model uses combined finite element-finite difference numerical methods: the finite element method is used for spatial discretization, while the finite difference method is used to obtain transient solutions of the model. The numerical formulations are then programmed in Matlab. The numerical results were compared with experimental results found in the literature and demonstrated very good agreement.
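
    A rough one-dimensional illustration of the transient moisture computation described above. Unlike the paper (finite elements in space, finite differences in time), this sketch uses finite differences for both, and the diffusivity and boundary humidity are invented values.

```python
# Rough 1-D sketch: transient moisture diffusion dH/dt = D * d2H/dx2 through
# a drying slab, with invented diffusivity and ambient humidity.
import numpy as np

D, L, nx = 1e-10, 0.2, 41           # diffusivity (m^2/s), slab thickness (m)
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                # within the explicit stability limit
H = np.full(nx, 1.0)                # initially saturated pore humidity
H[0] = H[-1] = 0.5                  # faces exposed to 50% ambient RH

for _ in range(500):                # explicit FTCS time marching
    H[1:-1] += D * dt / dx**2 * (H[2:] - 2.0 * H[1:-1] + H[:-2])

print(f"RH at mid-depth after {500 * dt / 86400:.0f} days: {H[nx // 2]:.3f}")
```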

  10. A computational model of fraction arithmetic.

    Science.gov (United States)

    Braithwaite, David W; Pyke, Aryn A; Siegler, Robert S

    2017-10-01

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it with the problems from a widely used textbook series. The simulation generated many phenomena of children's fraction arithmetic performance through a small number of common learning mechanisms operating on a biased input set. The biases were not unique to this textbook series-they were present in 2 other textbook series as well-nor were the phenomena unique to a particular sample of children-they were present in another sample as well. Among other phenomena, the model predicted the high difficulty of fraction division, variable strategy use by individual children and on individual problems, relative frequencies of different types of strategy errors on different types of problems, and variable effects of denominator equality on the four arithmetic operations. The model also generated nonintuitive predictions regarding the relative difficulties of several types of problems and the potential effectiveness of a novel instructional approach. Perhaps the most general lesson of the findings is that the statistical distribution of problems that learners encounter can influence mathematics learning in powerful and nonintuitive ways. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Computational aspects of N-mixture models.

    Science.gov (United States)

    Dennis, Emily B; Morgan, Byron J T; Ridout, Martin S

    2015-03-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105-115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. © 2014 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
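
    A sketch of the truncated N-mixture likelihood for a single site, with invented counts and parameters (not the tortoise data): latent abundance N ~ Poisson(lambda), observed counts n_t ~ Binomial(N, p), and the infinite sum over N cut off at a bound K. Varying K shows how too small a bound distorts the likelihood, as the record warns.

```python
# Sketch of the truncated N-mixture likelihood for one site (invented data).
import numpy as np
from scipy import stats

def site_likelihood(counts, lam, p, K):
    Ns = np.arange(max(counts), K + 1)        # N cannot be below the max count
    terms = stats.poisson.pmf(Ns, lam)        # prior P(N)
    for n in counts:                          # replicated surveys
        terms = terms * stats.binom.pmf(n, Ns, p)
    return terms.sum()

counts = [12, 15, 11]
for K in (20, 50, 200):
    print(f"K = {K:3d}: likelihood = {site_likelihood(counts, 40.0, 0.3, K):.3e}")
```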

  12. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To evaluate new designs more quickly and to better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  13. Computational model of heterogeneous heating in melanin

    Science.gov (United States)

    Kellicker, Jason; DiMarzio, Charles A.; Kowalski, Gregory J.

    2015-03-01

    Melanin particles often present as an aggregate of smaller melanin pigment granules and have a heterogeneous surface morphology. When irradiated with light within the absorption spectrum of melanin, these heterogeneities produce measurable concentrations of the electric field that result in temperature gradients from thermal effects that are not seen with spherical or ellipsoidal modeling of melanin. Modeling melanin without taking into consideration the heterogeneous surface morphology yields results that underestimate the strongest signals or overestimate their spatial extent. We present a new technique to image phase changes induced by heating, using a computational model of melanin that exhibits these surface heterogeneities. From this analysis, we demonstrate the heterogeneous energy absorption and resulting heating that occurs at the surface of the melanin granule, which is consistent with three-photon absorption. Using the three-photon fluorescence as a beacon, we propose a method for detecting the extent of the melanin granule using photothermal microscopy to measure the phase changes resulting from the heating of the melanin.

  14. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost - many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  15. Development of tools and models for computational fracture assessment

    International Nuclear Information System (INIS)

    Talja, H.; Santaoja, K.

    1998-01-01

    The aim of the work presented in this paper has been to develop and test new computational tools and theoretically more sound methods for fracture mechanical analysis. The applicability of the engineering integrity assessment system MASI for evaluation of piping components has been extended. The most important motivation for the theoretical development has been the well-known fundamental limitations in the validity of the J-integral, which limit its applicability in many important practical safety assessment cases. Examples are extensive plastic deformation, multimaterial structures and ascending loading paths (especially warm prestress, WPS). Further, the micromechanical Gurson model has been applied to several reactor pressure vessel materials. Special attention is paid to the transferability of Gurson model parameters from tensile test results to prediction of ductile failure behaviour of cracked structures. (author)

  16. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining computers for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with more than 640 Kb RAM and a hard disk. 5 refs.; 9 figs

  17. Achievements and challenges in structural bioinformatics and computational biophysics.

    Science.gov (United States)

    Samish, Ilan; Bourne, Philip E; Najmanovich, Rafael J

    2015-01-01

    The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years, developments that are captured annually through the 3DSIG meeting, upon which this article reflects. An increase in the accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation with other computational and experimental results. The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. © The Author 2014. Published by Oxford University Press.

  18. Development of a Fast Fluid-Structure Coupling Technique for Wind Turbine Computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Shen, Wen Zhong

    2015-01-01

    used in the aero-elastic code FLEX5. The new code, MIRAS-FLEX, in general shows good agreement with the standard aero-elastic codes FLEX5 and FAST for various test cases. The structural model in MIRAS-FLEX acts to reduce the aerodynamic load computed by MIRAS, particularly near the tip and at high wind...

  19. Track structure in biological models.

    Science.gov (United States)

    Curtis, S B

    1986-01-01

    High-energy heavy ions in the galactic cosmic radiation (HZE particles) may pose a special risk during long term manned space flights outside the sheltering confines of the earth's geomagnetic field. These particles are highly ionizing, and they and their nuclear secondaries can penetrate many centimeters of body tissue. The three dimensional patterns of ionizations they create as they lose energy are referred to as their track structure. Several models of biological action on mammalian cells attempt to treat track structure or related quantities in their formulation. The methods by which they do this are reviewed. The proximity function is introduced in connection with the theory of Dual Radiation Action (DRA). The ion-gamma kill (IGK) model introduces the radial energy-density distribution, which is a smooth function characterizing both the magnitude and extension of a charged particle track. The lethal, potentially lethal (LPL) model introduces lambda, the mean distance between relevant ion clusters or biochemical species along the track. Since very localized energy depositions (within approximately 10 nm) are emphasized, the proximity function as defined in the DRA model is not of utility in characterizing track structure in the LPL formulation.

  20. Perspectives for computational modeling of cell replacement for neurological disorders

    Energy Technology Data Exchange (ETDEWEB)

    Aimone, James B.; Weick, Jason P.

    2013-01-01

    Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  2. Perspectives for computational modeling of cell replacement for neurological disorders

    Directory of Open Access Journals (Sweden)

    James B Aimone

    2013-11-01

    Full Text Available Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  3. Application of computer simulated persons in indoor environmental modeling

    DEFF Research Database (Denmark)

    Topp, C.; Nielsen, P. V.; Sørensen, Dan Nørtoft

    2002-01-01

    Computer simulated persons are often applied when the indoor environment is modeled by computational fluid dynamics. The computer simulated persons differ in size, shape, and level of geometrical complexity, ranging from simple box or cylinder shaped heat sources to more humanlike models. Little...

  4. A computational model of liver iron metabolism.

    Directory of Open Access Journals (Sweden)

    Simon Mitchell

    Full Text Available Iron is essential for all known life due to its redox properties; however, these same properties can also lead to its toxicity in overload through the production of reactive oxygen species. Robust systemic and cellular control are required to maintain safe levels of iron, and the liver seems to be where this regulation is mainly located. Iron misregulation is implicated in many diseases, and as our understanding of iron metabolism improves, the list of iron-related disorders grows. Recent developments have resulted in greater knowledge of the fate of iron in the body and have led to a detailed map of its metabolism; however, a quantitative understanding at the systems level of how its components interact to produce tight regulation remains elusive. A mechanistic computational model of human liver iron metabolism, which includes the core regulatory components, is presented here. It was constructed based on known mechanisms of regulation and on their kinetic properties, obtained from several publications. The model was then quantitatively validated by comparing its results with previously published physiological data, and it is able to reproduce multiple experimental findings. A time course simulation following an oral dose of iron was compared to a clinical time course study, and the simulation was found to recreate the dynamics and time scale of the system's response to iron challenge. A disease state simulation of haemochromatosis was created by altering a single reaction parameter that mimics a human haemochromatosis gene (HFE) mutation. The simulation provides a quantitative understanding of the liver iron overload that arises in this disease. This model supports and supplements understanding of the role of the liver as an iron sensor and provides a framework for further modelling, including simulations to identify valuable drug targets and design of experiments to improve further our knowledge of this system.
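
    A deliberately tiny sketch in the spirit of such kinetic models, not the published model itself: two iron pools with a hepcidin-like feedback throttling uptake. All variable names and parameter values here are invented for illustration, including the "blunted feedback" run that loosely mimics the HFE mutation scenario.

```python
# Toy two-pool liver iron kinetics with invented parameters (not the paper's
# model); blunting the feedback crudely mimics loss of HFE-dependent sensing.
from scipy.integrate import solve_ivp

def iron_odes(t, y, uptake=1.0, export=0.05, k_fb=2.0):
    plasma, liver = y
    hepcidin = liver / (liver + k_fb)          # rises with liver iron load
    d_plasma = uptake * (1.0 - hepcidin) - 0.1 * plasma
    d_liver = 0.1 * plasma - export * liver
    return [d_plasma, d_liver]

normal = solve_ivp(iron_odes, (0, 400), [1.0, 5.0])
blunted = solve_ivp(iron_odes, (0, 400), [1.0, 5.0], args=(1.0, 0.05, 20.0))
print("liver iron, normal feedback: ", round(normal.y[1, -1], 2))
print("liver iron, blunted feedback:", round(blunted.y[1, -1], 2))
```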

  5. Advances in Computational Fluid-Structure Interaction and Flow Simulation Conference

    CERN Document Server

    Takizawa, Kenji

    2016-01-01

    This contributed volume celebrates the work of Tayfun E. Tezduyar on the occasion of his 60th birthday. The articles it contains were born out of the Advances in Computational Fluid-Structure Interaction and Flow Simulation (AFSI 2014) conference, also dedicated to Prof. Tezduyar and held at Waseda University in Tokyo, Japan on March 19-21, 2014. The contributing authors represent a group of international experts in the field who discuss recent trends and new directions in computational fluid dynamics (CFD) and fluid-structure interaction (FSI). Organized into seven distinct parts arranged by thematic topics, the papers included cover basic methods and applications of CFD, flows with moving boundaries and interfaces, phase-field modeling, computer science and high-performance computing (HPC) aspects of flow simulation, mathematical methods, biomedical applications, and FSI. Researchers, practitioners, and advanced graduate students working on CFD, FSI, and related topics will find this collection to be a defi...

  6. Implementation of natural frequency analysis and optimality criterion design. [computer technique for structural analysis

    Science.gov (United States)

    Levy, R.; Chai, K.

    1978-01-01

    A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
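
    A sketch of simultaneous (subspace) inverse iteration for the lowest modes of K x = λ M x, run on a toy spring-mass chain rather than the paper's structural models. The random start vectors mimic the first design cycle; re-using converged vectors, as described above, would warm-start later cycles.

```python
# Subspace inverse iteration for the lowest natural modes of K x = lam * M x
# (toy chain model; not the paper's structures).
import numpy as np

n, m = 50, 3                                             # DOFs, modes sought
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # chain stiffness
M = np.eye(n)                                            # unit masses
X = np.random.default_rng(1).standard_normal((n, m))     # random start block

for _ in range(50):
    X = np.linalg.solve(K, M @ X)     # inverse iteration amplifies low modes
    X, _ = np.linalg.qr(X)            # keep the block orthonormal
lam = np.diag(X.T @ K @ X) / np.diag(X.T @ M @ X)        # Rayleigh quotients
print("three lowest natural frequencies:", np.sqrt(np.sort(lam)))
```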

  7. Computational needs for modelling accelerator components

    International Nuclear Information System (INIS)

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs

  8. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  9. Exploratory Topology Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Holden Deleuran, Anders; Pauly, Mark; Tamke, Martin

    2016-01-01

    The development of novel form-active hybrid structures (FAHS) is impeded by a lack of modelling tools that allow for exploratory topology modelling of shaped assemblies. We present a flexible and real-time computational design modelling pipeline developed for the exploratory modelling of FAHS...... that enables designers and engineers to iteratively construct and manipulate form-active hybrid assembly topology on the fly. The pipeline implements Kangaroo2's projection-based methods for modelling hybrid structures consisting of slender beams and cable networks. A selection of design modelling sketches...... is presented in which the developed modelling pipeline has been integrated to explore the design space delineated by FAHS....

  10. Description of a method for computing fluid-structure interaction

    International Nuclear Information System (INIS)

    Gantenbein, F.

    1982-02-01

    A general formulation allowing computation of structure vibrations in a dense fluid is described. It is based on fluid modelling by fluid finite elements. Two variables are associated with each fluid node: the pressure p and a variable π defined by p = d²π/dt². Coupling between structure and fluid is introduced by surface elements. This method is easy to introduce in a general finite element code. Validation was obtained by analytical calculations and tests. It is widely used for vibrational and seismic studies of pipes and internals of nuclear reactors; some applications are presented.

  11. Computer modeling of the Cabriolet Event

    International Nuclear Information System (INIS)

    Kamegai, M.

    1979-01-01

    Computer modeling techniques are described for calculating the results of underground nuclear explosions at depths shallow enough to produce cratering. The techniques are applied to the Cabriolet Event, a well-documented nuclear excavation experiment, and the calculations give good agreement with the experimental results. It is concluded that, given data obtainable by outside observers, these modeling techniques are capable of verifying the yield and depth of underground nuclear cratering explosions, and that they could thus be useful in monitoring another country's compliance with treaty agreements on nuclear testing limitations. Several important facts emerge from the study: (1) seismic energy is produced by only a fraction of the nuclear yield, a fraction depending strongly on the depth of shot and the mechanical properties of the surrounding rock; (2) temperature of the vented gas can be predicted accurately only if good equations of state are available for the rock in the detonation zone; and (3) temperature of the vented gas is strongly dependent on the cooling effect, before venting, of mixing with melted rock in the expanding cavity and, to a lesser extent, on the cooling effect of water in the rock

  12. Random matrix model of adiabatic quantum computing

    International Nuclear Information System (INIS)

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-01-01

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size
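
    A sketch of the spacing statistics that the Brody parameter interpolates between (our own illustration, not the paper's code): eigenvalues of a random GOE matrix show Wigner-type level repulsion, while independent levels are Poissonian. The unfolding here, dividing by the mean bulk spacing, is deliberately crude but suffices for a qualitative comparison.

```python
# Compare normalized nearest-neighbor spacings: GOE (Wigner, "chaotic")
# versus independent levels (Poisson, "regular").
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 1000))
goe = np.linalg.eigvalsh((A + A.T) / 2.0)          # chaotic (Wigner) spectrum
poisson = np.sort(rng.uniform(0.0, 1.0, 1000))     # regular (Poisson) levels

for name, levels in (("GOE", goe), ("Poisson", poisson)):
    bulk = levels[250:750]                         # stay away from the edges
    s = np.diff(bulk) / np.diff(bulk).mean()       # normalized spacings
    print(f"{name:7s} fraction of spacings < 0.25: {np.mean(s < 0.25):.2f}")
```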

  13. Computational modeling of acute myocardial infarction.

    Science.gov (United States)

    Sáez, P; Kuhl, E

    2016-01-01

    Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocyte death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step toward simulating the progression of myocardial infarction with the ultimate goal of predicting the propensity toward heart failure as a function of infarct intensity, location, and size.

  14. Computational and Organotypic Modeling of Microcephaly ...

    Science.gov (United States)

    Microcephaly is associated with reduced cortical surface area and ventricular dilations. Many genetic and environmental factors precipitate this malformation, including prenatal alcohol exposure and maternal Zika infection. This complexity motivates the engineering of computational and experimental models to probe the underlying molecular targets, cellular consequences, and biological processes. We describe an Adverse Outcome Pathway (AOP) framework for microcephaly derived from literature on all gene-, chemical-, or viral- effects and brain development. Overlap with NTDs is likely, although the AOP connections identified here focused on microcephaly as the adverse outcome. A query of the Mammalian Phenotype Browser database for 'microcephaly' (MP:0000433) returned 85 gene associations; several function in microtubule assembly and the centrosome cycle regulated by microcephalin (MCPH1), a gene for primary microcephaly in humans. The developing ventricular zone is the likely target. In this zone, neuroprogenitor cells (NPCs) self-replicate during the 1st trimester setting brain size, followed by neural differentiation of the neocortex. Recent studies with human NPCs confirmed infectivity with Zika virions invoking critical cell loss (apoptosis) of precursor NPCs; similar findings have been shown with fetal alcohol or methylmercury exposure in rodent studies, leading to mathematical models of NPC dynamics in size determination of the ventricular zone. A key event

  15. Computer Simulation of Atoms Nuclei Structure Using Information Coefficients of Proportionality

    OpenAIRE

    Labushev, Mikhail M.

    2012-01-01

    The latest research on the proportionality of atomic weights of chemical elements made it possible to obtain 3 x 3 matrices for the calculation of information coefficients of proportionality Ip that can be used for 3D modeling of the structure of the atomic nucleus. The results of computer simulation show the high potential of nucleus structure research for the characterization of their chemical and physical properties.

  16. A Framework for Understanding Physics Students' Computational Modeling Practices

    Science.gov (United States)

    Lunk, Brandon Robert

    their existing physics content knowledge, particularly their knowledge of analytic procedures. While this existing knowledge was often applied in inappropriate circumstances, the students were still able to display a considerable amount of understanding of the physics content and of analytic solution procedures. These observations could not be adequately accommodated by the existing literature of programming comprehension. In extending the resource framework to the task of computational modeling, I model students' practices in terms of three important elements. First, a knowledge base includes resources for understanding physics, math, and programming structures. Second, a mechanism for monitoring and control describes students' expectations as being directed towards numerical, analytic, qualitative or rote solution approaches, which can be influenced by the problem representation. Third, a set of solution approaches---many of which were identified in this study---describe what aspects of the knowledge base students use and how they use that knowledge to enact their expectations. This framework allows us as researchers to track student discussions and pinpoint the source of difficulties. This work opens up many avenues of potential research. First, this framework gives researchers a vocabulary for extending Resource Theory to other domains of instruction, such as modeling how physics students use graphs. Second, this framework can be used as the basis for modeling expert physicists' programming practices. Important instructional implications also follow from this research. Namely, as we broaden the use of computational modeling in the physics classroom, our instructional practices should focus on helping students understand the step-by-step nature of programming in contrast to the already salient analytic procedures.

  17. Dimensionality reduction in computational demarcation of protein tertiary structures.

    Science.gov (United States)

    Joshi, Rajani R; Panigrahi, Priyabrata R; Patil, Reshma N

    2012-06-01

    Predictive classification of major structural families and fold types of proteins is investigated deploying logistic regression. Only five to seven dimensional quantitative feature vector representations of tertiary structures are found adequate. Results for benchmark sample of non-homologous proteins from SCOP database are presented. Importance of this work as compared to homology modeling and best-known quantitative approaches is highlighted.
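
    An illustrative sketch of the approach on synthetic data (not the SCOP features used in the paper): multinomial logistic regression assigning "fold types" from a six-dimensional numeric representation of each structure.

```python
# Logistic-regression fold classification on synthetic low-dimensional
# features (invented data, not SCOP).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_folds, per_class, dim = 4, 100, 6
centers = rng.normal(0.0, 3.0, size=(n_folds, dim))       # one centroid/fold
X = np.vstack([c + rng.normal(0.0, 1.0, (per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_folds), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out fold-type accuracy:", clf.score(X_te, y_te))
```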

  18. Computer models for kinetic equations of magnetically confined plasmas

    International Nuclear Information System (INIS)

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method

  19. Structural Agricultural Land Use Modelling

    OpenAIRE

    Fezzi, Carlo; Bateman, Ian J.

    2009-01-01

    This paper develops a structural econometric model of agricultural land use and production based on the joint multi-output technology representation introduced by Chambers and Just (1989). Starting from a flexible specification of the farm profit function we derive land use allocation, input applications, crops yield and livestock number equations in a joint and theoretically consistent framework. We present an empirical application using fine-scale spatial data covering the entirety of Engla...

  20. Editorial: Modelling and computational challenges in granular materials

    OpenAIRE

    Weinhart, Thomas; Thornton, Anthony Richard; Einav, Itai

    2015-01-01

    This is the editorial for the special issue on “Modelling and computational challenges in granular materials” in the journal on Computational Particle Mechanics (CPM). The issue aims to provide an opportunity for physicists, engineers, applied mathematicians and computational scientists to discuss the current progress and latest advancements in the field of advanced numerical methods and modelling of granular materials. The focus will be on computational methods, improved algorithms and the m...

  1. Propagation of Computer Virus under Human Intervention: A Dynamical Model

    OpenAIRE

    Chenquan Gan; Xiaofan Yang; Wanping Liu; Qingyi Zhu; Xulong Zhang

    2012-01-01

    This paper examines the propagation behavior of computer virus under human intervention. A dynamical model describing the spread of computer virus, under which a susceptible computer can become recovered directly and an infected computer can become susceptible directly, is proposed. Through a qualitative analysis of this model, it is found that the virus-free equilibrium is globally asymptotically stable when the basic reproduction number R0≤1, whereas the viral equilibrium is globally asympt...
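
    A hedged sketch of a compartmental model in the spirit described above (an infected computer can revert to susceptible directly, and a susceptible one can be immunized directly); all rate constants are invented, not the paper's, and the printed R0 follows from this toy model's virus-free equilibrium.

```python
# Toy compartmental virus-propagation model with invented rates.
from scipy.integrate import solve_ivp

beta, gamma, delta = 0.5, 0.2, 0.1   # infection, cure-to-S, cure-to-R rates
alpha, eta = 0.05, 0.05              # direct S->R immunization, R->S decay

def virus(t, y):
    S, I, R = y
    dS = -beta * S * I + gamma * I - alpha * S + eta * R
    dI = beta * S * I - (gamma + delta) * I
    dR = delta * I + alpha * S - eta * R
    return [dS, dI, dR]

S_star = eta / (eta + alpha)               # virus-free equilibrium share of S
R0 = beta * S_star / (gamma + delta)       # basic reproduction number
sol = solve_ivp(virus, (0, 400), [0.99, 0.01, 0.0])
print(f"R0 = {R0:.2f}; infected fraction at t=400: {sol.y[1, -1]:.4f}")
```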

  2. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Computational Models of Laryngeal Aerodynamics: Potentials and Numerical Costs.

    Science.gov (United States)

    Sadeghi, Hossein; Kniesburges, Stefan; Kaltenbacher, Manfred; Schützenberger, Anne; Döllinger, Michael

    2018-02-07

    Human phonation is based on the interaction between tracheal airflow and laryngeal dynamics. This fluid-structure interaction is based on the energy exchange between airflow and vocal folds. Major challenges in analyzing the phonatory process in-vivo are the small dimensions and the poor accessibility of the region of interest. For improved analysis of the phonatory process, numerical simulations of the airflow and the vocal fold dynamics have been suggested. Even though most of the models reproduced the phonatory process fairly well, development of comprehensive larynx models is still a subject of research. In the context of clinical application, physiological accuracy and computational model efficiency are of great interest. In this study, a simple numerical larynx model is introduced that incorporates the laryngeal fluid flow. It is based on a synthetic experimental model with silicone vocal folds. The degree of realism was successively increased in separate computational models and each model was simulated for 10 oscillation cycles. Results show that relevant features of the laryngeal flow field, such as glottal jet deflection, develop even when applying rather simple static models with oscillating flow rates. Including further phonatory components such as vocal fold motion, mucosal wave propagation, and ventricular folds, the simulations show phonatory key features like intraglottal flow separation and increased flow rate in presence of ventricular folds. The simulation time on 100 CPU cores ranged between 25 and 290 hours, currently restricting clinical application of these models. Nevertheless, results show high potential of numerical simulations for better understanding of phonatory process. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  4. Elements of matrix modeling and computing with Matlab

    CERN Document Server

    White, Robert E

    2006-01-01

    As discrete models and computing have become more common, there is a need to study matrix computation and numerical linear algebra. Encompassing a diverse mathematical core, Elements of Matrix Modeling and Computing with MATLAB examines a variety of applications and their modeling processes, showing you how to develop matrix models and solve algebraic systems. Emphasizing practical skills, it creates a bridge from problems with two and three variables to more realistic problems that have additional variables. Elements of Matrix Modeling and Computing with MATLAB focuses on seven basic applicat

  5. Regenerating computer model of the thymus

    International Nuclear Information System (INIS)

    Lumb, J.R.

    1975-01-01

    This computer model simulates the cell population kinetics of the development and later degeneration of the thymus. Nutritional factors are taken into account by the growth of blood vessels in the simulated thymus. The stem cell population is kept at its maximum by allowing some stem cells to divide into two stem cells until the population reaches its maximum, thus regenerating the thymus after an insult such as irradiation. After a given number of population doublings the maximum allowed stem cell population is gradually decreased in order to simulate the degeneration of the thymus. Results show that the simulated thymus develops and degenerates in a pattern similar to that of the natural thymus. This simulation is used to evaluate cellular kinetic data for the thymus. The results from testing the internal consistency of available data are reported. The number of generations which most represents the natural thymus includes seven dividing generations of lymphocytes and one mature, nondividing generation of small lymphocytes. The size of the resulting developed thymus can be controlled without affecting other variables by changing the maximum stem cell population allowed. In addition, recovery from irradiation is simulated

  6. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Global tree network for computing structures enabling global processing operations

    Science.gov (United States)

    Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
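
    A toy sketch of the collective operations the record describes: a reduction flowing upstream from the leaves to the root of a binary tree, followed by a broadcast of the result back downstream (plain Python recursion, no actual router hardware).

```python
# Tree reduction and broadcast over an implicit binary tree of node values.
def reduce_up(node, values, op):
    """Post-order reduction: combine children's results into the parent."""
    left, right = 2 * node + 1, 2 * node + 2
    result = values[node]
    if left < len(values):
        result = op(result, reduce_up(left, values, op))
    if right < len(values):
        result = op(result, reduce_up(right, values, op))
    return result

values = [5, 3, 8, 1, 9, 2, 7]                     # one value per node
total = reduce_up(0, values, lambda a, b: a + b)   # upstream reduction
broadcast = [total] * len(values)                  # downstream broadcast
print("reduced sum at root:", total, "| broadcast:", broadcast)
```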

  8. Computational Strategies for the Architectural Design of Bending Active Structures

    DEFF Research Database (Denmark)

    Tamke, Martin; Nicholas, Paul

    2013-01-01

    Active bending introduces a new level of integration into the design of architectural structures, and opens up new complexities for the architectural design process. In particular, the introduction of material variation reconfigures the design space. Through the precise specification...... of their stiffness, it is possible to control and pre-calibrate the bending behaviour of a composite element. This material capacity challenges architecture’s existing methods for design, specification and prediction. In this paper, we demonstrate how architects might connect the designed nature of composites...... with the design of bending-active structures, through computational strategies. We report three built structures that develop architecturally oriented design methods for bending-active systems using composite materials. These projects demonstrate the application and limits of the introduction of advanced...

  10. Models of protein-ligand crystal structures: trust, but verify

    Science.gov (United States)

    Deller, Marc C.; Rupp, Bernhard

    2015-09-01

    X-ray crystallography provides the most accurate models of protein-ligand structures. These models serve as the foundation of many computational methods including structure prediction, molecular modelling, and structure-based drug design. The success of these computational methods ultimately depends on the quality of the underlying protein-ligand models. X-ray crystallography offers the unparalleled advantage of a clear mathematical formalism relating the experimental data to the protein-ligand model. In the case of X-ray crystallography, the primary experimental evidence is the electron density of the molecules forming the crystal. The first step in the generation of an accurate and precise crystallographic model is the interpretation of the electron density of the crystal, typically carried out by construction of an atomic model. The atomic model must then be validated for fit to the experimental electron density and also for agreement with prior expectations of stereochemistry. Stringent validation of protein-ligand models has become possible as a result of the mandatory deposition of primary diffraction data, and many computational tools are now available to aid in the validation process. Validation of protein-ligand complexes has revealed some instances of overenthusiastic interpretation of ligand density. Fundamental concepts and metrics of protein-ligand quality validation are discussed and we highlight software tools to assist in this process. It is essential that end users select high quality protein-ligand models for their computational and biological studies, and we provide an overview of how this can be achieved.

  11. Computational RNA secondary structure design: empirical complexity and improved methods

    Directory of Open Access Journals (Sweden)

    Condon Anne

    2007-01-01

    Full Text Available Abstract
    Background: We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations.
    Results: To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure.
    Conclusion: Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.
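
    A sketch of the design success criterion discussed above, assuming the ViennaRNA Python bindings ('RNA') are installed: a design succeeds when the candidate sequence's minimum free energy structure equals the target. The candidate here is an arbitrary hairpin example, not output of RNA-SSD or RNAinverse.

```python
# Check whether a candidate sequence folds (at MFE) into the target structure.
# Assumes the ViennaRNA Python bindings are available as the 'RNA' module.
import RNA

target = "(((....)))"
candidate = "GGGAAAACCC"                 # arbitrary hairpin-like example

structure, mfe = RNA.fold(candidate)     # minimum free energy structure
print("MFE structure:", structure, f"({mfe:.2f} kcal/mol)")
print("designs the target:", structure == target)
```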

  12. Computational Study of Colloidal Droplet Interactions with Three Dimensional Structures

    Science.gov (United States)

    2015-05-18

    The main research goals of this proposal are to (a) create a novel multiphysics model that enables the prediction of colloidal droplet interactions with complex porous structures; (b) advance the...diameter and penetration depth. (2) A model for the transport and deposition of nanoparticles in the porous matrix during droplet sorption was...process.

  13. Carbody structural lightweighting based on implicit parameterized model

    Science.gov (United States)

    Chen, Xin; Ma, Fangwu; Wang, Dengfeng; Xie, Chen

    2014-05-01

    Most recent research on carbody lightweighting has focused on substitute materials and new processing technologies rather than structures. However, new materials and processing techniques inevitably lead to higher costs. Also, material substitution and processing lightweighting have to be realized through body structural profiles and locations. The conventional workload of lightweight optimization is huge: model modifications involve heavy manual work and always lead to a large number of iterative calculations. As a new technique in carbody lightweighting, implicit parameterization is used in this paper to optimize the carbody structure and improve the materials utilization rate. Implicit parameterized structural modeling enables automatic modification and rapid multidisciplinary design optimization (MDO) of the carbody structure, which is impossible in the traditional structure finite element method (FEM) without parameterization. The structural SFE parameterized model is built in accordance with the car structural FE model in the concept development stage, and it is validated against structural performance data. The validated SFE structural parameterized model can be used to rapidly and automatically generate FE models and to evaluate different design variable groups in the integrated MDO loop. The lightweighting result for the body-in-white (BIW) after the optimization rounds reveals that the implicit parameterized model makes automatic MDO feasible and can significantly improve the computational efficiency of carbody structural lightweighting. This paper proposes an integrated method of implicit parameterized modeling and MDO, which has obvious practical advantages and industrial significance for carbody structural lightweighting design.

  14. Optimization of mathematical models for soil structure interaction

    International Nuclear Information System (INIS)

    Vallenas, J.M.; Wong, C.K.; Wong, D.L.

    1993-01-01

    Accounting for soil-structure interaction in the design and analysis of major structures for DOE facilities can involve significant costs in terms of modeling and computer time. Using computer programs like SASSI for modeling major structures, especially buried structures, requires the use of models with a large number of soil-structure interaction nodes. The computer time requirements (and costs) increase as a function of the number of interaction nodes to the third power. The added computer and labor cost for data manipulation and post-processing can further increase the total cost. This paper provides a methodology to significantly reduce the number of interaction nodes. This is achieved by selectively increasing the thickness of soil layers modeled based on the need for the mathematical model to capture as input only those frequencies that can actually be transmitted by the soil media. The authors have rarely found that a model needs to capture frequencies as high as 33 Hz. Typically coarser meshes (and a lesser number of interaction nodes) are adequate

  15. COMPUTER MODELLING OF ENERGY SAVING EFFECTS

    Directory of Open Access Journals (Sweden)

    Marian JANCZAREK

    2016-09-01

    The paper presents an analysis of the dynamics of heat transfer through the outer wall of thermal-technical spaces, taking into account the sinusoidal nature of changes in atmospheric temperature. These periodic variations at the outer surface of the chamber wall appear at the inner wall as a sinusoidal change as well, but one that is attenuated and shifted in phase. A properly selected phase shift is clearly important for saving the energy needed to maintain a specified thermal regime inside the chamber. Laboratory tests on the model and on the actual object allowed an optimal design of the chamber with respect to the structure of the partition as well as the geographical orientation of the chamber.
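
    The attenuation and phase shift the abstract describes follow from the classical solution for periodic conduction. As a rough illustration (treating the wall as a semi-infinite solid, with example material values rather than the paper's data), the damping depth, decrement factor, and time lag can be computed directly:

```python
# Periodic-conduction sketch: a sinusoidal outdoor temperature is
# attenuated by exp(-L/delta) and delayed by (L/delta)/omega as it
# penetrates a wall of thickness L (semi-infinite-solid approximation).
import math

alpha = 7e-7          # thermal diffusivity of concrete, m^2/s (typical)
L     = 0.30          # wall thickness, m (example value)
P     = 24 * 3600.0   # period of the outdoor temperature cycle, s
omega = 2 * math.pi / P

delta = math.sqrt(2 * alpha / omega)       # damping depth, m
decrement = math.exp(-L / delta)           # inner/outer amplitude ratio
time_lag_h = (L / delta) / omega / 3600.0  # phase shift as a time lag, h

print(f"damping depth : {delta:.3f} m")
print(f"decrement     : {decrement:.3f}")   # ~0.12: strongly attenuated
print(f"time lag      : {time_lag_h:.1f} h")  # ~8 h behind the outdoors
```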

  16. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation, and a language based on such a model, are useful for compositionally developing asynchronous and concurrent programs that make frequent use of bit-level operations; examples include video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized for, or oriented toward, bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes providing serial bit-level operations, connected by FIFO buffers; it expresses bit-level computation naturally and supports compositional development. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. The model enables precise, formal expression of the process of computation, and the notion of primitive program elements for control and operation can be expressed synthetically. Specifically, the model is based on uniform primitive processes, called primitives, each with at most three terminals and four ordered rules, and on bidirectional communication using vehicles called carriers. A novel feature is that a carrier moving between two terminals can concisely express certain kinds of computation, such as synchronization and bidirectional communication. These properties make the model well suited to compositional bit-level computation, since the uniform computation elements suffice to build components with practical functionality. Future applications of the model may enable further research on a base model for fine-grained parallel computer architecture, since the model is suited to expressing massive concurrency as a network of primitives.
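
    To make the dataflow-like style concrete, here is a minimal Python sketch of bit-serial processes connected by FIFO-like streams: a bit-serial adder whose carry is internal process state. This illustrates the general style only; it does not reproduce the actual semantics of A-BITS processes or APEC primitives and carriers.

```python
# Dataflow-style sketch: generator "processes" stream bits LSB-first
# through FIFO-like channels; the adder keeps its carry as local state.
def bit_stream(n, width):
    """Emit the bits of n, least significant first."""
    for i in range(width):
        yield (n >> i) & 1

def serial_adder(a_fifo, b_fifo):
    """Process: consume two bit streams, emit their sum bit-serially."""
    carry = 0
    for a, b in zip(a_fifo, b_fifo):
        total = a + b + carry
        yield total & 1          # sum bit out
        carry = total >> 1       # carry kept as internal process state

width = 5
out = serial_adder(bit_stream(11, width), bit_stream(6, width))
bits = list(out)                                # LSB-first sum bits
print(bits)                                     # [1, 0, 0, 0, 1]
print(sum(b << i for i, b in enumerate(bits)))  # 17 = 11 + 6
```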

  17. Cloud Computing Adoption Business Model Factors: Does Enterprise Size Matter?

    OpenAIRE

    Bogataj Habjan, Kristina; Pucihar, Andreja

    2017-01-01

    This paper presents the results of research investigating the impact of business model factors on cloud computing adoption. The introduced research model consists of 40 cloud computing business model factors, grouped into eight factor groups. Their impact on, and importance for, cloud computing adoption were investigated among enterprises in Slovenia. Furthermore, differences in opinion according to enterprise size were investigated. Research results show no statistically significant impacts of in...

  18. Graph Partitioning Models for Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Kolda, T.G.

    1999-03-02

    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
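
    The standard formulation the survey critiques is easy to state in code. The sketch below (using networkx's Kernighan-Lin heuristic on a toy grid graph; both the library choice and the example graph are ours, not the paper's) bisects a computation graph and reports the edge cut, the conventional but imperfect proxy for communication cost:

```python
# Graph-partitioning sketch: vertices = units of computation,
# edges = data dependencies; a balanced bisection minimizes edge cut.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# A 4x4 grid: a toy stand-in for a mesh-based computation.
G = nx.grid_2d_graph(4, 4)

part_a, part_b = kernighan_lin_bisection(G, seed=1)
cut = nx.cut_size(G, part_a, part_b)

print(f"|A| = {len(part_a)}, |B| = {len(part_b)}, edge cut = {cut}")
# The edge cut counts dependencies crossing the partition -- the metric
# the survey argues only approximates true communication cost.
```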

  19. Computational Intelligence Agent-Oriented Modelling

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman

    2006-01-01

    Vol. 5, No. 2 (2006), pp. 430-433, ISSN 1109-2777. R&D Projects: GA MŠk 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords: multi-agent systems * adaptive agents * computational intelligence Subject RIV: IN - Informatics, Computer Science

  20. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.
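
    As a minimal instance of one model class the book unifies, the following Python sketch fits a two-level random-intercept (linear mixed) model with statsmodels on synthetic data; the data-generating values are invented for illustration:

```python
# Two-level random-intercept model: y_ij = b0 + b1*x_ij + u_j + e_ij,
# where u_j is a latent (unobserved) group-level intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 20, 15
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 1.0, n_groups)                # latent group effects
x = rng.normal(size=n_groups * n_per)
y = 2.0 + 0.5 * x + u[g] + rng.normal(0.0, 0.8, n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "g": g})
model = smf.mixedlm("y ~ x", df, groups=df["g"])
fit = model.fit()
print(fit.summary())      # fixed effects plus the group-level variance
```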