WorldWideScience

Sample records for model structure computation

  1. Constructive modelling of structural turbulence: computational experiment

    Energy Technology Data Exchange (ETDEWEB)

    Belotserkovskii, O M; Oparin, A M; Troshkin, O V [Institute for Computer Aided Design, Russian Academy of Sciences, Vtoraya Brestskaya st., 19/18, Moscow, 123056 (Russian Federation); Chechetkin, V M [Keldysh Institute for Applied Mathematics, Russian Academy of Sciences, Miusskaya sq., 4, Moscow, 125047 (Russian Federation)], E-mail: o.bel@icad.org.ru, E-mail: a.oparin@icad.org.ru, E-mail: troshkin@icad.org.ru, E-mail: chech@keldysh.ru

    2008-12-15

    Constructively, the analysis of the phenomenon of turbulence can and must be performed through direct numerical simulation of the mechanics supposed to be inherent in secondary flows. This mechanics reveals itself through such instances as large vortices, structural instabilities, vortex cascades and the principal modes discussed in this paper. Like fragments of a puzzle, they speak of a motion ordered with its own nuts and bolts, however chaotic it appears at first sight. This opens an opportunity for a multi-oriented approach, the prime ideology of which seems to be a rational combination of grid, spectral and statistical methods. An attempt is made to bring together the above instances and produce an alternative point of view on the phenomenon in question, based on the main laws of conservation.

  2. Improved Computational Model of Grid Cells Based on Column Structure

    Institute of Scientific and Technical Information of China (English)

    Yang Zhou; Dewei Wu; Weilong Li; Jia Du

    2016-01-01

    To simulate the firing pattern of biological grid cells, this paper presents an improved computational model of grid cells based on a column structure. In this model, the displacement along different directions is processed by a modulus operation, and the obtained remainder is associated with the firing rate of the grid cell. Compared with the original model, the improvements are twofold: the base of the modulus operation is changed, and the firing rate within the firing field is encoded by a Gaussian-like function. Simulation validates that the firing pattern generated by the improved computational model is more consistent with biological characteristics than that of the original model. Although the firing pattern is badly influenced by cumulative positioning error, the computational model can still generate the regular hexagonal firing pattern when the real-time positioning results are corrected.
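
    As an illustration of the scheme this abstract describes, the following minimal Python sketch reduces a 2-D position modulo a triangular lattice and encodes the remainder's distance to the nearest field centre with a Gaussian-like function. The lattice basis, spacing, field width and phase are illustrative assumptions, not parameters from the paper.

      import numpy as np

      # Minimal sketch of the modulus-plus-Gaussian firing scheme described in
      # the record above. Spacing, sigma and phase are illustrative assumptions.
      def grid_cell_rate(pos, spacing=1.0, sigma=0.12, phase=(0.0, 0.0)):
          """Gaussian-like firing rate on a triangular lattice of firing fields."""
          B = spacing * np.array([[1.0, 0.5],
                                  [0.0, np.sqrt(3.0) / 2.0]])     # lattice basis (columns)
          c = np.linalg.solve(B, np.asarray(pos, float) - phase)  # lattice coordinates
          frac = c - np.round(c)               # the "modulus" step: keep the remainder
          delta = B @ frac                     # offset from the nearest field centre
          return float(np.exp(-delta @ delta / (2.0 * sigma ** 2)))

      print(grid_cell_rate([0.0, 0.0]))    # ~1.0: on a field centre
      print(grid_cell_rate([0.5, 0.29]))   # ~0.0: between field centres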

  3. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of the profusion of entities (such as notes) and of the tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current research covers particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis encompassing the main core components and focusing on their interdependencies. The system should be as simple as possible, while producing relevant structural analyses on a large variety of music. This paper describes the general principles of a computational framework for music analysis currently under development.

  4. Computational modelling of cohesive cracks in material structures

    Science.gov (United States)

    Vala, J.; Jarošová, P.

    2016-06-01

    Analysis of crack formation, considered as the creation of new surfaces in a material sample due to its microstructure, leads to nontrivial physical, mathematical and computational difficulties even in the rather simple case of quasistatic cohesive zone modelling inside the linear elastic theory. However, quantitative results from such evaluations are required in practice for the development and design of advanced materials, structures and technologies. Although most available software tools apply ad hoc computational predictions, this paper presents a proper formulation of such a model problem, including its verification, and sketches the multiscale construction of finite-dimensional approximations of solutions, utilizing the finite element method or similar techniques, together with references to original simulation results from engineering practice.
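
    For concreteness, the sketch below implements a bilinear traction-separation law, the standard ingredient of cohesive zone models of the kind formulated above. The peak traction and the two characteristic separations are assumed values, not material data from the paper, and only monotonic opening (no unloading or damage history) is covered.

      import numpy as np

      # Illustrative bilinear traction-separation law for a cohesive zone model;
      # all constants are assumed values, not material data from the paper.
      def cohesive_traction(delta, t_max=2.0e6, delta_0=1.0e-6, delta_f=1.0e-4):
          """Traction [Pa] across the cohesive surface at opening `delta` [m]."""
          delta = np.asarray(delta, dtype=float)
          rising = t_max * delta / delta_0                           # elastic branch
          falling = t_max * (delta_f - delta) / (delta_f - delta_0)  # softening branch
          return np.clip(np.where(delta < delta_0, rising, falling), 0.0, None)

      x = np.linspace(0.0, 2.0e-4, 2001)
      t = cohesive_traction(x)
      g_c = np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(x))   # fracture energy [J/m^2]
      print(g_c)                                          # ~0.5 * t_max * delta_f = 100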

  5. A model for ensemble NMR quantum computer using antiferromagnetic structure

    CERN Document Server

    Kokin, A A

    2000-01-01

    The one-dimensional homonuclear periodic array of nuclear spins I = 1/2, owing to the hyperfine interaction of nuclear spins with electronic magnetic moments in an antiferromagnetic structure, is considered. The neighbouring nuclear spins in such an array are oppositely oriented and have resonant frequencies determined by the hyperfine interaction constant, the applied magnetic field value and the interaction with the left and right neighbouring nuclear spins. The difference between the resonant frequencies of nuclear spins, when the neighbouring spins have different and the same states, is used to control the spin dynamics by means of selective resonant RF pulses, both for single nuclear spins and for an ensemble of nuclear spins with the same resonant frequency. A model for an NMR quantum computer of cellular-automaton type based on a one-dimensional homonuclear periodic array of spins is proposed. This model may be generalized to a large ensemble of parallel working one-dimensional arrays and to two-dimensional and three-dimensional structures.

  6. Computational modeling and impact analysis of textile composite structures

    Science.gov (United States)

    Hur, Hae-Kyu

    This study is devoted to the development of an integrated numerical modeling approach enabling one to investigate the static and dynamic behaviors and failures of 2-D textile composite as well as 3-D orthogonal woven composite structures weakened by cracks and subjected to static-, impact- and ballistic-type loads. As more complicated modeling of textile composite structures is introduced, homogenization schemes, geometrical modeling and crack propagation become more difficult problems to solve. To overcome these problems, this study presents effective mesh-generation schemes, homogenization modeling based on a repeating unit cell and sinusoidal functions, and a cohesive element to study micro-crack shapes. This proposed research has two parts: (1) studying the behavior of textile composites under static loads, and (2) studying the dynamic responses of these textile composite structures subjected to transient/ballistic loading. In the first part, efficient homogenization schemes are suggested to show the influence of textile architectures on mechanical characteristics, considering the micro modeling of the repeating unit cell. Furthermore, the structures of multi-layered or multi-phase composites combined with different laminae, such as a sub-laminate, are considered to find the mechanical characteristics. A simple progressive failure mechanism for the textile composites is also presented. In the second part, this study focuses on three main phenomena to solve the dynamic problems: micro-crack shapes, textile architectures and textile effective moduli. To obtain good solutions of the dynamic problems, this research attempts to use four approaches: (I) determination of governing equations via a three-level hierarchy: micro-mechanical unit cell analysis, layer-wise analysis accounting for transverse strains and stresses, and structural analysis based on anisotropic plate layers, (II) development of an efficient computational approach enabling one to perform transient ...

  7. Computer generation of structural models of amorphous Si and Ge

    Science.gov (United States)

    Wooten, F.; Winer, K.; Weaire, D.

    1985-04-01

    We have developed and applied a computer algorithm that generates realistic random-network models of a-Si with periodic boundary conditions. These are the first models to have correlation functions that show no serious discrepancy with experiment. The algorithm provides a much-needed systematic approach to model construction that can be used to generate models of a large class of amorphous materials.

  8. COMPUTER SIMULATION OF ANTIFERROMAGNETIC STRUCTURES DESCRIBED BY THE THREE-VERTEX ANTIFERROMAGNETIC POTTS MODEL

    National Research Council Canada - National Science Library

    Yarash K. Abuev; Albert B. Babaev; Pharkhat E. Esetov

    2017-01-01

    Objectives A computer simulation of the antiferromagnetic structures described by the three-vertex Potts model on a triangular lattice is performed, taking into account the antiferromagnetic exchange...

  9. Computable models

    CERN Document Server

    Turner, Raymond

    2009-01-01

    Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al

  10. Computational Tools for Modeling and Measuring Chromosome Structure

    Science.gov (United States)

    Ross, Brian Christopher

    DNA conformation within cells has many important biological implications, but there are challenges both in modeling DNA due to the need for specialized techniques, and experimentally since tracing out in vivo conformations is currently impossible. This thesis contributes two computational projects to these efforts. The first project is a set of online and offline calculators of conformational statistics using a variety of published and unpublished methods, addressing the current lack of DNA model-building tools intended for general use. The second project is a reconstructive analysis that could enable in vivo mapping of DNA conformation at high resolution with current experimental technology. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  11. Using computational models to relate structural and functional brain connectivity.

    Science.gov (United States)

    Hlinka, Jaroslav; Coombes, Stephen

    2012-07-01

    Modern imaging methods allow a non-invasive assessment of both structural and functional brain connectivity. This has led to the identification of disease-related alterations affecting functional connectivity. The mechanism of how such alterations in functional connectivity arise in a structured network of interacting neural populations is as yet poorly understood. Here we use a modeling approach to explore the way in which this can arise and to highlight the important role that local population dynamics can have in shaping emergent spatial functional connectivity patterns. The local dynamics for a neural population is taken to be of the Wilson-Cowan type, whilst the structural connectivity patterns used, describing long-range anatomical connections, cover both realistic scenarios (from the CoComac database) and idealized ones that allow for more detailed theoretical study. We have calculated graph-theoretic measures of functional network topology from numerical simulations of model networks. The effect of the form of local dynamics on the observed network state is quantified by examining the correlation between structural and functional connectivity. We document a profound and systematic dependence of the simulated functional connectivity patterns on the parameters controlling the dynamics. Importantly, we show that a weakly coupled oscillator theory explaining these correlations and their variation across parameter space can be developed. This theoretical development provides a novel way to characterize the mechanisms for the breakdown of functional connectivity in diseases through changes in local dynamics.
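
    A toy version of this study design in Python follows: Wilson-Cowan nodes coupled through a structural matrix, with functional connectivity read off as correlations of the excitatory time courses. All parameter values (gains, coupling strength, noise level, network size) are illustrative assumptions, not those of the paper.

      import numpy as np

      # Wilson-Cowan nodes on a random structural network; functional connectivity
      # is the correlation matrix of the excitatory traces. Parameters are invented.
      rng = np.random.default_rng(0)
      sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

      N = 8
      SC = rng.random((N, N)); SC = 0.5 * (SC + SC.T); np.fill_diagonal(SC, 0.0)

      E, I = rng.random(N), rng.random(N)
      dt, steps, g = 0.01, 20000, 0.5
      traces = np.empty((steps, N))
      for t in range(steps):
          drive = g * SC @ E + 1.0 + 0.1 * rng.standard_normal(N)
          E = E + dt * (-E + sigmoid(16.0 * E - 12.0 * I + drive))
          I = I + dt * (-I + sigmoid(15.0 * E - 3.0 * I))
          traces[t] = E

      FC = np.corrcoef(traces[steps // 2:].T)        # simulated functional connectivity
      iu = np.triu_indices(N, k=1)
      print("SC-FC correlation:", np.corrcoef(SC[iu], FC[iu])[0, 1])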

  12. Multilevel model reduction for uncertainty quantification in computational structural dynamics

    Science.gov (United States)

    Ezvan, O.; Batou, A.; Soize, C.; Gagliardini, L.

    2016-11-01

    Within the continuum mechanics framework, there are two main approaches to model interfaces: classical cohesive zone modeling (CZM) and interface elasticity theory. The classical CZM deals with geometrically non-coherent interfaces for which the constitutive relation is expressed in terms of traction-separation laws. However, CZM lacks any response related to the stretch of the mid-plane of the interface. This issue becomes problematic particularly at small scales with increasing interface area to bulk volume ratios, where interface elasticity is no longer negligible. The interface elasticity theory, in contrast to CZM, deals with coherent interfaces that are endowed with their own energetic structures, and thus is capable of capturing elastic resistance to tangential stretch. Nonetheless, the interface elasticity theory suffers from the lack of inelastic material response, regardless of the strain level. The objective of this contribution therefore is to introduce a generalized mechanical interface model that couples both the elastic response along the interface and the cohesive response across the interface whereby interface degradation is taken into account. The material degradation of the interface mid-plane is captured by a non-local damage model of integral-type. The out-of-plane decohesion is described by a classical cohesive zone model. These models are then coupled through their corresponding damage variables. The non-linear governing equations and the weak forms thereof are derived. The numerical implementation is carried out using the finite element method and consistent tangents are derived. Finally, a series of numerical examples is studied to provide further insight into the problem and to carefully elucidate key features of the proposed theory.

  13. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Abstract. Background: In recent years, large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increasing frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ, depending on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on its socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale; the model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization and by defining the same importation of infected cases from international travel. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference in epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact patterns of the two approaches. The age ...
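
    To make the term "structured metapopulation model" concrete, here is a deliberately tiny stochastic sketch: two patches coupled by travel of infectious individuals, in the spirit of (but far simpler than) the GLEaM-style models compared above. All rates and population sizes are invented.

      import numpy as np

      # Two-patch stochastic SIR metapopulation; rates and populations are invented.
      rng = np.random.default_rng(1)
      N = np.array([500_000, 200_000])
      S, I, R = N - np.array([10, 0]), np.array([10, 0]), np.array([0, 0])
      beta, gamma, travel = 0.3, 0.1, 1e-3            # per-day rates

      for day in range(300):
          p_inf = 1.0 - np.exp(-beta * I / N)         # per-susceptible daily risk
          new_inf = rng.binomial(S, p_inf)
          new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
          S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
          move = rng.binomial(I, travel)              # infectious travellers
          I = I - move + move[::-1]                   # swap between the two patches

      print("final attack rates:", R / N)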

  14. Computational modeling of fluid structural interaction in arterial stenosis

    Science.gov (United States)

    Bali, Leila; Boukedjane, Mouloud; Bahi, Lakhdar

    2013-12-01

    Atherosclerosis affects the arterial blood vessels, causing stenosis, because of which the artery hardens, resulting in loss of elasticity in the affected region. In this paper, we present an approach to model the fluid-structure interaction through such an atherosclerosis-affected region of the artery. The blood is assumed to be an incompressible Newtonian viscous fluid, and the vessel wall is treated as a thick-walled, incompressible and isotropic material with uniform mechanical properties. The numerical simulation has been studied in the context of the Navier-Stokes equations for interaction with an elastic solid. The fluid flow and wall motion were initially studied separately. Discretized forms of the transformed wall and flow equations, which are coupled through the boundary conditions at their interface, are obtained by the control volume method and solved simultaneously; to study the effects of wall deformability, solutions are obtained for both rigid and elastic walls. The results indicate that deformability of the wall causes an increase in the time average of the pressure drop, but a decrease in the maximum wall shear stress. Displacement and stress distributions in the wall are presented.

  15. Computational Methods for Protein Structure Prediction and Modeling Volume 2: Structure Prediction

    CERN Document Server

    Xu, Ying; Liang, Jie

    2007-01-01

    Volume 2 of this two-volume sequence focuses on protein structure prediction and includes protein threading, de novo methods, applications to membrane proteins and protein complexes, structure-based drug design, as well as structure prediction as a systems problem. A series of appendices reviews the biological and chemical basics related to protein structure, computer science for structural informatics, and prerequisite mathematics and statistics.

  16. Structure and models of artifactual routine design for computational synthesis

    NARCIS (Netherlands)

    Jauregui Becker, Juan Manuel; Tragter, Hendrik; van Houten, Frederikus J.A.M.

    2009-01-01

    Computational synthesis (CS) researches the automatic generation of solutions to design problems. The aim is to shorten design times and present the user with multiple design solutions. However, initializing a new CS process has not received much attention in the literature. With this motivation, this paper ...

  17. Connecting Protein Structure to Intermolecular Interactions: A Computer Modeling Laboratory

    Science.gov (United States)

    Abualia, Mohammed; Schroeder, Lianne; Garcia, Megan; Daubenmire, Patrick L.; Wink, Donald J.; Clark, Ginevra A.

    2016-01-01

    An understanding of protein folding relies on a solid foundation of a number of critical chemical concepts, such as molecular structure, intra-/intermolecular interactions, and relating structure to function. Recent reports show that students struggle on all levels to achieve these understandings and use them in meaningful ways. Further, several…

  18. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi...

  19. Computational Methods for Protein Structure Prediction and Modeling Volume 1: Basic Characterization

    CERN Document Server

    Xu, Ying; Liang, Jie

    2007-01-01

    Volume one of this two-volume sequence focuses on the basic characterization of known protein structures as well as structure prediction from protein sequence information. The 11 chapters provide an overview of the field, covering key topics in modeling, force fields, classification, computational methods, and structure prediction. Each chapter is a self-contained review designed to cover (1) definition of the problem and an historical perspective, (2) mathematical or computational formulation of the problem, (3) computational methods and algorithms, (4) performance results, (5) existing software packages, and (6) strengths, pitfalls, challenges, and future research directions.

  20. Computational methods toward accurate RNA structure prediction using coarse-grained and all-atom models.

    Science.gov (United States)

    Krokhotin, Andrey; Dokholyan, Nikolay V

    2015-01-01

    Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.

  1. Impact of modeling fluid-structure interaction in the computational analysis of aortic root biomechanics.

    Science.gov (United States)

    Sturla, Francesco; Votta, Emiliano; Stevanella, Marco; Conti, Carlo A; Redaelli, Alberto

    2013-12-01

    Numerical modeling can provide detailed and quantitative information on aortic root (AR) biomechanics, improving the understanding of AR complex pathophysiology and supporting the development of more effective clinical treatments. From this standpoint, fluid-structure interaction (FSI) models are currently the most exhaustive and potentially realistic computational tools. However, AR FSI modeling is extremely challenging and computationally expensive, due to the explicit simulation of coupled AR fluid dynamics and structural response, while accounting for complex morphological and mechanical features. We developed a novel FSI model of the physiological AR simulating its function throughout the entire cardiac cycle. The model includes an asymmetric MRI-based geometry, the description of aortic valve (AV) non-linear and anisotropic mechanical properties, and time-dependent blood pressures. By comparison to an equivalent finite element structural model, we quantified the balance between the extra information and the extra computational cost associated with the FSI approach. Tissue strains and stresses computed through the two approaches did not differ significantly. The FSI approach better captured the fast AV opening and closure, and its interplay with blood fluid dynamics within the Valsalva sinuses. It also reproduced the main features of in vivo AR fluid dynamics. However, the FSI simulation was ten times more computationally demanding than its structural counterpart. Hence, the FSI approach may be worth the extra computational cost when the tackled scenarios are strongly dependent on AV transient dynamics, Valsalva sinuses fluid dynamics in relation to coronary perfusion (e.g. sparing techniques), or AR fluid dynamic alterations (e.g. bicuspid AV).

  2. An Iterative Inversion Technique to Compute Structural Martian Models for Refining Event Locations

    Science.gov (United States)

    Ceylan, S.; Khan, A.; van Driel, M.; Clinton, J. F.; Boese, M.; Euchner, F.; Giardini, D.; Garcia, R.; Lognonne, P. H.; Panning, M. P.; Banerdt, W. B.

    2016-12-01

    The InSight mission will deploy a single seismic station on Mars in 2018. The main task of the MarsQuake Service within the project includes detecting and locating quakes on Mars, and managing the event catalog. In preparation for the mission, we continually calibrate single station event location algorithms, employing seismic phase travel times computed for a suite of structural models. However, our knowledge about the interior structure of Mars is limited, which in turn will affect our ability to locate events accurately. Here, we present an iterative method to invert for the interior structure of Mars and revise event locations, consecutively. We first locate seismic events using differential arrival times (with respect to the first phase arrival) of all possible seismic phases, computed for a priori initial structural models. These models are built considering a one-dimensional average crust and current estimates of bulk mantle chemistry and areotherm. Phase picks and uncertainty assignments are done manually. Then, we invert for the interior structure employing the arrival times for the picked phases, and generate an updated suite of models, which are further used to revise the initial phase picks, and relocate events. We repeat this sequence for each additional and new entry in the travel time database to improve event locations and models for average Martian structure. In order to test our approach, we simulate the operational conditions we will encounter in practice: We compute synthetic waveforms for a realistic event catalog of 120 events, with magnitudes between 2.5 and 5.0 and double-couple source mechanisms only. 1-Hz seismograms are computed using AxiSEM and Instaseis, employing two Martian models with a thin (30 km) and thick (80 km) crust, both with and without seismic surface noise. The waveforms are hosted at the ETH servers, and are publicly accessible via FDSN web services.
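
    The location step of such a single-station workflow can be illustrated with a toy grid search: find the epicentral distance whose predicted differential (S-P) travel time matches the observed pick under a candidate structural model. The linear "travel-time curves" and the observed time below are invented placeholders for real model-derived tables and picks.

      import numpy as np

      # Toy single-station location: grid-search the distance that matches an
      # observed S-P time; travel-time curves and pick are invented placeholders.
      tt_P = lambda dist_deg: 19.0 * dist_deg     # hypothetical P travel time [s]
      tt_S = lambda dist_deg: 34.0 * dist_deg     # hypothetical S travel time [s]

      observed_sp = 105.0                         # picked S-P time [s]
      grid = np.linspace(0.5, 100.0, 2000)        # candidate distances [deg]
      misfit = np.abs((tt_S(grid) - tt_P(grid)) - observed_sp)
      print(f"best-fitting distance: {grid[np.argmin(misfit)]:.1f} deg")  # 105/15 = 7.0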

  3. Integrating aerodynamic surface modeling for computational fluid dynamics with computer aided structural analysis, design, and manufacturing

    Science.gov (United States)

    Thorp, Scott A.

    1992-01-01

    This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.

  4. Aircraft vulnerability modeling and computation methods based on product structure and CATIA

    Institute of Scientific and Technical Information of China (English)

    Li Jun; Yang Wei; Zhang Yugang; Pei Yang; Ren Yunsong; Wang Wei

    2013-01-01

    Survivability strengthening/vulnerability reduction designs have become one of the most important design disciplines of military aircraft. Due to the progressiveness and complexity of modern combat aircraft, the existing vulnerability modeling and computation methods cannot meet current engineering application requirements. Therefore, a vulnerability modeling and computation method based on product structure and CATIA is proposed, in sufficient consideration of the design characteristics of modern combat aircraft. This method directly constructs the aircraft vulnerability model with CATIA or the digital model database, and manages all the product components of the vulnerability model via the aircraft product structure. Using CAA second development, the detailed operations and computation methods of vulnerability analysis are integrated into the CATIA software environment. Comprehensive assessment data and visual kill-probability iso-contours can also be presented, which meets the vulnerability analysis requirements of modern combat aircraft effectively. The intact vulnerability model of a hypothetical aircraft is constructed, and the effects of redundancy techniques on aircraft vulnerability are assessed, validating the engineering practicality of the method.

  5. A combined computational and structural model of the full-length human prolactin receptor

    DEFF Research Database (Denmark)

    Bugge, Katrine Østergaard; Papaleo, Elena; Haxholm, Gitte Wolfsberg;

    2016-01-01

    The prolactin receptor is an archetype member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represents an intriguing diversity, providing an exceptionally challenging target for structural biology. Here, we access the molecular architecture of the monomeric human prolactin receptor by combining experimental and computational efforts. We solve the NMR structure of its transmembrane domain in micelles and collect structural data on overlapping fragments of the receptor with small-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg.

  6. A combined computational and structural model of the full-length human prolactin receptor

    Science.gov (United States)

    Bugge, Katrine; Papaleo, Elena; Haxholm, Gitte W.; Hopper, Jonathan T. S.; Robinson, Carol V.; Olsen, Johan G.; Lindorff-Larsen, Kresten; Kragelund, Birthe B.

    2016-05-01

    The prolactin receptor is an archetype member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represents an intriguing diversity, providing an exceptionally challenging target for structural biology. Here, we access the molecular architecture of the monomeric human prolactin receptor by combining experimental and computational efforts. We solve the NMR structure of its transmembrane domain in micelles and collect structural data on overlapping fragments of the receptor with small-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg.

  7. Finite-element-model updating using computational intelligence techniques: applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned to better reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies on a sound statistical basis; and • response surface methods and expectation maximization ...
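
    The optimization-based updating idea can be sketched in a few lines: tune model parameters until predicted natural frequencies match measured ones, here with a particle swarm. The two-spring toy "FEM" and all PSO constants below are invented and far simpler than the full models treated in the book.

      import numpy as np

      # Particle-swarm update of a two-spring, unit-mass model so its natural
      # frequencies match "measured" ones; toy model and constants are invented.
      rng = np.random.default_rng(2)

      def natural_freqs(k):
          K = np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])
          return np.sqrt(np.linalg.eigvalsh(K))       # unit masses: eigvals = omega^2

      measured = natural_freqs(np.array([2.0, 1.5]))  # pretend these were measured
      cost = lambda k: np.sum((natural_freqs(k) - measured) ** 2)

      n = 30
      x = rng.uniform(0.1, 5.0, (n, 2)); v = np.zeros((n, 2))
      pbest, pcost = x.copy(), np.array([cost(p) for p in x])
      gbest = pbest[pcost.argmin()]
      for _ in range(200):
          r1, r2 = rng.random((2, n, 2))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.clip(x + v, 0.01, 10.0)
          c = np.array([cost(p) for p in x])
          improved = c < pcost
          pbest[improved], pcost[improved] = x[improved], c[improved]
          gbest = pbest[pcost.argmin()]

      print("recovered stiffnesses:", gbest)          # should approach [2.0, 1.5]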

  8. A novel DNA computing model based on RecA-mediated triple-stranded DNA structure

    Institute of Scientific and Technical Information of China (English)

    Fang Gang; Zhang Shemin; Dong Yafei; Xu Jin

    2007-01-01

    The field of DNA computing emerged in 1994 after Adleman's paper was published. Since then, a number of scholars have solved some noted NP-complete problems in this way. All these methods of DNA computing are based on the conventional Watson-Crick hydrogen bonding of the double-helical DNA molecule. In this paper, we show that the triple-stranded DNA structure mediated by RecA protein can be used for solving computational problems. Sequence-specific recognition of double-stranded DNA by oligonucleotide-directed triple helix (triplex) formation is used to carry out the algorithm. We present a procedure for the 3-vertex-colorability problem. Based on our proposed procedure, it is suggested that more complicated problems with more variables can be solved by this model.
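
    For contrast with the molecular procedure, here is the same decision problem, 3-vertex-colorability, brute-forced in silicon. The search is exponential in the number of vertices, which is exactly the cost that the massive parallelism of DNA computing is meant to sidestep; the example graph is arbitrary.

      from itertools import product

      # Brute-force 3-vertex-colorability check; exponential in vertex count.
      def three_colouring(n_vertices, edges):
          for colouring in product(range(3), repeat=n_vertices):
              if all(colouring[u] != colouring[v] for u, v in edges):
                  return colouring
          return None

      edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)]
      print(three_colouring(5, edges))   # a valid colouring, or None if none exists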

  9. Flow-Structure-Acoustic Interaction Computational Modeling of Voice Production inside an Entire Airway

    Science.gov (United States)

    Jiang, Weili; Zheng, Xudong; Xue, Qian

    2015-11-01

    Human voice quality is directly determined by the interplay of the dynamic behavior of the glottal flow, the vibratory characteristics of the vocal folds (VFs) and the acoustic characteristics of the upper airway. These multiphysics constituents are tightly coupled and precisely coordinated to produce understandable sound. Despite many years of research effort, the direct relationships among the detailed flow features, VF vibration and aeroacoustics still remain elusive. This study utilizes a first-principles-based, flow-structure-acoustics interaction computational modeling approach to study the process of voice production inside an entire human airway. In the current approach, a sharp-interface immersed boundary method based incompressible flow solver is utilized to model the glottal flow; a finite element based solid mechanics solver is utilized to model the vocal fold vibration; a high-order immersed boundary method based acoustics solver is utilized to directly compute sound. These three solvers are fully coupled to mimic the complex flow-structure-acoustic interaction during voice production. The geometry of the airway is reconstructed based on the in-vivo MRI measurement reported by Story et al. (1995), and a three-layer continuum based vocal fold model is taken from Titze and Talkin (1979). Results from these simulations will be presented and further analyzed to gain new insight into the complex flow-structure-acoustic interaction during voice production. This study is expected to improve the understanding of the fundamental physical mechanism of voice production and to help build a direct cause-effect relationship between biomechanics and voice sound.

  10. Computational Modelling of the Structural Integrity following Mass-Loss in Polymeric Charred Cellular Solids

    Directory of Open Access Journals (Sweden)

    J. P. M. Whitty

    2014-01-01

    A novel computational technique is presented for embedding mass-loss due to burning into the ANSYS finite element modelling code. The approach employs a range of computational modelling methods in order to provide a more complete theoretical treatment of thermoelasticity, absent from the literature for over six decades. Techniques are employed to evaluate structural integrity (namely, elastic moduli, Poisson's ratios, and compressive brittle strength) of honeycomb systems known to approximate three-dimensional cellular chars. That is, reducing the mass of diagonal ribs, or of diagonal-plus-vertical ribs simultaneously, shows rapid decreases in the structural integrity of both conventional and reentrant (auxetic, i.e., possessing a negative Poisson's ratio) honeycombs. On the other hand, reducing only the vertical ribs shows initially modest reductions in such properties, followed by catastrophic failure of the material system. Calculations of thermal stress distributions indicate that in all cases the total stress is reduced in reentrant (auxetic) cellular solids. This indicates that conventional cellular solids are expected to fail before their auxetic counterparts. Furthermore, both analytical and FE modelling predictions of the brittle crush strength of both auxetic and conventional cellular solids show a relationship with structural stiffness.
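
    For orientation, the classical Gibson-Ashby scalings for a regular hexagonal honeycomb (wall thickness t, wall length l) give the in-plane baseline that such rib-thinning calculations perturb; these are standard textbook results, not values from the paper:

      \frac{\rho^*}{\rho_s} \;=\; \frac{2}{\sqrt{3}}\,\frac{t}{l},
      \qquad
      \frac{E^*}{E_s} \;\approx\; 2.3\left(\frac{t}{l}\right)^{3},
      \qquad
      \nu^* \;\approx\; 1.

    The cubic dependence of the effective modulus on t/l suggests why burning mass off load-bearing ribs degrades stiffness so much faster than the linear loss of density alone would imply.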

  11. Computationally Efficient Modelling of Dynamic Soil-Structure Interaction of Offshore Wind Turbines on Gravity Footings

    DEFF Research Database (Denmark)

    Damgaard, Mads; Andersen, Lars Vabbersgaard; Ibsen, Lars Bo

    2014-01-01

    The formulation and quality of a computationally efficient model of offshore wind turbine surface foundations is examined. The aim is to establish a model, workable in the frequency and time domain, that can be applied in aeroelastic codes for fast and reliable evaluation of the dynamic structural response of wind turbines, in which the geometrical dissipation related to wave propagation into the subsoil is included. Based on the optimal order of a consistent lumped-parameter model obtained by the domain-transformation method and a weighted least-squares technique, the dynamic vibration response ... to waves propagating in the subsoil, even for soil stratifications with low cut-in frequencies. In this regard, utilising discrete second-order models for the physical interpretation of a rational filter puts special demands on the Newmark β-scheme, where the time integration in most cases only provides ...

  12. Mirrored Language Structure and Innate Logic of the Human Brain as a Computable Model of the Oracle Turing Machine

    CERN Document Server

    Wen, Han Xiao

    2010-01-01

    We wish to present a mirrored language structure (MLS) and four logic rules determined by this structure for the model of a computable Oracle Turing machine. MLS has novel features that are of considerable biological and computational significance. It suggests an algorithm of relation learning and recognition (RLR) that enables deterministic computers to simulate the mechanism of the Oracle Turing machine, or P = NP in mathematical terms.

  13. Discrete computational structures

    CERN Document Server

    Korfhage, Robert R

    1974-01-01

    Discrete Computational Structures describes discrete mathematical concepts that are important to computing, covering necessary mathematical fundamentals, computer representation of sets, graph theory, storage minimization, and bandwidth. The book also explains conceptual frameworks (Gorn trees, searching, subroutines) and directed graphs (flowcharts, critical paths, information networks). The text discusses algebra, particularly as it applies to computing, and concentrates on semigroups, groups, lattices, and propositional calculus, including a new tabular method of Boolean function minimization. The text emphasizes ...

  14. COMPUTER SIMULATION OF ANTIFERROMAGNETIC STRUCTURES DESCRIBED BY THE THREE-VERTEX ANTIFERROMAGNETIC POTTS MODEL

    Directory of Open Access Journals (Sweden)

    Yarash K. Abuev

    2017-01-01

    Abstract. Objectives: A computer simulation of the antiferromagnetic structures described by the three-vertex Potts model on a triangular lattice is performed, taking into account the antiferromagnetic exchange interactions between the nearest neighbours J1 and the second neighbours J2. The main goal of the computer simulation was to elucidate the effects of the ground state and areas of frustration on the thermodynamic and magnetic properties of antiferromagnetic structures described by the low-dimensional Potts model. Method: The computer simulation is based on the Monte Carlo method, implemented using the Metropolis algorithm in combination with the Wolff cluster algorithm. The computer simulation was carried out for low-dimensional systems with periodic boundary conditions and linear dimensions L = 24-124. Results: On the basis of heat-capacity and entropy analysis, phase transitions were observed in the considered model with exchange interaction parameters J1 < 0 and J2 < 0 in the variation intervals 0 ≤ r < 0.2 and 1.0 < r ... of the structure's ground state; frustrations are additionally observed in the interval under consideration. On the basis of the ...
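
    A bare-bones Metropolis sweep for this kind of model is sketched below: the three-state antiferromagnetic Potts model on an L x L triangular lattice with nearest-neighbour coupling only (the study above additionally includes J2 and a Wolff cluster update). The temperature and sweep count are illustrative.

      import numpy as np

      # Metropolis updates for the 3-state antiferromagnetic Potts model on a
      # triangular lattice; J2 and the Wolff cluster moves of the paper are omitted.
      rng = np.random.default_rng(3)
      L, J1, T = 24, -1.0, 0.5
      spins = rng.integers(0, 3, size=(L, L))

      def local_energy(s, i, j, q):
          # triangular lattice = square lattice plus one diagonal neighbour pair
          nbrs = [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L),
                  (i, (j - 1) % L), ((i + 1) % L, (j + 1) % L), ((i - 1) % L, (j - 1) % L)]
          return -J1 * sum(q == s[a, b] for a, b in nbrs)   # Potts delta coupling

      for sweep in range(200):
          for _ in range(L * L):
              i, j = rng.integers(0, L, size=2)
              new = (spins[i, j] + rng.integers(1, 3)) % 3
              dE = local_energy(spins, i, j, new) - local_energy(spins, i, j, spins[i, j])
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  spins[i, j] = new

      E_site = sum(local_energy(spins, i, j, spins[i, j])
                   for i in range(L) for j in range(L)) / (2 * L * L)
      print("energy per site:", E_site)   # 0 in a perfectly 3-coloured ground state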

  15. Mathematical structures for computer graphics

    CERN Document Server

    Janke, Steven J

    2014-01-01

    A comprehensive exploration of the mathematics behind the modeling and rendering of computer graphics scenes Mathematical Structures for Computer Graphics presents an accessible and intuitive approach to the mathematical ideas and techniques necessary for two- and three-dimensional computer graphics. Focusing on the significant mathematical results, the book establishes key algorithms used to build complex graphics scenes. Written for readers with various levels of mathematical background, the book develops a solid foundation for graphics techniques and fills in relevant grap

  16. A critical role for network structure in seizure onset: a computational modelling approach

    Directory of Open Access Journals (Sweden)

    George Petkov

    2014-12-01

    Recent clinical work has implicated network structure as critically important in the initiation of seizures in people with idiopathic generalized epilepsies. In line with this idea, functional networks derived from the electroencephalogram (EEG) at rest have been shown to be significantly different in people with generalized epilepsy compared to controls. In particular, the mean node degree of networks from the epilepsy cohort was found to be statistically significantly higher than that of controls. However, the mechanisms by which these network differences can support recurrent transitions into seizures remain unclear. In this study we use a computational model of the transition into seizure dynamics to explore the dynamic consequences of these differences in functional networks. We demonstrate that networks with higher mean node degree are more prone to generating seizure dynamics in the model, and we therefore suggest a mechanism by which increased mean node degree of brain networks can cause heightened ictogenicity.
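
    The shape of this experiment can be sketched with invented stand-ins for the paper's model: bistable node dynamics on random graphs of differing mean degree, where nodes that have escaped to the "seizure" well excite their neighbours. Only the comparison (mean degree versus time to network-wide escape) is the point; the double-well dynamics, coupling and noise below are not the paper's.

      import numpy as np

      # Bistable nodes on random graphs; escaped neighbours push a node out of
      # the background well. All dynamics and parameters are invented stand-ins.
      rng = np.random.default_rng(4)

      def random_graph(n, mean_degree):
          A = (rng.random((n, n)) < mean_degree / (n - 1)).astype(float)
          A = np.triu(A, 1)
          return A + A.T

      def mean_escape_time(A, k=0.2, noise=0.4, dt=0.01, tmax=100.0, trials=8):
          n, times = A.shape[0], []
          for _ in range(trials):
              x, t = -np.ones(n), 0.0                  # all nodes in background well
              while t < tmax and not np.all(x > 0.0):
                  drive = k * A @ (x > 0.0)            # excitation from escaped nodes
                  x += dt * (x - x ** 3 + drive) + noise * np.sqrt(dt) * rng.standard_normal(n)
                  t += dt
              times.append(t)
          return np.mean(times)

      for md in (2, 6):
          print(f"mean degree {md}: escape time ~ {mean_escape_time(random_graph(24, md)):.1f}")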

  17. Methods, Computational Platform, Verification, and Application of Earthquake-Soil-Structure-Interaction Modeling and Simulation

    Science.gov (United States)

    Tafazzoli, Nima

    Seismic response of soil-structure systems has attracted significant attention for a long time, which is quite understandable given the size and complexity of such systems. Three important aspects of earthquake-soil-structure interaction (ESSI) modeling are the consistent following of input seismic energy and of the energy dissipation mechanisms within the system, the numerical techniques used to simulate the dynamics of ESSI, and the influence of uncertainty on ESSI simulations. This dissertation is a contribution to the development of one such tool, called the ESSI Simulator, and to an extensive verification and validation suite for it. Verification and validation are important for high-fidelity numerical predictions of the behavior of complex systems. The simulator uses the finite element method as a numerical tool to obtain solutions for a large class of engineering problems such as liquefaction, earthquake-soil-structure interaction, site effects, piles, pile groups, probabilistic plasticity, stochastic elastic-plastic FEM, and detailed large-scale parallel models. The response of full three-dimensional soil-structure-interaction simulations of complex structures is evaluated under 3D wave propagation. The Domain-Reduction-Method is used for applying the forces as a two-step procedure for dynamic analysis, with the goal of reducing the large computational domain. The issue of damping of the waves at the boundary of the finite element models is studied using different damping patterns; these are used in the layer of elements outside of the Domain-Reduction-Method zone in order to absorb the residual waves coming out of the boundary layer due to structural excitation. An extensive parametric study is performed on the dynamic soil-structure interaction of a complex system, and results of different cases in terms of soil strength and foundation embedment are compared. A set of constitutive models with high efficiency in terms of computational time is developed and implemented in the ESSI Simulator.

  18. Computational Benefits Using an Advanced Concatenation Scheme Based on Reduced Order Models for RF Structures

    CERN Document Server

    Heller, Johann; Van Rienen, Ursula (DOI: 10.1016/j.phpro.2015.11.060)

    2015-01-01

    The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.

  19. Computational modelling of the mechanics of trabecular bone and marrow using fluid structure interaction techniques.

    Science.gov (United States)

    Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E

    2013-04-01

    Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs; however, a particular challenge is that marrow displays the characteristics of a fluid while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid-structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational-fluid-dynamics-only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. The results show that shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.

  20. A computational model of the hippocampus that represents environmental structure and goal location, and guides movement.

    Science.gov (United States)

    Matsumoto, Jumpei; Makino, Yoshinari; Miura, Haruki; Yano, Masafumi

    2011-08-01

    Hippocampal place cells (PCs) are believed to represent environmental structure. However, it is unclear how and in which brain regions goals are represented and movements guided. Recently, another type of cell that fires around a goal was found in the rat hippocampus (we designate these cells goal place cells, GPCs). This suggests that the hippocampus is also involved in goal representation. Assuming that the activities of GPCs depend on the distance to a goal, we propose an adaptive navigation model. By monitoring the population activity of GPCs, the model navigates so as to shorten the distance to the goal. To achieve the distance-dependent activities of GPCs, plastic connections are assumed between PCs and GPCs, which are modified depending on two reward-triggered activities: activity propagation through the PC-PC network representing the topological structure of the environment, and the activity of GPCs with different durations. The former activity propagation is regarded as a computational interpretation of the "reverse replay" phenomenon found in the rat hippocampus. Simulation results confirm that after reaching a goal only once, the model can navigate to the goal along almost the shortest path from arbitrary places in the environment. This indicates that the hippocampus might play a primary role in representing not only the environmental structure but also the goal, in addition to guiding movement. This navigation strategy using the population activity of GPCs is equivalent to the taxis strategy, the simplest and most basic strategy for biological systems. Our model is unique because this simple strategy allows it to follow the shortest path in the topological map of the environment.
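
    A toy version of the taxis strategy described here: the agent probes candidate headings and steps in whichever direction most increases the summed activity of goal place cells, modelled as Gaussian functions of distance to centres clustered near the goal. The centres, widths, step size and stopping radius are invented for illustration.

      import numpy as np

      # Taxis navigation on summed goal-place-cell activity; parameters invented.
      rng = np.random.default_rng(5)
      goal = np.array([0.8, 0.6])
      centres = goal + 0.05 * rng.standard_normal((20, 2))   # GPC field centres

      def gpc_activity(pos, width=0.3):
          d2 = np.sum((centres - pos) ** 2, axis=1)
          return np.sum(np.exp(-d2 / (2.0 * width ** 2)))

      pos, step = np.array([0.0, 0.0]), 0.02
      for _ in range(200):
          headings = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
          probes = pos + step * np.stack([np.cos(headings), np.sin(headings)], axis=1)
          pos = probes[np.argmax([gpc_activity(p) for p in probes])]
          if np.linalg.norm(pos - goal) < 0.05:
              break

      print("final distance to goal:", np.linalg.norm(pos - goal))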

  1. Computation and Spacetime Structure

    CERN Document Server

    Stannett, Mike

    2011-01-01

    We investigate the relationship between computation and spacetime structure, focussing on the role of closed timelike curves (CTCs) in promoting computational speedup. We note first that CTC traversal can be interpreted in two distinct ways, depending on one's understanding of spacetime. Focussing on one interpretation leads us to develop a toy universe in which no CTC can be traversed more than once, whence no computational speedup is possible. Focussing on the second (and more standard) interpretation leads to the surprising conclusion that CTCs act as perfect information repositories: just as black holes have entropy, so do CTCs. If we also assume that P is not equal to NP, we find that all observers agree that, even if unbounded time travel existed in their youth, this capability eventually vanishes as they grow older. Thus the computational assumption "P is not NP" is also an assumption concerning cosmological structure.

  2. 3D modeling method for computer animate based on modified weak structured light method

    Science.gov (United States)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such an optical measurement device is too expensive to be widely adopted; on the other hand, precision is not as critical a factor in that situation. In this paper, a new cheap 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. For an ordinary weak structured light configuration, one or two reference planes are required, and the shadows on these planes must be tracked in the scanning process, which compromises the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is greatly expanded. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints to obtain a full description of the object, and after a series of operations a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, and errors range from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.
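
    The merging step rests on rigid alignment; below is one iteration of point-to-point ICP (nearest-neighbour matching followed by a closed-form SVD/Kabsch solve), a minimal sketch only. A real merge needs outlier rejection and the paper's two-stage schedule, and the test cloud here is synthetic.

      import numpy as np

      # One point-to-point ICP iteration: brute-force nearest neighbours, then the
      # optimal rigid transform via SVD (Kabsch). Synthetic data for illustration.
      def icp_step(src, dst):
          d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)     # NN distances
          matched = dst[d2.argmin(axis=1)]
          mu_s, mu_d = src.mean(0), matched.mean(0)
          H = (src - mu_s).T @ (matched - mu_d)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
          R = Vt.T @ D @ U.T
          t = mu_d - R @ mu_s
          return src @ R.T + t

      rng = np.random.default_rng(6)
      cloud = rng.random((100, 3))
      a = 0.1                                          # small test rotation [rad]
      Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])
      moved = cloud @ Rz.T + np.array([0.05, -0.02, 0.01])
      aligned = icp_step(cloud, moved)
      print("rms after one ICP step:", np.sqrt(((aligned - moved) ** 2).sum(1).mean()))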

  3. Computational and theoretical modeling of intermediate filament networks: Structure, mechanics and disease

    Institute of Scientific and Technical Information of China (English)

    Zhao Qin; Markus J. Buehler

    2012-01-01

    Intermediate filaments, in addition to microtubules and actin microfilaments, are one of the three major components of the cytoskeleton in eukaryotic cells. It was discovered during recent decades that in most cells, intermediate filament proteins play key roles in reinforcing cells subjected to large deformation, and that they participate in signal transduction; it was proposed that their nanomechanical properties are critical to performing those functions. However, it is still poorly understood how the nanoscopic structure, as well as the combination of chemical composition, molecular structure and interfacial properties of these protein molecules, contributes to the biomechanical properties of filaments and filament networks. Here we review recent progress in computational and theoretical studies of intermediate filament networks at various levels of the protein's structure. A multiple-scale method is discussed, used to couple molecular modeling with atomistic detail to larger-scale material properties of the networked material. It is shown that a finer-trains-coarser methodology, as discussed here, provides a useful tool for understanding the biomechanical properties and disease mechanisms of intermediate filaments, coupling experiment and simulation. It further allows us to improve the understanding of associated disease mechanisms and lays the foundation for engineering the mechanical properties of biomaterials.

  4. Absorbed dose evaluation based on a computational voxel model incorporating distinct cerebral structures

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Samia de Freitas; Trindade, Bruno; Campos, Tarcisio P.R. [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)]. E-mail: samiabrandao@gmail.com; bmtrindade@yahoo.com; campos@nuclear.ufmg.br

    2007-07-01

    Brain tumors are quite difficult to treat due to the collateral radiation damage produced in patients. Despite the improvements in the therapeutic protocols for this kind of tumor, involving surgery and radiotherapy, the failure rate is still extremely high. This occurs because tumors often cannot be totally removed by surgery, since that may produce some type of deficit in cerebral function. Radiotherapy is applied after surgery, and both are palliative treatments. During radiotherapy the brain does not absorb the radiation dose in a homogeneous way, because of the varying density and chemical composition of the tissues involved. With the intention of better evaluating the harmful effects caused by radiotherapy, a detailed cerebral voxel model was developed for use in computational simulation of irradiation protocols for brain tumors. This paper presents some structures and functions of the central nervous system and a detailed cerebral voxel model, created in the SISCODES program, considering meninges, cortex, gray matter, white matter, corpus callosum, limbic system, ventricles, hypophysis, cerebellum, brain stem and spinal cord. The irradiation protocol simulation was run in the MCNP5 code. The model was irradiated with a photon beam whose spectrum simulates a 6 MV linear accelerator. The dosimetric results were exported to SISCODES, which generated the isodose curves for the protocol. The percentage isodose curves in the brain are presented in this paper. (author)

  5. Interactive effects of explicit emergent structure: a major challenge for cognitive computational modeling.

    Science.gov (United States)

    French, Robert M; Thomas, Elizabeth

    2015-04-01

    David Marr's (1982) three-level analysis of computational cognition argues for three distinct levels of cognitive information processing, namely the computational, representational, and implementational levels. But Marr's levels are, and were meant to be, descriptive, rather than interactive and dynamic. For this reason, we suggest that, had Marr been writing today, he might well have gone even farther in his analysis, including the emergence of structure (in particular, explicit structure at the conceptual level) from lower levels, and the effect of explicit emergent structures on the level (or levels) that gave rise to them. The message is that today's cognitive scientists need not only to understand how emergent structures (in particular, explicit emergent structures at the cognitive level) develop but also to understand how they feed back on the sub-structures from which they emerged.

  6. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo eLee

    2013-12-01

    We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.

  7. Computationally modeling interpersonal trust.

    Science.gov (United States)

    Lee, Jin Joo; Knox, W Bradley; Wormwood, Jolie B; Breazeal, Cynthia; Desteno, David

    2013-01-01

    We present a computational model capable of predicting, above human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
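
    As a concrete illustration of the modeling step described above, the sketch below trains one Gaussian hidden Markov model per trust level on sequences of nonverbal-cue features and classifies a new interaction by log-likelihood. It is a minimal sketch using the hmmlearn library, not the authors' implementation; the cue features and the synthetic data are hypothetical stand-ins.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        # One HMM per trust level; classify a new interaction by log-likelihood.
        # The per-frame "nonverbal cue" feature vectors here are synthetic stand-ins.
        def train_level_model(sequences, n_states=3):
            X = np.vstack(sequences)                 # stack all interactions
            lengths = [len(s) for s in sequences]    # frame count per interaction
            model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            return model

        def predict_trust_level(models, sequence):
            # Pick the trust level whose HMM assigns the sequence the highest likelihood.
            return max(models, key=lambda level: models[level].score(sequence))

        rng = np.random.default_rng(0)
        low = [rng.normal(0.0, 1.0, size=(40, 2)) for _ in range(20)]
        high = [rng.normal(0.8, 1.0, size=(40, 2)) for _ in range(20)]
        models = {"low": train_level_model(low), "high": train_level_model(high)}
        print(predict_trust_level(models, rng.normal(0.8, 1.0, size=(40, 2))))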

  8. The application of data mining and cloud computing techniques in data-driven models for structural health monitoring

    Science.gov (United States)

    Khazaeli, S.; Ravandi, A. G.; Banerji, S.; Bagchi, A.

    2016-04-01

    Recently, data-driven models for Structural Health Monitoring (SHM) have been of great interest among many researchers. In data-driven models, the sensed data are processed to determine the structural performance and evaluate the damage of an instrumented structure without necessitating mathematical modeling of the structure. A framework of data-driven models for online assessment of the condition of a structure has been developed here. The developed framework is intended for automated evaluation of the monitoring data and structural performance using Internet technology and resources. The main challenges in developing such a framework include: (a) utilizing the sensor measurements to estimate and localize the induced damage in a structure by means of signal processing and data mining techniques, and (b) optimizing the computing and storage resources with the aid of cloud services. The main focus in this paper is to demonstrate the efficiency of the proposed framework for real-time damage detection of a multi-story shear-building structure in two damage scenarios (change in mass and stiffness) in various locations. Several features are extracted from the sensed data by signal processing techniques and statistical methods. Machine learning algorithms are deployed to select damage-sensitive features as well as to classify the data to trace anomalies in the response of the structure. Here, the cloud computing resources from Amazon Web Services (AWS) have been used to implement the proposed framework.
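
    The pipeline described above (feature extraction, damage-sensitive feature selection, classification) can be sketched as follows. This is a minimal illustration with scikit-learn, not the paper's framework: the spectral_features function and the synthetic healthy/damaged signals are hypothetical placeholders for real sensor data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.pipeline import make_pipeline

        def spectral_features(signal, n_bins=16):
            # Hypothetical damage-sensitive features: binned FFT magnitudes of one channel.
            mag = np.abs(np.fft.rfft(signal))
            return np.array([b.mean() for b in np.array_split(mag, n_bins)])

        rng = np.random.default_rng(1)
        healthy = np.array([spectral_features(rng.normal(size=1024)) for _ in range(100)])
        damaged = np.array([spectral_features(rng.normal(size=1024) * 1.3) for _ in range(100)])
        X = np.vstack([healthy, damaged])
        y = np.array([0] * 100 + [1] * 100)

        # Select the most damage-sensitive features, then classify healthy vs. damaged.
        clf = make_pipeline(SelectKBest(f_classif, k=8), RandomForestClassifier(n_estimators=100))
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))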

  9. Development and Modelling of High-Efficiency Computing Structure for Digital Signal Processing

    CERN Document Server

    Sharma, Annapurna; Lee, Hoon Jae

    2011-01-01

    The paper is devoted to the problem of spline approximation. A new method of node location for the computer construction of curves and surfaces by means of B-splines is presented, together with results of Simulink modeling. The advantages of the approach are shown by comparing the basic spline with classical polynomials in terms of both accuracy and the degree to which the calculations can be parallelized.
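
    As a brief illustration of the comparison the abstract describes, the sketch below fits a cubic B-spline to sampled data with SciPy (which chooses the knot locations) and compares its accuracy against a classical polynomial fit. It is an assumed minimal example, not the paper's node-location method.

        import numpy as np
        from scipy.interpolate import splrep, splev

        # Sample a curve and fit a cubic B-spline; knot placement is chosen by splrep.
        x = np.linspace(0, 2 * np.pi, 50)
        y = np.sin(x)
        tck = splrep(x, y, k=3, s=0.001)     # (knots, coefficients, degree)

        x_fine = np.linspace(0, 2 * np.pi, 500)
        y_spline = splev(x_fine, tck)

        # Compare with a classical polynomial fit of modest degree.
        poly = np.polynomial.Polynomial.fit(x, y, deg=7)
        print("spline max error:", np.max(np.abs(y_spline - np.sin(x_fine))))
        print("poly   max error:", np.max(np.abs(poly(x_fine) - np.sin(x_fine))))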

  10. Infinite possibilities: Computational structures technology

    Science.gov (United States)

    Beam, Sherilee F.

    1994-01-01

    Computational Fluid Dynamics (or CFD) methods are very familiar to the research community. Even the general public has had some exposure to CFD images, primarily through the news media. However, very little attention has been paid to CST--Computational Structures Technology. Yet, no important design can be completed without it. During the first half of this century, researchers only dreamed of designing and building structures on a computer. Today their dreams have become practical realities as computational methods are used in all phases of design, fabrication and testing of engineering systems. Increasingly complex structures can now be built in even shorter periods of time. Over the past four decades, computer technology has been developing, and early finite element methods have grown from small in-house programs to numerous commercial software programs. When coupled with advanced computing systems, they help engineers make dramatic leaps in designing and testing concepts. The goals of CST include: predicting how a structure will behave under actual operating conditions; designing and complementing other experiments conducted on a structure; investigating microstructural damage or chaotic, unpredictable behavior; helping material developers in improving material systems; and being a useful tool in design systems optimization and sensitivity techniques. Applying CST to a structure problem requires five steps: (1) observe the specific problem; (2) develop a computational model for numerical simulation; (3) develop and assemble software and hardware for running the codes; (4) post-process and interpret the results; and (5) use the model to analyze and design the actual structure. Researchers in both industry and academia continue to make significant contributions to advance this technology with improvements in software, collaborative computing environments and supercomputing systems. As these environments and systems evolve, computational structures technology will

  11. Computation of External Quality Factors for RF Structures by Means of Model Order Reduction and a Perturbation Approach

    CERN Document Server

    Flisgen, Thomas; van Rienen, Ursula

    2016-01-01

    External quality factors are significant quantities to describe losses via waveguide ports in radio frequency resonators. The current contribution presents a novel approach to determine external quality factors by means of a two-step procedure: First, a state-space model for the lossless radio frequency structure is generated and its model order is reduced. Subsequently, a perturbation method is applied on the reduced model so that external losses are accounted for. The advantage of this approach results from the fact that the challenges in dealing with lossy systems are shifted to the reduced order model. This significantly saves computational costs. The present paper provides a short overview on existing methods to compute external quality factors. Then, the novel approach is introduced and validated in terms of accuracy and computational time by means of commercial software.

  12. Computation of optimal unstable structures for a numerical weather prediction model

    Science.gov (United States)

    Buizza, R.; Tribbia, J.; Molteni, F.; Palmer, T.

    1993-10-01

    Numerical experiments have been performed to compute the fastest growing perturbations in a finite time interval for a complex numerical weather prediction model. The models used are the tangent forward and adjoint versions of the adiabatic primitive-equation model of the Integrated Forecasting System developed at the European Centre for Medium-Range Weather Forecasts and Météo France. These have been run with a horizontal truncation T21, with 19 vertical levels. The fastest growing perturbations are the singular vectors of the propagator of the forward tangent model with the largest singular values. An iterative Lanczos algorithm has been used for the numerical computation of the perturbations. The sensitivity of the calculations to different time intervals and to the norm used in the definition of the adjoint model has been analysed. The impact of normal mode initialization has also been studied. Two classes of fastest growing perturbations have been found; one is characterized by a maximum amplitude in the middle troposphere, while the other is confined to model layers close to the surface. It is shown that the latter is damped by the boundary layer physics in the full model. The linear evolution of the perturbations has been compared to the non-linear evolution when the perturbations are superimposed on a basic state in the T63, 19-level version of the ECMWF model.
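
    The computation described above, finding the leading singular vectors of the tangent-linear propagator with a Lanczos-type iteration, can be sketched in a few lines. Here a random matrix stands in for the propagator, and SciPy's iterative SVD plays the role of the Lanczos algorithm; only matrix-vector products with the forward and adjoint operators are needed, mirroring how the tangent and adjoint models are used.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, svds

        # Toy stand-in for the tangent-linear propagator M and its adjoint M^T:
        # in practice these are forward-tangent and adjoint model integrations.
        n = 200
        rng = np.random.default_rng(2)
        M = rng.normal(size=(n, n)) / np.sqrt(n)

        prop = LinearOperator((n, n), matvec=lambda x: M @ x,
                              rmatvec=lambda y: M.T @ y, dtype=np.float64)

        # Leading singular triplets = fastest-growing perturbations over the interval.
        u, s, vt = svds(prop, k=3)
        order = np.argsort(s)[::-1]
        print("growth factors:", s[order])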

  13. Infinite possibilities: Computational structures technology

    Science.gov (United States)

    Beam, Sherilee F.

    1994-12-01

    Computational Fluid Dynamics (or CFD) methods are very familiar to the research community. Even the general public has had some exposure to CFD images, primarily through the news media. However, very little attention has been paid to CST--Computational Structures Technology. Yet, no important design can be completed without it. During the first half of this century, researchers only dreamed of designing and building structures on a computer. Today their dreams have become practical realities as computational methods are used in all phases of design, fabrication and testing of engineering systems. Increasingly complex structures can now be built in even shorter periods of time. Over the past four decades, computer technology has been developing, and early finite element methods have grown from small in-house programs to numerous commercial software programs. When coupled with advanced computing systems, they help engineers make dramatic leaps in designing and testing concepts. The goals of CST include: predicting how a structure will behave under actual operating conditions; designing and complementing other experiments conducted on a structure; investigating microstructural damage or chaotic, unpredictable behavior; helping material developers in improving material systems; and being a useful tool in design systems optimization and sensitivity techniques. Applying CST to a structure problem requires five steps: (1) observe the specific problem; (2) develop a computational model for numerical simulation; (3) develop and assemble software and hardware for running the codes; (4) post-process and interpret the results; and (5) use the model to analyze and design the actual structure. Researchers in both industry and academia continue to make significant contributions to advance this technology with improvements in software, collaborative computing environments and supercomputing systems. As these environments and systems evolve, computational structures technology will

  14. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology.

  15. Potts model based on a Markov process computation solves the community structure problem effectively.

    Science.gov (United States)

    Li, Hui-Jia; Wang, Yong; Wu, Ling-Yun; Zhang, Junhua; Zhang, Xiang-Sun

    2012-07-01

    The Potts model is a powerful tool to uncover community structure in complex networks. Here, we propose a framework to reveal the optimal number of communities and stability of network structure by quantitatively analyzing the dynamics of the Potts model. Specifically we model the community structure detection Potts procedure by a Markov process, which has a clear mathematical explanation. Then we show that the local uniform behavior of spin values across multiple timescales in the representation of the Markov variables could naturally reveal the network's hierarchical community structure. In addition, critical topological information regarding multivariate spin configuration could also be inferred from the spectral signatures of the Markov process. Finally an algorithm is developed to determine fuzzy communities based on the optimal number of communities and the stability across multiple timescales. The effectiveness and efficiency of our algorithm are theoretically analyzed as well as experimentally validated.
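
    A minimal sketch of the underlying idea, reading the number of metastable communities off the spectrum of a random-walk (Markov) transition matrix, is given below. It illustrates the spectral-signature argument only, not the authors' algorithm; the two-block toy graph is a hypothetical example.

        import numpy as np

        # Two dense blocks weakly linked: a toy network with known community structure.
        rng = np.random.default_rng(3)
        n = 20
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                same_block = (i < n // 2) == (j < n // 2)
                if rng.random() < (0.8 if same_block else 0.05):
                    A[i, j] = A[j, i] = 1.0

        # Random-walk (Markov) transition matrix; assumes no isolated nodes.
        P = A / A.sum(axis=1, keepdims=True)
        eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]

        # A large gap after the k-th eigenvalue signals k communities that remain
        # metastable across multiple timescales of the Markov process.
        gaps = eigvals[:-1] - eigvals[1:]
        print("suggested number of communities:", int(np.argmax(gaps[:5]) + 1))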

  16. Potts model based on a Markov process computation solves the community structure problem effectively

    CERN Document Server

    Li, Hui-Jia; Wu, Ling-Yun; Zhang, Junhua; Zhang, Xiang-Sun

    2015-01-01

    The Potts model is a powerful tool to uncover community structure in complex networks. Here, we propose a new framework to reveal the optimal number of communities and the stability of network structure by quantitatively analyzing the dynamics of the Potts model. Specifically, we model the community structure detection Potts procedure by a Markov process, which has a clear mathematical explanation. Then we show that the local uniform behavior of spin values across multiple timescales in the representation of the Markov variables could naturally reveal the network's hierarchical community structure. In addition, critical topological information regarding multivariate spin configuration could also be inferred from the spectral signatures of the Markov process. Finally an algorithm is developed to determine fuzzy communities based on the optimal number of communities and the stability across multiple timescales. The effectiveness and efficiency of our algorithm are theoretically analyzed as well as experimentally validate...

  17. Direct methods for limit and shakedown analysis of structures advanced computational algorithms and material modelling

    CERN Document Server

    Pisano, Aurora; Weichert, Dieter

    2015-01-01

    Articles in this book examine various materials and how to determine directly the limit state of a structure, in the sense of limit analysis and shakedown analysis. Apart from classical applications in mechanical and civil engineering contexts, the book reports on the emerging field of material design beyond the elastic limit, which has further industrial design and technological applications. Readers will discover that “Direct Methods” and the techniques presented here can in fact be used to numerically estimate the strength of structured materials such as composites or nano-materials, which represent fruitful fields of future applications.   Leading researchers outline the latest computational tools and optimization techniques and explore the possibility of obtaining information on the limit state of a structure whose post-elastic loading path and constitutive behavior are not well defined or well known. Readers will discover how Direct Methods allow rapid and direct access to requested information in...

  18. Structural models of randomly packed Tobermorite-like spherical particles: A simple computational approach

    Directory of Open Access Journals (Sweden)

    González-Teresa, R.

    2010-06-01

    In this work, and in order to bring together the atomistic and colloidal viewpoints, we will present a Monte Carlo computational scheme which reproduces the colloidal packing of nano-spherical crystalline tobermorite-like particles. Different Low Density (LD) C-S-H and High Density (HD) C-S-H structures will be developed just by varying the computational packing parameters. Finally, the structures resulting from our computational experiments will be analyzed in terms of their densities, surface areas and their mechanical properties.

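    A minimal stand-in for the kind of Monte Carlo packing scheme described above is random sequential addition of equal spheres in a periodic box, sketched below. The radii, box size and acceptance rule are illustrative assumptions, not the parameters of the paper's scheme.

        import numpy as np

        # Random sequential addition of equal spheres in a periodic unit box.
        def pack_spheres(n_target, radius, box=1.0, max_tries=200_000, seed=4):
            rng = np.random.default_rng(seed)
            centers = []
            for _ in range(max_tries):
                c = rng.random(3) * box
                if centers:
                    d = np.array(centers) - c
                    d -= box * np.round(d / box)      # minimum-image convention
                    if (np.linalg.norm(d, axis=1) < 2 * radius).any():
                        continue                       # overlap: reject trial position
                centers.append(c)
                if len(centers) == n_target:
                    break
            return np.array(centers)

        r = 0.05
        centers = pack_spheres(200, r)
        solid_fraction = len(centers) * (4 / 3) * np.pi * r**3
        print(len(centers), "spheres placed, solid fraction ~", round(solid_fraction, 3))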

  19. Numerical modeling of inelastic structures at loading of steady state rolling. Thermo-mechanical asphalt pavement computation

    Science.gov (United States)

    Wollny, Ines; Hartung, Felix; Kaliske, Michael

    2016-05-01

    In order to gain a deeper knowledge of the interactions in the coupled tire-pavement system, e.g. for the future design of durable pavement structures, the paper presents recent results of research in the field of theoretical-numerical asphalt pavement modeling at the material and structural levels, whereby the focus is on a realistic and numerically efficient computation of pavements under rolling tire load by using the finite element method based on an Arbitrary Lagrangian Eulerian (ALE) formulation. Inelastic material descriptions are included in the ALE frame efficiently by a recently developed unsplit history update procedure. Also new is the implementation of a viscoelastic cohesive zone model into the ALE pavement formulation to describe the interaction of the single pavement layers. The viscoelastic cohesive zone model is further extended to account for the normal-pressure-dependent shear behavior of the bonding layer. Another novelty is that thermo-mechanical effects are taken into account by coupling the mechanical ALE pavement computation to a transient thermal computation of the pavement cross-section to obtain the varying temperature distributions of the pavement due to climatic impact. Each ALE pavement simulation then considers the temperature-dependent asphalt material model, which includes elastic, viscous and plastic behavior at finite strains, and the temperature-dependent viscoelastic cohesive zone formulation. The temperature-dependent material parameters of the asphalt layers and the interfacial layers are fitted to experimental data. Results of coupled tire-pavement computations are presented to demonstrate potential fields of application.

  20. Computational protein structure modeling and analysis of UV-B stress protein in Synechocystis PCC 6803.

    Science.gov (United States)

    Rahman, Md Akhlaqur; Chaturvedi, Navaneet; Sinha, Sukrat; Pandey, Paras Nath; Gupta, Dwijendra Kumar; Sundaram, Shanthy; Tripathi, Ashutosh

    2013-01-01

    This study focuses on the ultraviolet stress (UVS) gene product, a UV-stress-induced protein from the cyanobacterium Synechocystis PCC 6803. Three-dimensional structural modeling of the target UVS protein was carried out by the homology modeling method. The PDB structure 3F2I from Nostoc sp. PCC 7120 was selected as a suitable template. Finally, the detection of active binding regions was carried out to characterize functional sites in the modeled UV-B stress protein. The top five probable ligand binding sites were predicted, and the common binding residues between the target and template proteins were analyzed. It was validated for the first time that the modeled UVS protein structure from Synechocystis PCC 6803 is structurally and functionally similar to the well characterized UVS protein of another cyanobacterial species, Nostoc sp. PCC 7120, having the same structural motif and fold with similar protein topology and function. The investigations revealed that the UVS protein from Synechocystis sp. might play a significant role in ultraviolet resistance. Thus, it could be a potential biological source for remediation of UV-induced stress.

  1. Computational Material Modeling of Hydrated Cement Paste Calcium Silicate Hydrate (C-S-H) Chemistry Structure - Influence of Magnesium Exchange on Mechanical Stiffness: C-S-H Jennite

    Science.gov (United States)

    2015-04-27

    The hydrated cement paste C-S-H material chemistry structure is studied following a molecular dynamics (MD) computational modeling methodology. Calcium ions are replaced with ... chemistry structure. Conference Name: 1st Pan-American Conference on Computational Mechanics. Conference Date: April 27, 2015. Material chemistry level modeling follows the principles and techniques commonly grouped under Computational Material Science.

  2. Computing arbitrage-free yields in multi-factor Gaussian shadow-rate term structure models

    OpenAIRE

    Marcel A. Priebsch

    2013-01-01

    This paper develops a method to approximate arbitrage-free bond yields within a term structure model in which the short rate follows a Gaussian process censored at zero (a "shadow-rate model" as proposed by Black, 1995). The censoring ensures that model-implied yields are constrained to be positive, but it also introduces non-linearity that renders standard bond pricing formulas inapplicable. In particular, yields are not linear functions of the underlying state vector as they are in affine term structure models.
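
    For orientation, the sketch below prices a zero-coupon bond in a one-factor Gaussian shadow-rate model by brute-force Monte Carlo, censoring the short rate at zero as in Black (1995). This is the slow baseline that an approximation method of the kind the paper develops is designed to avoid; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        kappa, theta, sigma = 0.3, 0.02, 0.01   # OU mean reversion, long-run mean, volatility
        s0, T, n_steps, n_paths = -0.005, 5.0, 60, 20000
        dt = T / n_steps

        s = np.full(n_paths, s0)                # shadow rate (may go negative)
        integral = np.zeros(n_paths)
        for _ in range(n_steps):
            r = np.maximum(s, 0.0)              # observed short rate = max(shadow rate, 0)
            integral += r * dt
            s += kappa * (theta - s) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

        price = np.exp(-integral).mean()        # discounted payoff of a unit zero-coupon bond
        print(f"5y zero yield: {-np.log(price) / T:.4%}")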

  3. QUBIT DATA STRUCTURES FOR ANALYZING COMPUTING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Vladimir Hahanov

    2014-11-01

    Qubit models and methods for improving the performance of software and hardware for analyzing digital devices through increasing the dimension of the data structures and memory are proposed. The basic concepts, terminology and definitions necessary for the implementation of quantum computing in the analysis of virtual computers are introduced. Investigation results concerning the design and modeling of computer systems in cyberspace, based on the use of a two-component structure, are presented.

  4. Computational modeling of the electromagnetic characteristics of carbon fiber-reinforced polymer composites with different weave structures

    Science.gov (United States)

    Hassan, A. M.; Douglas, J. F.; Garboczi, E. J.

    2014-02-01

    Carbon fiber reinforced polymer composites (CFRPC) are of great interest in the aerospace and automotive industries due to their exceptional mechanical properties. Carbon fibers are typically woven and interlaced perpendicularly in warps and wefts to form a carbon fabric that can be embedded in a binding matrix. The warps and wefts can be interlaced in different patterns called weaving structures. The primary weaving structures are the plain, twill, and satin weaves, which give different mechanical composite properties. The goal of this work is to computationally investigate the dependence of CFRPC microwave and terahertz electromagnetic characteristics on weave structure. These bands are good candidates for the Nondestructive Evaluation (NDE) of CFRPC since their wavelengths are comparable to the main weave features. 3D full-wave electromagnetic simulations of several different weave models have been performed using a finite element method (FEM) simulator, which is able to accurately model the complex weave structure. The computational experiments demonstrate that the reflection of electromagnetic waves from CFRPC depends sensitively on weave structure. The reflection spectra calculated in this work can be used to identify the optimal frequencies for the NDE of each weave structure.

  5. A combined computational and structural model of the full-length human prolactin receptor

    DEFF Research Database (Denmark)

    Bugge, Katrine Østergaard; Papaleo, Elena; Haxholm, Gitte Wolfsberg

    2016-01-01

    ...small-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg.

  6. The role of fine-scale anatomical structure in the dynamics of reentry in computational models of the rabbit ventricles.

    Science.gov (United States)

    Bishop, Martin J; Plank, Gernot

    2012-09-15

    Fine-scale anatomical structures in the heart may play an important role in sustaining cardiac arrhythmias. However, the extent of this role and how it may differ between species are not fully understood. In this study we used computational modelling to assess the impact of anatomy upon arrhythmia maintenance in the rabbit ventricles. Specifically, we quantified the dynamics of excitation wavefronts during episodes of simulated tachyarrhythmias and fibrillatory arrhythmias, defined as being respectively characterised by relatively low and high spatio-temporal disorganisation. Two computational models were used: a highly anatomically detailed MR-derived rabbit ventricular model (representing vasculature, endocardial structures) and a simplified equivalent model, constructed from the same MR data but lacking such fine-scale anatomical features. During tachyarrhythmias, anatomically complex and simplified models showed very similar dynamics; however, during fibrillatory arrhythmias, as activation wavelength decreased, the presence of fine-scale anatomical details appeared to marginally increase disorganisation of wavefronts during arrhythmias in the complex model. Although a small amount of clustering of reentrant rotor centres (filaments) around endocardial structures was witnessed in follow-up analysis (which slightly increased during fibrillation as rotor size decreased), this was significantly less than previously reported in large animals. Importantly, no anchoring of reentrant rotors was visibly identifiable in arrhythmia movies. These differences between tachy- and fibrillatory arrhythmias suggest that the relative size of reentrant rotors with respect to anatomical obstacles governs the influence of fine-scale anatomy in the maintenance of ventricular arrhythmias in the rabbit. In conclusion, our simulations suggest that fine-scale anatomical features play little apparent role in the maintenance of tachyarrhythmias in the rabbit ventricles and, contrary to

  7. Advances in the Development and Application of Computational Methodologies for Structural Modeling of G-Protein Coupled Receptors

    Science.gov (United States)

    Mobarec, Juan Carlos

    2009-01-01

    Background: Despite the large amount of experimental data accumulated in the past decade on G-protein coupled receptor (GPCR) structure and function, understanding of the molecular mechanisms underlying GPCR signaling is still far from complete, thus impairing the design of effective and selective pharmaceuticals. Objective: Understanding of GPCR function has been challenged even further by more recent experimental evidence that several of these receptors are organized in the cell membrane as homo- or hetero-oligomers, and that they may exhibit unique pharmacological properties. Given the complexity of these new signaling systems, researchers' efforts are turning increasingly to molecular modeling, bioinformatics and computational simulations for mechanistic insights into GPCR functional plasticity. Methods: We review here current advances in the development and application of computational approaches to improve prediction of GPCR structure and dynamics, thus enhancing current understanding of GPCR signaling. Results/Conclusions: Models resulting from use of these computational approaches, further supported by experiments, are expected to help elucidate the complex allosterism that propagates through GPCR complexes, ultimately aiming at successful structure-based rational drug design. PMID:19672320

  8. Modeling Trusted Computing

    Institute of Scientific and Technical Information of China (English)

    CHEN Shuyi; WEN Yingyou; ZHAO Hong

    2006-01-01

    In this paper, a formal approach based on predicate logic is proposed for representing and reasoning about trusted computing models. Predicates are defined to represent the characteristics of the objects and the relationships among these objects in a trusted system according to trusted computing specifications. Inference rules for the trust relation are also given. With the proposed semantics, some trusted computing models are formalized and verified, which shows that predicate calculus provides a general and effective method for modeling and reasoning about trusted computing systems.

  9. Computational structural mechanics for engine structures

    Science.gov (United States)

    Chamis, C. C.

    1989-01-01

    The computational structural mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects of formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance, durability and life of engine structures. It is structured mainly to supplement, complement, and whenever possible replace costly experimental efforts, which are unavoidable during engineering research and development programs. Specific objectives include: investigating the unique advantages of parallel and multiprocessor computation for reformulating and solving structural mechanics problems and for formulating and solving multidisciplinary mechanics problems; and developing integrated structural system computational simulators for predicting structural performance, evaluating newly developed methods, and identifying and prioritizing improved or missing methods. Herein the CSM program is summarized, with emphasis on the Engine Structures Computational Simulator (ESCS). Typical results obtained using ESCS are described to illustrate its versatility.

  10. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)

    2013-09-15

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced-order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the use of the classical modal analysis method unsuitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with good accuracy. The methodology is applied to a complex industrial structure made up of a row of seven fuel assemblies with the possibility of collisions between grids, subjected to a seismic loading.
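
    The core mechanics of such a reduced-order model, projecting a large system onto a small basis of global displacement vectors, can be sketched as below. The toy system and the choice of the lowest eigenmodes as the basis are illustrative assumptions; the paper constructs a problem-specific global displacements basis instead.

        import numpy as np
        from scipy.linalg import eigh

        # Toy full-order system: a 1D chain with tridiagonal stiffness, unit mass,
        # and a tip load. The ROM projects it onto a small basis Phi (m << n).
        n = 500
        K = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
        M = np.eye(n)
        f = np.zeros(n)
        f[-1] = 1.0

        m = 10
        _, Phi = eigh(K, M, subset_by_index=[0, m - 1])   # lowest m eigenmodes as basis

        K_r = Phi.T @ K @ Phi                 # reduced stiffness (m x m)
        f_r = Phi.T @ f                       # reduced load
        u = Phi @ np.linalg.solve(K_r, f_r)   # approximate full-order static response

        u_full = np.linalg.solve(K, f)        # reference full-order solution
        print("tip displacement, ROM vs full:", u[-1], u_full[-1])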

  11. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    A model of a multiprocessor computing system based on transputers is described, which permits the evaluation of structural robustness (viability, survivability).

  12. Cardiac tissue structure. Electric field interactions in polarizing the heart: 3D computer models and applications

    Science.gov (United States)

    Entcheva, Emilia

    1998-11-01

    The goal of this research is to investigate the interactions between the cardiac tissue structure and applied electric fields in producing complex polarization patterns. It is hypothesized that the response of the heart under strong electric shocks, such as those applied in defibrillation, is dominated by mechanisms involving the cardiac muscle structure perceived as a continuum. Analysis is carried out in three-dimensional models of the heart with detailed fiber architecture. Shock-induced transmembrane potentials are calculated using the bidomain model in its finite element implementation. The major new findings of this study can be summarized as follows: (1) the mechanisms of polarization due to cardiac fiber curvature and fiber rotation are elucidated in three-dimensional ellipsoidal hearts of variable geometry; (2) results are presented showing that the axis of stimulation and the polarization axis on a whole-heart level might differ significantly due to geometric and anisotropic factors; (3) virtual electrode patterns are demonstrated numerically inside the ventricular wall in internal defibrillation conditions, and the role of the tissue-bath interface in shaping the shock-induced polarization is revealed; (4) the generation of 3D phase singularity scrolls by shock-induced intramural virtual electrode patterns is proposed as evidence for a possible new mechanism for the failure to defibrillate. The results of this study emphasize the role of unequal anisotropy in the intra- and extracellular domains, as well as salient fiber architecture characteristics, such as curvature and transmural rotation, in polarizing the myocardium. Experimental support for the above findings was actively sought and found in recent optical mapping studies using voltage-sensitive dyes. If validated in vivo, these findings would significantly enrich the prevailing concepts about the mechanisms of stimulation and defibrillation of the heart.

  13. Exhaled Aerosol Pattern Discloses Lung Structural Abnormality: A Sensitivity Study Using Computational Modeling and Fractal Analysis

    Science.gov (United States)

    Xi, Jinxiang; Si, Xiuhua A.; Kim, JongWon; Mckee, Edward; Lin, En-Bing

    2014-01-01

    Background: Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases. Objective and Methods: In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three malign conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respirations of tracer aerosols of 1 µm at a flow rate of 30 L/min were simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distributions, and fractal analysis were applied to distinguish the various exhaled aerosol patterns. Findings: Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions among normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and exhaled aerosol spatial distribution was established for asthma. Conclusion: Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for diagnosis and prognosis of respiratory diseases with structural abnormalities. PMID:25105680
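
    The fractal-analysis step can be illustrated with a standard box-counting estimate of the fractal dimension of a binary deposition image, sketched below. The synthetic random pattern is a hypothetical stand-in for an exhaled-aerosol fingerprint, and the estimator is the generic textbook one rather than the paper's full CFD-fractal pipeline.

        import numpy as np

        def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32)):
            # Count boxes of each size that contain at least one occupied pixel,
            # then fit log(count) against log(1/size); the slope is the dimension.
            counts = []
            n = image.shape[0]
            for s in sizes:
                blocks = image[: n - n % s, : n - n % s].reshape(n // s, s, -1, s)
                counts.append(blocks.any(axis=(1, 3)).sum())
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(6)
        img = rng.random((256, 256)) < 0.02      # sparse random "deposition" pattern
        print("estimated fractal dimension:", round(box_counting_dimension(img), 2))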

  14. Exhaled aerosol pattern discloses lung structural abnormality: a sensitivity study using computational modeling and fractal analysis.

    Directory of Open Access Journals (Sweden)

    Jinxiang Xi

    Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases. In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three malign conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respirations of tracer aerosols of 1 µm at a flow rate of 30 L/min were simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distributions, and fractal analysis were applied to distinguish the various exhaled aerosol patterns. Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions among normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and exhaled aerosol spatial distribution was established for asthma. Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for diagnosis and prognosis of respiratory diseases with structural abnormalities.

  15. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  16. Computer simulation of model cohesive powders: Plastic consolidation, structural changes, and elasticity under isotropic loads

    Science.gov (United States)

    Gilabert, F. A.; Roux, J.-N.; Castellanos, A.

    2008-09-01

    The quasistatic behavior of a simple two-dimensional model of a cohesive powder under isotropic loads is investigated by discrete element simulations. We ignore contact plasticity and focus on the effect of geometry and collective rearrangements on the material behavior. The loose packing states, as assembled and characterized in a previous numerical study [Gilabert, Roux, and Castellanos, Phys. Rev. E 75, 011303 (2007)], are observed, under growing confining pressure P, to undergo important structural changes, while the solid fraction Φ irreversibly increases (typically, from 0.4-0.5 to 0.75-0.8). The system state goes through three stages, with different forms of the plastic consolidation curve, i.e., Φ as a function of the growing reduced pressure P* = Pa/F0, defined with adhesion force F0 and grain diameter a. In the low-confinement regime (I), the system undergoes negligible plastic compaction, and its structure is influenced by the assembling process. In regime II the material state is independent of initial conditions, and the void ratio varies linearly with ln P* [i.e., Δ(1/Φ) = λΔ(ln P*)], as described in the engineering literature. The plasticity index λ is reduced in the presence of a small rolling resistance (RR). In the last stage of compaction (III), Φ approaches an asymptotic, maximum solid fraction Φmax, as a power law Φmax − Φ ∝ (P*)^(−α), with α ≃ 1, and properties of cohesionless granular packs are gradually retrieved. Under consolidation, while the range ξ of fractal density correlations decreases, force patterns reorganize from self-balanced clusters to force chains, with correlative evolutions of force distributions, and elastic moduli increase by a large amount. Plastic deformation events correspond to very small changes in the network topology, while the denser regions tend to move like rigid bodies. Elastic properties are dominated by the bending of thin junctions in loose systems. For growing RR those tend to form particle chains, the

  17. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  18. Structure and models of artifactual routine design problems for computational synthesis

    NARCIS (Netherlands)

    Jauregui Becker, J.M.; Tragter, H.; Houten, van F.J.A.M.

    2009-01-01

    Computational synthesis (CS) researches the automatic generation of solutions to design problems. The aim is to shorten design times and present the user with multiple design solutions. However, initializing a new CS process has not received much attention in literature. With this motivation, this p

  19. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crashsafety research and design and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently crash simulations are mainly performed using models based on crash-dummies. However crash dummies dif

  20. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crashsafety research and design and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently crash simulations are mainly performed using models based on crash-dummies. However crash dummies

  1. Spatial Dynamic Structures and Mobility in Computation

    CERN Document Server

    Aman, Bogdan

    2011-01-01

    Membrane computing is a well-established and successful research field which belongs to the more general area of molecular computing. Membrane computing aims at defining parallel and non-deterministic computing models, called membrane systems or P Systems, which abstract from the functioning and structure of the cell. A membrane system consists of a spatial structure, a hierarchy of membranes which do not intersect, with a distinguishable membrane called skin surrounding all of them. A membrane without any other membranes inside is elementary, while a non-elementary membrane is a composite membrane. The membranes define demarcations between regions; for each membrane there is a unique associated region. Since we have a one-to-one correspondence, we sometimes use membrane instead of region, and vice-versa. The space outside the skin membrane is called the environment. In this thesis we define and investigate variants of systems of mobile membranes as models for molecular computing and as modelling paradigms fo...

  2. Computational Complexity in Electronic Structure

    CERN Document Server

    Whitfield, James D; Aspuru-Guzik, Alan

    2012-01-01

    In quantum chemistry, the price paid by all known efficient model chemistries is either the truncation of the Hilbert space or uncontrolled approximations. Theoretical computer science suggests that these restrictions are not mere shortcomings of the algorithm designers and programmers but could stem from the inherent difficulty of simulating quantum systems. Extensions of computer science and information processing exploiting quantum mechanics have led to new ways of understanding the ultimate limitations of computational power. Interestingly, this perspective helps us understand widely used model chemistries in a new light. In this article, the fundamentals of computational complexity will be reviewed and motivated from the vantage point of chemistry. Then recent results from the computational complexity literature regarding common model chemistries, including Hartree-Fock and density functional theory, are discussed.

  3. Integrating solid-state NMR and computational modeling to investigate the structure and dynamics of membrane-associated ghrelin.

    Directory of Open Access Journals (Sweden)

    Gerrit Vortmeier

    The peptide hormone ghrelin activates the growth hormone secretagogue receptor 1a, also known as the ghrelin receptor. This 28-residue peptide is acylated at Ser3 and is the only peptide hormone in the human body that is lipid-modified by an octanoyl group. Little is known about the structure and dynamics of membrane-associated ghrelin. We carried out solid-state NMR studies of ghrelin in lipid vesicles, followed by computational modeling of the peptide using Rosetta. Isotropic chemical shift data of isotopically labeled ghrelin provide information about the peptide's secondary structure. Spin diffusion experiments indicate that ghrelin binds to membranes via its lipidated Ser3. Further, Phe4, as well as electrostatics involving the peptide's positively charged residues and lipid polar headgroups, contribute to the binding energy. Other than the lipid anchor, ghrelin is highly flexible and mobile at the membrane surface. This observation is supported by our predicted model ensemble, which is in good agreement with experimentally determined chemical shifts. In the final ensemble of models, residues 8-17 form an α-helix, while residues 21-23 and 26-27 often adopt a polyproline II helical conformation. These helices appear to assist the peptide in forming an amphipathic conformation so that it can bind to the membrane.

  4. Three-dimensional Computational Fluid Dynamics Modeling of Two-phase Flow in a Structured Packing Column

    Institute of Scientific and Technical Information of China (English)

    张小斌; 姚蕾; 邱利民; 张学军

    2013-01-01

    Characterizing the complex two-phase hydrodynamics in structured packed columns requires a powerful modeling tool. The traditional two-dimensional model exhibits limitations when one attempts to model the detailed two-phase flow inside the columns. The present paper presents a three-dimensional computational fluid dynamics (CFD) model to simulate the two-phase flow in a representative unit of the column. The unit consists of an entire corrugation channel and describes well the real liquid flow conditions. The detailed unsteady two-phase 3D CFD calculations on a column packed with Flexipak 1Y were implemented within the volume of fluid (VOF) mathematical framework. The CFD model was validated by comparing the calculated thickness of the liquid film with the available experimental data. Special attention was given to quantitative analysis of the effects of gravity on the hydrodynamics. Fluctuations in the liquid mass flow rate and the calculated pressure drop loss were found to be qualitatively in agreement with the experimental observations.

  5. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    Science.gov (United States)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

    The present paper proposes an advanced approach for Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems fault detection and isolation through a model-based diagnostic algorithm. The considered algorithm is developed upon a lumped parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis, to identify the correlations among involved physical variables, defined equations and a set of faults which may occur in the system (related to both auxiliary components malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis and the maximum theoretical fault isolability, achievable with a minimal number of installed sensors, is investigated. The achieved results proved the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique, to consider quantitative residual deviations from normal conditions and achieve univocal fault isolation.
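
    Residual evaluation of the kind mentioned above can be illustrated with a simple change-detection sketch: compare a measured signal against the nominal model output and flag a fault when a cumulative-sum statistic exceeds a threshold. The signals, drift and threshold below are illustrative assumptions, not the paper's advanced residual evaluation technique.

        import numpy as np

        def cusum_detect(residual, drift=0.05, threshold=1.0):
            # Accumulate residual magnitude above a drift allowance; flag when large.
            g = 0.0
            for k, r in enumerate(residual):
                g = max(0.0, g + abs(r) - drift)
                if g > threshold:
                    return k          # first sample at which the fault is flagged
            return None

        rng = np.random.default_rng(7)
        nominal = np.sin(np.linspace(0, 10, 500))        # model-predicted output
        measured = nominal + 0.02 * rng.standard_normal(500)
        measured[300:] += 0.15                            # injected fault (e.g., sensor bias)
        print("fault flagged at sample:", cusum_detect(measured - nominal))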

  6. Data structures, computer graphics, and pattern recognition

    CERN Document Server

    Klinger, A; Kunii, T L

    1977-01-01

    Data Structures, Computer Graphics, and Pattern Recognition focuses on the computer graphics and pattern recognition applications of data structures methodology. This book presents design-related principles and research aspects of computer graphics, system design, data management, and pattern recognition tasks. The topics include data structure design, concise structuring of geometric data for computer-aided design, and data structures for pattern recognition algorithms. The survey of data structures for computer graphics systems, application of relational data structures in computer gr

  7. Computational Simulation of Complex Structure Fancy Yarns

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A study is reported on the mathematical modeling and simulation of complex structure fancy yarns. The investigated complex structure fancy yarns have a multithread structure composed of three components: core, effect, and binder yarns. In the current research the precondition was accepted that the cross-sections of both yarns of the effect intermediate product in the complex structure fancy yarn remain circular, and that this shape does not change during manufacturing of the fancy yarn. A mathematical model of the complex structure fancy yarn is established based on the parametric equation of a space helix, and computer simulation is further carried out using the computational mathematical tool Matlab 6.5. The theoretical structure of the fancy yarn is compared with an experimental sample. The simulation system would further help in designing new assortments of complex structure fancy yarns and in predicting the visual effects of fancy yarns in end-use fabrics.
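
    The parametric space-helix description on which such a yarn model rests can be illustrated directly; the sketch below generates core and effect yarn paths and compares their lengths. The radius and pitch values are hypothetical, and the example uses Python rather than the Matlab 6.5 implementation mentioned above.

        import numpy as np

        # Parametric space helix: a straight core plus an effect yarn winding
        # around it. Radius and pitch are hypothetical values in millimetres.
        t = np.linspace(0, 8 * np.pi, 1000)
        r_eff, pitch = 0.8, 0.5
        core = np.column_stack([np.zeros_like(t), np.zeros_like(t), pitch * t])
        effect = np.column_stack([r_eff * np.cos(t), r_eff * np.sin(t), pitch * t])

        def path_length(path):
            # Sum of segment lengths along a sampled 3D curve.
            return np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

        # Overfeed ratio: how much longer the wound effect yarn is than the core.
        print("effect/core length ratio:", path_length(effect) / path_length(core))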

  8. Phase-contrast computed tomography for quantification of structural changes in lungs of asthma mouse models of different severity

    Energy Technology Data Exchange (ETDEWEB)

    Dullin, Christian, E-mail: christian.dullin@med.uni-goettingen.de [University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); Larsson, Emanuel [Elettra-Sincrotrone Trieste, Strada Statale 14, km 163,5 in AREA Science Park, Basovizza (Trieste) 34149 (Italy); University of Trieste, Trieste (Italy); Linkoeping University, SE-581 83 Linkoeping (Sweden); Tromba, Giuliana [Elettra-Sincrotrone Trieste, Strada Statale 14, km 163,5 in AREA Science Park, Basovizza (Trieste) 34149 (Italy); Markus, Andrea M. [University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); Alves, Frauke [University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); University Medical Center Goettingen, Robert Koch Strasse 40, Goettingen, Lower Saxony 37075 (Germany); Max Planck Institut for Experimental Medicine, Hermann-Rein-Strasse 3, Goettingen, Lower Saxony 37075 (Germany)

    2015-06-17

Synchrotron inline phase-contrast computed tomography in combination with single-distance phase retrieval enables quantification of morphological alterations in lungs of mice with mild and severe experimental allergic airways disease in comparison with healthy controls. Lung imaging in mouse disease models is crucial for the assessment of the severity of airway disease but remains challenging due to the small size and the high porosity of the organ. Synchrotron inline free-propagation phase-contrast computed tomography (CT) with its intrinsic high soft-tissue contrast provides the necessary sensitivity and spatial resolution to analyse the mouse lung structure in great detail. Here, this technique has been applied in combination with single-distance phase retrieval to quantify alterations of the lung structure in experimental asthma mouse models of different severity. In order to mimic an in vivo situation as closely as possible, the lungs were inflated with air at a constant physiological pressure. Entire mice were embedded in agarose gel and imaged using inline free-propagation phase-contrast CT at the SYRMEP beamline (Synchrotron Light Source, ‘Elettra’, Trieste, Italy). The quantification of the obtained phase-contrast CT data sets revealed an increasing lung soft-tissue content in mice correlating with the severity of experimental allergic airways disease. In this way, it was possible to successfully discriminate between healthy controls and mice with either mild or severe allergic airway disease. It is believed that this approach may have the potential to evaluate the efficacy of novel therapeutic strategies that target airway remodelling processes in asthma.
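Reduced to its essence, the quantification step measures how much of the segmented lung volume is occupied by soft tissue. A minimal sketch under stated assumptions - a toy random array standing in for a phase-retrieved reconstruction, and an arbitrary threshold:

```python
import numpy as np

def soft_tissue_fraction(volume, lung_mask, threshold):
    """Fraction of lung voxels above a soft-tissue threshold; in this
    simplified reading, a higher fraction indicates more remodelling."""
    return float((volume[lung_mask] > threshold).mean())

# Toy stand-in for a phase-retrieved CT reconstruction and its lung mask.
rng = np.random.default_rng(0)
recon = rng.normal(0.0, 1.0, size=(64, 64, 64))
mask = np.ones(recon.shape, dtype=bool)
print("soft-tissue fraction ~", soft_tissue_fraction(recon, mask, 1.5))
```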

  9. Probabilistic model, analysis and computer code for take-off and landing related aircraft crashes into a structure

    Energy Technology Data Exchange (ETDEWEB)

    Glaser, R.

    1996-02-06

A methodology is presented that allows the calculation of the probability that any of a particular collection of structures will be hit by an aircraft in a take-off or landing related accident during a specified window of time with a velocity exceeding a given critical value. A probabilistic model is developed that incorporates the location of each structure relative to airport runways in the vicinity; the size of the structure; the sizes, types, and frequency of use of commercial, military, and general aviation aircraft which take off and land at these runways; the relative frequency of take-off and landing related accidents by aircraft type; the stochastic properties of off-runway crashes, namely impact location, impact angle, impact velocity, and the heading, deceleration, and skid distance after impact; and the stochastic properties of runway overruns and runoffs, namely the position at which the aircraft exits the runway, its exit velocity, and the heading and deceleration after exiting. Relevant probability distributions are fitted from extensive commercial, military, and general aviation accident report databases. The computer source code for implementation of the calculation is provided.
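A Monte Carlo reading of such a model can be sketched in a few lines: sample an impact location and velocity, then count accidents that both fall within the structure's footprint and exceed the critical velocity. Every distribution, the footprint and the accident rate below are illustrative placeholders for the database-fitted quantities the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000   # simulated take-off/landing accidents

# Illustrative stand-ins for the fitted distributions (not from the report).
along   = rng.exponential(scale=1.0, size=N)          # km beyond runway end
lateral = rng.normal(loc=0.0, scale=0.3, size=N)      # km off the centerline
speed   = rng.lognormal(mean=4.0, sigma=0.5, size=N)  # impact speed, m/s

# Hypothetical 100 m x 100 m structure footprint and critical velocity.
in_footprint = (np.abs(along - 2.0) < 0.05) & (np.abs(lateral) < 0.05)
hit_fast = in_footprint & (speed > 75.0)

accidents_per_year = 1.0e-3   # assumed accident frequency at these runways
print("P(hit with v > v_crit) per year ~", hit_fast.mean() * accidents_per_year)
```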

  10. Computed Tomography and Magnetic Resonance Imaging for Longitudinal Characterization of Lung Structure Changes in a Yucatan Miniature Pig Silicosis Model.

    Science.gov (United States)

    Hammond, Emily; Newell, John D; Dilger, Samantha K N; Stoyles, Nicholas; Morgan, John; Sieren, Jered P; Thedens, Daniel R; Hoffman, Eric A; Meyerholz, David K; Sieren, Jessica C

    2016-04-01

Medical imaging is a rapidly advancing field enabling the repeated, noninvasive assessment of physiological structure and function. These beneficial characteristics can supplement studies in swine by mirroring the clinical functions of detection, diagnosis, and monitoring in humans. In addition, swine may serve as a human surrogate, facilitating the development and comparison of new imaging protocols for translation to humans. This study presents methods for pulmonary imaging developed for monitoring pulmonary disease initiation and progression in a pig exposure model with computed tomography and magnetic resonance imaging. In particular, a focus was placed on systematic processes, including positioning, image acquisition, and structured reporting to monitor longitudinal change. The image-based monitoring procedure was applied to 6 Yucatan miniature pigs. A subset of animals (n = 3) were injected with crystalline silica into the apical bronchial tree to induce silicosis. The methodology provided longitudinal monitoring and evidence of progressive lung disease while simultaneously allowing for a cross-modality comparative study highlighting the practical application of medical image data collection in swine. The integration of multimodality imaging with structured reporting allows for cross comparison of modalities, refinement of CT and MRI protocols, and consistent monitoring of potential areas of interest for guided biopsy and/or necropsy.

  11. Computer modelling of the 3-dimensional structures of the cyanobacterial hepatotoxins microcystin-LR and nodularin.

    Science.gov (United States)

    Lanaras, T; Cook, C M; Eriksson, J E; Meriluoto, J A; Hotokka, M

    1991-01-01

The 3-dimensional structures of two cyanobacterial hepatotoxins, microcystin-LR (a cyclic heptapeptide) and nodularin (a cyclic pentapeptide), and of the novel amino acid ADDA (3-amino-9-methoxy-2,6,8-trimethyl-10-phenyl-4,6-decadienoic acid) were constructed and optimized using the CHEM-X molecular mechanics program. The peptide rings were planar and of rectangular shape. Optimized ADDA formed a U-shape, and a difference in the orientation of ADDA with respect to the peptide ring of the two hepatotoxins was observed.

  12. Computer-Aided Structural Engineering (CASE) Project. User’s Guide: Computer-Aided Structural Modeling (CASM). Version 5.00

    Science.gov (United States)

    1994-04-01

Loads are transferred to the lateral resistance locations by a tributary area or continuous beam model. For rigid diaphragms, lateral loads are transferred to the floor and roof planes; a Flexible Diaphragm dialog window will appear with options to distribute loads based on a Simple Beam Model or a Continuous Beam Model. a. Select Simple ...

  13. Slepian modeling as a computational method in random vibration analysis of hysteretic structures

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1999-01-01

The Slepian model process method combined with envelope level crossing properties proves its usefulness as a tool for fast simulation of approximate plastic displacement responses of a wide class of elasto-plastic oscillators (EPOs) of one or more degrees of freedom excited by stationary Gaussian ... with the method are reported with reference to earlier papers. Finally, the convincing accuracy of the method is illustrated by an example of one-degree-of-freedom EPOs with hardening or softening plastic behavior....

  14. A structural equation modeling approach for the adoption of cloud computing to enhance the Malaysian healthcare sector.

    Science.gov (United States)

    Ratnam, Kalai Anand; Dominic, P D D; Ramayah, T

    2014-08-01

The investments and costs of infrastructure, communication, medical-related equipment, and software within the global healthcare ecosystem show a rather significant increase, and this proliferation is expected to grow further. As a result, information exchange and cross-system communication have become challenging due to detached, independent systems and subsystems which are not connected. The overall model fit, over a sample size of 320, was tested with structural equation modelling (SEM) using AMOS 20.0 as the modelling tool; SPSS 20.0 was used to analyse the descriptive statistics and dimension reliability. Results of the study show that the system utilisation and system impact dimensions influence the overall level of services of the healthcare providers. In addition, the findings suggest that systems integration and security play a pivotal role for IT resources in healthcare organisations. Through this study, a basis for investigating the need to improve the Malaysian healthcare ecosystem and the introduction of a cloud computing platform to host the national healthcare information exchange has been successfully established.

  15. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game-changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: developing use case studies for science workflows; creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernible requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  16. Structural assignment of 6-oxy purine derivatives through computational modeling, synthesis, X-ray diffraction, and spectroscopic analysis.

    Science.gov (United States)

    Zhao, Xinyun; Chen, Xi; Yang, Guang-Fu; Zhan, Chang-Guo

    2010-05-27

6-Oxy purine derivatives have been considered as potential therapeutic agents in various drug discovery efforts reported in the literature. However, the structural assignment of this important class of compounds has been controversial concerning the specific position of a hydrogen atom in the structure. To theoretically determine the most favorable type of tautomeric form of 6-oxy purine derivatives, we have carried out first-principles electronic structure calculations on the possible tautomeric forms (A, B, and C) and their relative stability for four representative 6-oxy purine derivatives (compounds 1-4). The computational results in both the gas phase and aqueous solution clearly reveal that the most favorable type of tautomeric form of these compounds is A, in which a hydrogen atom bonds with the N1 atom on the purine ring. To examine the computational results, one of the 6-oxy purine derivatives (i.e., compound 4) has been synthesized and its structure has been characterized by X-ray diffraction and spectroscopic analysis. All of the obtained computational and experimental data are consistent with the conclusion that the 6-oxy purine derivative exists in tautomer A. The conclusive structural assignment reported here is expected to be valuable for future computational studies on 6-oxy purine derivative binding with proteins and for computational drug design involving this type of compound.

  17. Understanding Student Computational Thinking with Computational Modeling

    CERN Document Server

    Aiken, John M; Douglas, Scott S; Burk, John B; Scanlon, Erin M; Thoms, Brian D; Schatz, Michael F

    2012-01-01

Recently, the National Research Council's framework for next generation science standards highlighted "computational thinking" as one of its "fundamental practices". Students taking a physics course that employed Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, written essay, and a series of think-aloud interviews, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than obs...

  18. Computationally modeling interpersonal trust

    OpenAIRE

Jin Joo Lee; Brad Knox; Jolie Baumann; Cynthia Breazeal; David DeSteno

    2013-01-01

    We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our pr...

  19. A Computational Model to Assess Poststenting Wall Stresses Dependence on Plaque Structure and Stenosis Severity in Coronary Artery

    Directory of Open Access Journals (Sweden)

    Zuned Hajiali

    2014-01-01

The current study presents computational models to investigate the poststenting hemodynamic stresses and internal stresses over/within the diseased walls of coronary arteries in different states of atherosclerotic plaque. The finite element method is applied to build the axisymmetric models, which include the plaque, arterial wall, and stent struts. The study takes into account the mechanical effects of the opening pressure and its association with plaque severity and morphology. The wall shear stresses and the von Mises stresses within the stented coronary arteries show a strong dependence on the plaque structure, particularly the fibrous cap thickness. Higher stresses occur in severely stenosed coronaries with a thinner fibrous cap. Large stress concentrations around the stent struts cause injury or damage to the vessel wall, which is linked to the mechanism of restenosis. The in-stent restenosis rate is also highly dependent on the opening pressure, the extent to which the stenosed artery is expanded, and the geometry of the stent struts. The present study demonstrates, for the first time, that restenosis is to be viewed as a consequence of the biomechanical design of a stent repeating unit, the opening pressure, and the severity and morphology of the plaque.

  20. Computational modeling of membrane proteins.

    Science.gov (United States)

    Koehler Leman, Julia; Ulmschneider, Martin B; Gray, Jeffrey J

    2015-01-01

The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the protein databank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, which only highlights how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or the precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to the availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefited from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade.

  1. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...

  2. The Antares computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)

    2013-10-11

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  3. Digital computer structure and design

    CERN Document Server

    Townsend, R

    2014-01-01

Digital Computer Structure and Design, Second Edition discusses switching theory, counters, sequential circuits, number representation, and arithmetic functions. The book also describes computer memories, the processor, the data flow system of the processor, the processor control system, and the input-output system. Switching theory, which is purely a mathematical concept, centers on the properties of interconnected networks of "gates." The theory deals with binary functions of 1 and 0 which can change instantaneously from one to the other without intermediate values. The binary number system is

  4. Edible oil structures at low and intermediate concentrations. I. Modeling, computer simulation, and predictions for X ray scattering

    Science.gov (United States)

    Pink, David A.; Quinn, Bonnie; Peyronel, Fernanda; Marangoni, Alejandro G.

    2013-12-01

Triacylglycerols (TAGs) are biologically important molecules which form the recently discovered highly anisotropic crystalline nanoplatelets (CNPs) and, ultimately, the large-scale fat crystal networks in edible oils. Identifying the hierarchies of these networks and how they spontaneously self-assemble is important to understanding their functionality and oil binding capacity. We have modelled CNPs and studied how they aggregate under the assumption that all CNPs are present before aggregation begins and that their solubility in the liquid oil is very low. We represented CNPs as rigid planar arrays of spheres with diameter ≈ 50 nm and defined the interaction between spheres in terms of a Hamaker coefficient, A, and a binding energy, VB. We studied three cases: weak binding, |VB|/kBT ≪ 1; physically realistic binding, VB = Vd(R, Δ), so that |VB|/kBT ≈ 1; and strong binding with |VB|/kBT ≫ 1. We divided the concentration of CNPs, ϕ = 10^-2 × (solid fat content), with 0 ≤ ϕ ≤ 1, into two regions: low and intermediate concentrations with 0 < ϕ < 0.25, and high concentrations with 0.25 < ϕ, and considered only the first case. We employed Monte Carlo computer simulation to model CNP aggregation and analyzed the results using static structure functions, S(q). We found that strong binding cases formed aggregates with fractal dimension, D, 1.7 ≤ D ≤ 1.8, in accord with diffusion limited cluster-cluster aggregation (DLCA), and weak binding formed aggregates with D = 3, indicating a random distribution of CNPs. We found that models with physically realistic intermediate binding energies formed linear multilayer stacks of CNPs (TAGwoods) with fractal dimension D = 1 for ϕ = 0.06, 0.13, and 0.22. TAGwood lengths were greater at lower ϕ than at higher ϕ, where some of the aggregates appeared as thick CNPs. We increased the spatial scale and modelled the TAGwoods as rigid linear arrays of spheres of diameter ≈ 500 nm, interacting via the attractive van der Waals interaction. We
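The fractal dimensions quoted above are read off the power-law regime of the static structure function, S(q) ∝ q^-D. A minimal sketch of that extraction on synthetic data (the q-range, noise level, and the target D = 1.8 are assumptions for illustration):

```python
import numpy as np

def fractal_dimension(q, s_of_q):
    """Fit S(q) ~ q**-D on log-log axes over the power-law (fractal) regime."""
    slope, _ = np.polyfit(np.log(q), np.log(s_of_q), 1)
    return -slope

# Synthetic DLCA-like structure function with D = 1.8 plus a little noise.
q = np.logspace(-2, -0.5, 50)   # wavevectors inside the fractal regime (1/nm)
noise = 1.0 + 0.02 * np.random.default_rng(1).standard_normal(q.size)
s = q ** -1.8 * noise
print("estimated D ~", round(fractal_dimension(q, s), 2))
```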

  5. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  6. Military engine computational structures technology

    Science.gov (United States)

    Thomson, Daniel E.

    1992-01-01

Integrated High Performance Turbine Engine Technology Initiative (IHPTET) goals require a strong analytical base. Effective analysis of composite materials is critical to life analysis and structural optimization. Accurate life prediction for all material systems is critical. User-friendly systems are also desirable. Post-processing of results is very important. The IHPTET goal is to double turbine engine propulsion capability by the year 2003. Fifty percent of the goal will come from advanced materials and structures; the other 50 percent will come from increasing performance. Computer programs are listed.

  7. Collective network for computer structures

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Chen, Dong (Croton On Hudson, NY); Gara, Alan (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Heidelberger, Philip (Cortlandt Manor, NY); Hoenicke, Dirk (Ossining, NY); Takken, Todd E. (Brewster, NY); Steinmacher-Burow, Burkhard D. (Wernau, DE); Vranas, Pavlos M. (Bedford Hills, NY)

    2011-08-16

A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to needs of a processing algorithm.
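Logically, the collective reduction described here is a tree combine: each router merges the partial results arriving from its children and forwards a single value toward the root. A toy software analogue follows; the fan-in, operation and node values are arbitrary choices, whereas the patented network performs this in routing hardware over partitionable virtual networks.

```python
# Toy tree-structured global reduction over a list of per-node contributions.
def tree_reduce(values, op=sum, fan_in=2):
    level = list(values)
    while len(level) > 1:   # each pass is one level of the reduction tree
        level = [op(level[i:i + fan_in]) for i in range(0, len(level), fan_in)]
    return level[0]

print(tree_reduce(range(8)))   # global sum contributed by 8 "nodes" -> 28
```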

  8. Plasticity modeling & computation

    CERN Document Server

    Borja, Ronaldo I

    2013-01-01

    There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.

  9. Computational molecular modeling and structural rationalization for the design of a drug-loaded PLLA/PVA biopolymeric membrane

    Energy Technology Data Exchange (ETDEWEB)

    Sibeko, B; Pillay, V; Choonara, Y E; Khan, R A; Danckwerts, M P [Department of Pharmacy and Pharmacology, University of the Witwatersrand, 7 York Road, Parktown, 2193 Johannesburg (South Africa); Modi, G [Division of Neurosciences, Department of Neurology, University of the Witwatersrand, Johannesburg (South Africa); Iyuke, S E [School of Chemical and Metallurgical Engineering, University of the Witwatersrand, Johannesburg (South Africa); Naidoo, D, E-mail: viness.pillay@wits.ac.z [Division of Neurosciences, Department of Neurosurgery, University of the Witwatersrand, Johannesburg (South Africa)

    2009-02-15

The purpose of this study was to design, characterize and assess the influence of triethanolamine (TEA) on the physicomechanical properties and release of methotrexate (MTX) from a composite biopolymeric membrane. Conjugated poly(L-lactic acid) (PLLA) and poly(vinyl alcohol) (PVA) membranes were prepared by immersion precipitation with and without the addition of TEA. Drug entrapment efficiency (DEE) and release studies were performed in phosphate buffered saline (pH 7.4, 37 °C). Scanning electron microscopy elucidated the membrane surface morphology. Computational and structural molecular modeling rationalized the potential mechanisms of membrane formation and MTX release. Bi-axial force-distance (F-D) extensibility profiles were generated to determine the membrane toughness, elasticity and fracturability. Membranes were significantly toughened by the addition of TEA as a discrete rubbery phase within the co-polymer matrix. MTX-TEA-PLLA-PVA membranes were tougher (F = 89 N) and more extensible (D = 8.79 mm) compared to MTX-PLLA-PVA (F = 35 N, D = 3.7 mm) membranes, as a greater force of extension and fracture distance were required (N = 10). DEE values were relatively high (>80%, N = 5) for both formulations. Photomicrographs revealed distinct crystalline layered morphologies with macro-pores. MTX was released by tri-phasic kinetics, with a lower fractional release of MTX from MTX-TEA-PLLA-PVA membranes compared to MTX-PLLA-PVA. TEA provided a synergistic approach to improving the membrane physicomechanical properties and modulation of MTX release. The composite biopolymeric membrane may therefore be suitable for the novel delivery of MTX in the treatment of chronic primary central nervous system lymphoma.

  10. Computational modelling of SCC flow

    DEFF Research Database (Denmark)

    Geiker, Mette Rica; Thrane, Lars Nyholm; Szabo, Peter

    2005-01-01

To benefit from the full potential of self-compacting concrete (SCC), prediction tools are needed for the form filling of SCC. Such tools should take into account the properties of the concrete, the shape and size of the structural element, the position of rebars, and the casting technique. Examples of computational models for the time-dependent flow behavior are given, and the advantages and disadvantages of discrete particle and single fluid models are briefly described.

  11. Basic design of parallel computational program for probabilistic structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kaji, Yoshiyuki; Arai, Taketoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

In our laboratory, as part of `Development of a damage evaluation method for brittle structural materials by microscopic fracture mechanics and probabilistic theory` (nuclear computational science cross-over research), we examine computational methods for a massively parallel computing system coupled with a material strength theory based on microscopic fracture mechanics for latent cracks and a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, the basic terms of the formulation, and the programming methods for parallel computation that relate to the principal items in the basic design of the computational mechanics program. (author)
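One standard ingredient of probabilistic structural reliability for ceramics, of the kind this report reviews, is the weakest-link Weibull model; it parallelizes naturally because each element's survival probability is computed independently. A minimal sketch with invented parameters (the report's actual formulation, coupling microscopic fracture mechanics for latent cracks, is more elaborate):

```python
import numpy as np

def weibull_failure_probability(stress, volume, m=10.0, sigma0=300.0, v0=1.0):
    """Two-parameter weakest-link model for a brittle component:
    P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m). Parameters are illustrative."""
    return 1.0 - np.exp(-(volume / v0) * (stress / sigma0) ** m)

# Element-wise stresses from a hypothetical finite element solution (MPa);
# the survival product over elements is trivially distributed across nodes.
stresses = np.array([210.0, 250.0, 280.0, 150.0])
volumes  = np.array([0.2, 0.3, 0.1, 0.4])   # relative element volumes
survival = np.prod(1.0 - weibull_failure_probability(stresses, volumes))
print("P_f(structure) ~", 1.0 - survival)
```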

  12. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  13. A universal computational model for predicting antigenic variants of influenza A virus based on conserved antigenic structures

    Science.gov (United States)

    Peng, Yousong; Wang, Dayan; Wang, Jianhong; Li, Kenli; Tan, Zhongyang; Shu, Yuelong; Jiang, Taijiao

    2017-01-01

Rapid determination of the antigenicity of influenza A virus could help identify antigenic variants in time. Currently, there is a lack of computational models for predicting antigenic variants of some common hemagglutinin (HA) subtypes of influenza A viruses. By means of sequence analysis, we demonstrate here that multiple HA subtypes of influenza A virus undergo similar mutation patterns of the HA1 protein (the immunogenic part of HA). Further analysis of the antigenic variation of influenza A virus H1N1, H3N2 and H5N1 showed that the amino acid residues’ contribution to antigenic variation differed greatly among these subtypes, while the regional bands, defined based on their distance to the top of HA1, played conserved roles in the antigenic variation of these subtypes. Moreover, the computational models for predicting antigenic variants based on regional bands performed much better on the testing HA subtype than those based on amino acid residues. Therefore, a universal computational model, named PREDAV-FluA, was built based on the regional bands to predict the antigenic variants for all HA subtypes of influenza A viruses. The model achieved an accuracy of 0.77 when tested with avian influenza H9N2 viruses. It may help in the rapid identification of antigenic variants in influenza surveillance. PMID:28165025
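The regional-band idea can be caricatured in a few lines: amino-acid differences between two HA1 sequences are binned by band and combined into a score. The band boundaries, weights and cutoff below are invented placeholders; PREDAV-FluA's actual features and training procedure are described in the paper.

```python
# Hypothetical regional bands (HA1 positions grouped by distance to the top).
BANDS = {"top": range(120, 160), "middle": range(160, 220),
         "bottom": range(220, 280)}
WEIGHTS = {"top": 1.0, "middle": 0.5, "bottom": 0.2}   # invented weights

def band_features(seq_a, seq_b):
    """Count amino-acid differences falling inside each regional band."""
    diffs = [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]
    return {name: sum(i in band for i in diffs) for name, band in BANDS.items()}

def is_antigenic_variant(features, cutoff=2.0):
    """Weighted band counts above a cutoff flag an antigenic variant."""
    return sum(WEIGHTS[k] * v for k, v in features.items()) >= cutoff
```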

  14. Collective network for computer structures

    Science.gov (United States)

    Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

    2014-01-07

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  15. Computational modeling of elastic properties of carbon nanotube/polymer composites with interphase regions. Part I: Micro-structural characterization and geometric modeling

    KAUST Repository

    Han, Fei

    2014-01-01

A computational strategy to predict the elastic properties of carbon nanotube-reinforced polymer composites is proposed in this two-part paper. In Part I, the micro-structural characteristics of these nano-composites are discerned. These characteristics include networks/agglomerations of carbon nanotubes and thick polymer interphase regions between the nanotubes and the surrounding matrix. An algorithm is presented to construct three-dimensional geometric models with large amounts of randomly dispersed and aggregated nanotubes. The effects of the distribution of the nanotubes and the thickness of the interphase regions on the concentration of the interphase regions are demonstrated with numerical results.

  16. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    Science.gov (United States)

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  17. RNAHelix: computational modeling of nucleic acid structures with Watson-Crick and non-canonical base pairs

    Science.gov (United States)

    Bhattacharyya, Dhananjay; Halder, Sukanya; Basu, Sankar; Mukherjee, Debasish; Kumar, Prasun; Bansal, Manju

    2017-02-01

Comprehensive analyses of the structural features of non-canonical base pairs within a nucleic acid double helix are limited by the availability of only a small number of three-dimensional structures. Therefore, a procedure for model building of double helices containing any given nucleotide sequence and base pairing information, either canonical or non-canonical, is seriously needed. Here we describe a program, RNAHelix, which is an updated version of our widely used software NUCGEN. The program can regenerate duplexes using the dinucleotide step and base pair orientation parameters for a given double helical DNA or RNA sequence with defined Watson-Crick or non-Watson-Crick base pairs. The original and regenerated structures of double helices were found to be very close, as indicated by the small RMSD values between the positions of the corresponding atoms. Structures of several usual and unusual double helices have been regenerated and compared with their original structures in terms of base pair RMSD, torsion angles and electrostatic potentials, and very high agreement has been noted. RNAHelix can also be used to generate a structure with a sequence completely different from an experimentally determined one, or to introduce single to multiple mutations but with the same set of parameters, and hence can also be an important tool in homology modeling and the study of mutation-induced structural changes.
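The regeneration step rests on composing one rigid-body transform per dinucleotide step. The sketch below keeps only two of the six step parameters (twist and rise), with generic A-form-like values; RNAHelix itself uses the full set of step and base pair orientation parameters.

```python
import numpy as np

def stack_base_pairs(n_steps, twist_deg=32.7, rise=2.81):
    """Stack base-pair frames along a helix axis using only twist and rise."""
    a = np.radians(twist_deg)
    step = np.eye(4)                       # homogeneous transform per step
    step[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]]
    step[2, 3] = rise                      # rise along the helix axis (A)
    frames = [np.eye(4)]
    for _ in range(n_steps):
        frames.append(frames[-1] @ step)   # compose successive steps
    return frames

origins = [f[:3, 3] for f in stack_base_pairs(11)]   # roughly one helical turn
```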

  18. Computational modeling of concrete flow

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic

    2007-01-01

    This paper provides a general overview of the present status regarding computational modeling of the flow of fresh concrete. The computational modeling techniques that can be found in the literature may be divided into three main families: single fluid simulations, numerical modeling of discrete...

  19. A Novel Forensic Computing Model

    Institute of Scientific and Technical Information of China (English)

    XU Yunfeng; LU Yansheng

    2006-01-01

According to the requirements of computer forensics and network forensics, a novel forensic computing model is presented, which exploits the XML/OEM/RM data model, data fusion technology, a forensic knowledge base, the inference mechanism of an expert system, and an evidence mining engine. This model takes advantage of flexibility and openness, so it can be widely used in mining evidence.

  20. Electronic structure of nickel(II) and zinc(II) borohydrides from spectroscopic measurements and computational modeling.

    Science.gov (United States)

    Desrochers, Patrick J; Sutton, Christopher A; Abrams, Micah L; Ye, Shengfa; Neese, Frank; Telser, Joshua; Ozarowski, Andrew; Krzystek, J

    2012-03-05

    The previously reported Ni(II) complex, Tp*Ni(κ(3)-BH(4)) (Tp* = hydrotris(3,5-dimethylpyrazolyl)borate anion), which has an S = 1 spin ground state, was studied by high-frequency and -field electron paramagnetic resonance (HFEPR) spectroscopy as a solid powder at low temperature, by UV-vis-NIR spectroscopy in the solid state and in solution at room temperature, and by paramagnetic (11)B NMR. HFEPR provided its spin Hamiltonian parameters: D = 1.91(1) cm(-1), E = 0.285(8) cm(-1), g = [2.170(4), 2.161(3), 2.133(3)]. Similar, but not identical parameters were obtained for its borodeuteride analogue. The previously unreported complex, Tp*Zn(κ(2)-BH(4)), was prepared, and IR and NMR spectroscopy allowed its comparison with analogous closed shell borohydride complexes. Ligand-field theory was used to model the electronic transitions in the Ni(II) complex successfully, although it was less successful at reproducing the zero-field splitting (zfs) parameters. Advanced computational methods, both density functional theory (DFT) and ab initio wave function based approaches, were applied to these Tp*MBH(4) complexes to better understand the interaction between these metals and borohydride ion. DFT successfully reproduced bonding geometries and vibrational behavior of the complexes, although it was less successful for the spin Hamiltonian parameters of the open shell Ni(II) complex. These were instead best described using ab initio methods. The origin of the zfs in Tp*Ni(κ(3)-BH(4)) is described and shows that the relatively small magnitude of D results from several spin-orbit coupling (SOC) interactions of large magnitude, but with opposite sign. Spin-spin coupling (SSC) is also shown to be significant, a point that is not always appreciated in transition metal complexes. Overall, a picture of bonding and electronic structure in open and closed shell late transition metal borohydrides is provided, which has implications for the use of these complexes in catalysis and

  1. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines ... In this contribution, the concept of template-based modeling is presented and its application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated into a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse ... are generated through the template in ICAS-MoT and translated into a model object. Once in ICAS-MoT, the model is numerically analyzed, solved and identified. A computer-aided modeling framework integrating systematic model derivation and development tools has been developed. It includes features for model...

  2. Challenges in Integrated Computational Structure - Material Modeling of High Strain-Rate Deformation and Failure in Heterogeneous Materials

    Science.gov (United States)

    2014-10-09

Only fragments of the report documentation page and workshop program survive in this record. The recoverable content includes the standard disclaimer that the views are those of the author(s) and should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation, together with mentions of a presentation by Bronkhorst of LANL followed by a 30-minute panel discussion and a second plenary session on Probabilistic Modeling & Uncertainty.

  3. MODEL IDENTIFICATION AND COMPUTER ALGEBRA.

    Science.gov (United States)

    Bollen, Kenneth A; Bauldry, Shawn

    2010-10-07

    Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods.
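The local-identification check can be sketched with a symbolic CAS in a few lines: write the model-implied moments, take their Jacobian with respect to the parameters, and inspect its symbolic rank. The one-factor, three-indicator model below is a textbook example chosen for illustration, not one of the paper's examples.

```python
import sympy as sp

# One-factor model with three indicators: x_i = lam_i*xi + e_i,
# Var(xi) = phi, Var(e_i) = t_i.
lam1, lam2, lam3, phi, t1, t2, t3 = sp.symbols('lam1 lam2 lam3 phi t1 t2 t3')

# Model-implied non-redundant covariance moments: 3 variances, 3 covariances.
moments = sp.Matrix([lam1**2 * phi + t1, lam2**2 * phi + t2,
                     lam3**2 * phi + t3, lam1 * lam2 * phi,
                     lam1 * lam3 * phi, lam2 * lam3 * phi])
params = sp.Matrix([lam1, lam2, lam3, phi, t1, t2, t3])

# Local identification requires the Jacobian to have full column rank.
J = moments.jacobian(params)
print(J.rank())   # 6 < 7 parameters: underidentified until the latent
                  # scale is fixed (e.g., phi = 1 or lam1 = 1)
```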

  4. Computational modeling in biomechanics

    CERN Document Server

    Mofrad, Mohammad

    2010-01-01

    This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamic and scale coupling methods..

  5. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  6. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...
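A compact representation of the kind described is easy to sketch: slide a window over the sequence and replace each dinucleotide with its scale value. The numbers below are placeholders rather than the published stacking energies; a strand-invariant version would assign each dinucleotide and its reverse complement the same value.

```python
# Illustrative dinucleotide scale (placeholder values, not published data).
SCALE = {'AA': -1.0, 'AT': -0.9, 'TA': -0.6, 'AC': -1.1,
         'CG': -1.4, 'GC': -1.5, 'GG': -1.2, 'CC': -1.2}

def profile(seq, scale, default=-1.0):
    """Map a DNA sequence to its dinucleotide structural profile."""
    seq = seq.upper()
    return [scale.get(seq[i:i + 2], default) for i in range(len(seq) - 1)]

print(profile("ACGATATT", SCALE))
```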

  7. Computational Studies of Structures and Dynamics of 1, 3-Dimethylimidazolim Salt Liquid and their Interfaces Using Polarizable Potential Models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Tsun-Mei; Dang, Liem X.

    2009-03-12

The structures, thermodynamics and dynamical properties of the bulk and air/liquid interfaces of three ionic liquids, 1,3-dimethylimidazolium ([dmim]+) paired with Cl-, Br-, and I-, are studied using molecular dynamics techniques. In bulk melts, the radial distribution functions reveal a significant long-range structural correlation in these ionic liquids. From the angular distribution analysis, the imidazolium rings are found to lie parallel to each other at short distances, consistent with the structures observed in the crystal state. The single-ion dynamics are studied via mean-square displacements, velocity and orientational correlation functions. The diffusion coefficients and reorientational times are found to be much smaller than those of H2O. We also observe that anion size plays an important role in the dynamics of ionic liquids. The computed density profiles of the ionic liquid/vapor interface exhibit oscillatory behavior, indicative of surface layering at the interface. Further analysis reveals that the [dmim]+ ions show a preferred orientation at the interface, with the ring parallel to the surface and the methyl group attached to the ring pointing into the vapor phase. The computed surface tensions indicate small differences between these ionic liquids and are in line with recent experimental measurements. The calculated potential drops of these ionic liquids are found to be small and negative. These results could imply that the cation dipoles are likely to orient in the plane that is parallel to the surface normal axis. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Chemical Sciences program. The Pacific Northwest National Laboratory is operated by Battelle for DOE.

  8. Computational modelling flow and transport

    NARCIS (Netherlands)

    Stelling, G.S.; Booij, N.

    1999-01-01

Lecture notes CT wa4340. Derivation of equations using balance principles; numerical treatment of ordinary differential equations; time-dependent partial differential equations; the structure of a computer model: DUFLO; usage of numerical models.

  9. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable aid in learning logical thinking but of less assistance when learning problem-solving skills. The paper is the third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  10. The Impact of Subjective Norm and Facilitating Conditions on Pre-Service Teachers' Attitude toward Computer Use: A Structural Equation Modeling of an Extended Technology Acceptance Model

    Science.gov (United States)

    Teo, Timothy

    2009-01-01

    This study examined pre-service teachers' self-report on their attitude toward computer use. Participants were 285 pre-service teachers at a teacher training institution in Singapore. They completed a survey questionnaire measuring their responses to five constructs which formed a research model using the Technology Acceptance Model (TAM) as a…

  11. Computation models of discourse

    Energy Technology Data Exchange (ETDEWEB)

    Brady, M.; Berwick, R.C.

    1983-01-01

    This book presents papers on artificial intelligence and natural language. Topics considered include recognizing intentions from natural language utterances, cooperative responses from a portable natural language database query system, natural language generation as a computational problem, focusing in the comprehension of definite anaphora, and factors in forming discourse-dependent descriptions.

  12. Computational strategies to address chromatin structure problems

    Science.gov (United States)

    Perišić, Ognjen; Schlick, Tamar

    2016-06-01

While the genetic information is contained in double helical DNA, gene expression is a complex multilevel process that involves various functional units, from nucleosomes to fully formed chromatin fibers accompanied by a host of various chromatin binding enzymes. The chromatin fiber is a polymer composed of histone protein complexes upon which DNA wraps, like yarn upon many spools. The nature of chromatin structure has been an open question since the beginning of modern molecular biology. Many experiments have shown that the chromatin fiber is a highly dynamic entity with pronounced structural diversity that includes properties of idealized zig-zag and solenoid models, as well as other motifs. This diversity can produce a high packing ratio and thus inhibit access to a majority of the wound DNA. Despite much research, chromatin’s dynamic structure has not yet been fully described. Long stretches of chromatin fibers exhibit puzzling dynamic behavior that requires interpretation in the light of gene expression patterns in various tissues and organisms. The properties of the chromatin fiber can be investigated with experimental techniques, like in vitro biochemistry, in vivo imaging, and high-throughput chromosome capture technology. Those techniques provide useful insights into the fiber’s structure and dynamics, but they are limited in resolution and scope, especially regarding compact fibers and chromosomes in the cellular milieu. Complementary but specialized modeling techniques are needed to handle large floppy polymers such as the chromatin fiber. In this review, we discuss current approaches in the chromatin structure field with an emphasis on modeling, such as molecular dynamics and coarse-grained computational approaches. Combinations of these computational techniques complement experiments and address many relevant biological problems, as we will illustrate with special focus on epigenetic modulation of chromatin structure.

  13. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

with them. As the required models may be complex and require multiple time and/or length scales, their development and application for product-process design is not trivial. Therefore, a systematic modeling framework can contribute by significantly reducing the time and resources needed for model development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines ... In this contribution, the concept of template-based modeling is presented and its application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated into a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...

  14. Structural models of zebrafish (Danio rerio NOD1 and NOD2 NACHT domains suggest differential ATP binding orientations: insights from computational modeling, docking and molecular dynamics simulations.

    Directory of Open Access Journals (Sweden)

    Jitendra Maharana

Nucleotide-binding oligomerization domain-containing proteins 1 and 2 (NOD1 and NOD2) are cytosolic pattern recognition receptors playing pivotal roles in innate immune signaling. NOD1 and NOD2 recognize the bacterial peptidoglycan derivatives iE-DAP and MDP, respectively, and undergo conformational alteration and ATP-dependent self-oligomerization of the NACHT domain followed by downstream signaling. The lack of structural information on the NACHT domain limits our understanding of the NOD-mediated signaling mechanism. Here, we predicted the structure of the NACHT domain of both NOD1 and NOD2 from the model organism zebrafish (Danio rerio) using computational methods. Our study highlighted the differential ATP binding modes in NOD1 and NOD2. In NOD1, the γ-phosphate of ATP faced toward the central nucleotide binding cavity, as in NLRC4, whereas in NOD2 the cavity was occupied by the adenine moiety. The conserved lysine at Walker A formed hydrogen bonds (H-bonds) and aspartic acid (Walker B) formed an electrostatic interaction with ATP. At Sensor 1, Arg328 of NOD1 exhibited an H-bond with ATP, whereas the corresponding Arg404 of NOD2 did not. The proline of the GxP motif (Pro386 of NOD1 and Pro464 of NOD2) interacted with the adenine moiety, and His511 at Sensor 2 of NOD1 interacted with the γ-phosphate group of ATP. In contrast, His579 of NOD2 interacted with the adenine moiety in a relatively inverted orientation. Our findings are well supplemented by the molecular interaction of ATP with NLRC4 and are consistent with mutagenesis data reported for humans, which indicates an evolutionarily shared NOD signaling mechanism. Together, this study provides novel insights into the ATP binding mechanism and highlights the differential ATP binding modes in zebrafish NOD1 and NOD2.

  15. Computational models of syntactic acquisition.

    Science.gov (United States)

    Yang, Charles

    2012-03-01

The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154

  16. A computational model of spatio-temporal cardiac intracellular calcium handling with realistic structure and spatial flux distribution from sarcoplasmic reticulum and t-tubule reconstructions.

    Directory of Open Access Journals (Sweden)

    Michael A Colman

    2017-08-01

Intracellular calcium cycling is a vital component of cardiac excitation-contraction coupling. The key structures responsible for controlling calcium dynamics are the cell membrane (comprising the surface sarcolemma and transverse tubules), the intracellular calcium store (the sarcoplasmic reticulum), and the co-localisation of these two structures to form dyads within which calcium-induced calcium-release occurs. The organisation of these structures tightly controls intracellular calcium dynamics. In this study, we present a computational model of intracellular calcium cycling in three dimensions (3-D), which incorporates high-resolution reconstructions of these key regulatory structures, attained through imaging of tissue taken from the sheep left ventricle using serial block face scanning electron microscopy. An approach was developed to model the sarcoplasmic reticulum structure at the whole-cell scale by reducing its full 3-D structure to a 3-D network of one-dimensional strands. The model reproduces intracellular calcium dynamics during control pacing and reveals the high-resolution 3-D spatial structure of calcium gradients and intracellular fluxes in both the cytoplasm and sarcoplasmic reticulum. We also demonstrate the capability of the model to reproduce potentially pro-arrhythmic dynamics under perturbed conditions pertaining to calcium-transient alternans and spontaneous release events. Comparison with idealised cell models emphasised the importance of structure in determining calcium gradients and controlling the spatial dynamics associated with calcium-transient alternans, wherein the probabilistic nature of dyad activation and recruitment was constrained. The model was further used to highlight the criticality of calcium spark propagation in relation to inter-dyad distances. The model presented provides a powerful tool for future investigation of structure-function relationships underlying physiological and pathophysiological
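Reducing the sarcoplasmic reticulum to a network of one-dimensional strands means transport within the network can be integrated with simple 1-D finite differences. A minimal sketch of calcium diffusion along a single strand after a local release event (grid spacing, time step and diffusion coefficient are illustrative, not the model's fitted values):

```python
import numpy as np

# Explicit finite-difference diffusion along one 1-D SR "strand".
n, dx, dt, D = 100, 0.1, 0.001, 0.3      # points, um, ms, um^2/ms
ca = np.zeros(n)
ca[n // 2] = 1.0                         # local release event at mid-strand
for _ in range(1000):                    # D*dt/dx**2 = 0.03 < 0.5, stable
    ca[1:-1] += D * dt / dx**2 * (ca[2:] - 2.0 * ca[1:-1] + ca[:-2])
print("peak concentration after 1 ms:", round(float(ca.max()), 4))
```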

  17. Computational modeling of epithelial tissues.

    Science.gov (United States)

    Smallwood, Rod

    2009-01-01

    There is an extensive literature on the computational modeling of epithelial tissues at all levels, from subcellular to whole tissue. This review concentrates on behavior at the individual cell to whole tissue level, and particularly on organizational aspects, and provides an indication of where information from other areas, such as the modeling of angiogenesis, is relevant. The skin, and the lining of all of the body cavities (lung, gut, cervix, bladder, etc.) are epithelial tissues, which in a topological sense form the boundary between the inside and outside of the body. They are thin sheets of cells (usually of the order of 0.5 mm thick) without extracellular matrix, have a relatively simple structure, and contain few types of cells. They have important barrier, secretory and transport functions, which are essential for the maintenance of life, so homeostasis and wound healing are important aspects of the behavior of epithelial tissues. Carcinomas originate in epithelial tissues. There are essentially two approaches to modeling tissues: to start at the level of the tissue (i.e., a length scale of the order of 1 mm) and develop generalized equations for behavior (a continuum approach); or to start at the level of the cell (i.e., a length scale of the order of 10 µm) and develop tissue behavior as an emergent property of cellular behavior (an individual-based approach). As will be seen, these are not mutually exclusive approaches, and they come in a variety of flavors.

  18. Sierra Toolkit computational mesh conceptual model.

    Energy Technology Data Exchange (ETDEWEB)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  19. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing and it is expected to develop further in the near future.

  20. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  1. Integrated Computational Model Development

    Science.gov (United States)

    2014-03-01

    … 68.5%, 9.6% and 21.9%, respectively. The alloy density and Vickers microhardness were ρ = 8.23 ± 0.01 g/cm3 and Hv = 5288 ± 1 MPa. … Techniques to mechanically test materials at smaller scales were developed to better inform the deformation models. … An in situ microscale tension testing technique was adapted to enable microscale fatigue testing on tensile dog-bone specimens.

  2. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  3. Component Breakout Computer Model

    Science.gov (United States)

    1987-04-29

    … "Weapon Systems: A Policy Analysis." The Rand Graduate Institute, November 1983. Boger, D. "Statistical Models for Estimating Overhead Costs." … Computational Modeling of Simulation Tests. G. Leigh, W. Chown, B. Harrison. Eric H. Wang Civil Engineering Research Facility, University of New Mexico. …

  4. Efficient Computational Model of Hysteresis

    Science.gov (United States)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
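
    The backlash construction described above maps naturally onto a few lines of code. The sketch below is a minimal reading of such a parallel play-operator model, not the authors' implementation; the deadband widths and gains are hypothetical placeholders that would in practice be computed from displacement-versus-voltage data.

        def backlash(x, w, y_prev):
            # Play operator: the output follows the input once it leaves a
            # deadband of width w centred on the previous output.
            return max(x - w / 2.0, min(x + w / 2.0, y_prev))

        def hysteresis_output(voltages, widths, gains):
            # Electrically parallel combination of backlash elements;
            # widths[0] == 0 makes the zeroth element purely linear.
            y = [0.0] * len(widths)
            outputs = []
            for v in voltages:
                y = [backlash(v, w, yi) for w, yi in zip(widths, y)]
                outputs.append(sum(g * yi for g, yi in zip(gains, y)))
            return outputs

        # Hypothetical widths/gains; a voltage ramp up and back down traces the loop.
        widths = [0.0, 1.0, 2.0]
        gains = [0.5, 0.3, 0.2]
        print(hysteresis_output([0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0], widths, gains))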

  5. A Computational Framework for Realistic Retina Modeling.

    Science.gov (United States)

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  6. A computational fluid-structure interaction model to predict the biomechanical properties of the artificial functionally graded aorta.

    Science.gov (United States)

    Khosravi, Arezoo; Bani, Milad Salimi; Bahreinizade, Hossein; Karimi, Alireza

    2016-12-01

    In the present study, the three layers of the ascending aorta have been simulated in time and space at various blood pressures. Two well-known commercial finite element (FE) software packages were used so as to provide a range of reliable numerical results independent of the software type. The radial displacement over time, as well as the peripheral stress and von Mises stress of the aorta, were calculated. The aorta model was validated using the differential quadrature method (DQM) solution and, then, in order to design functionally graded materials (FGMs) with different heterogeneous indexes for the artificial vessel, two different materials were employed. Fluid-structure interaction (FSI) simulation has been carried out on the FGM and on a natural vessel of the human body. The heterogeneous index defines the variation of the material properties along the length as a function of position. The blood pressure was considered to be a function of both time and location. Finally, the response characteristics of functionally graded biomaterial (FGBM) models with different values of the heterogeneous material parameter were determined and compared with the behaviour of a natural vessel. The results showed a very good agreement between the numerical findings for the FGM materials and those for the natural vessel. The findings of the present study may have implications not only for understanding the performance of different FGMs in bearing stress and deformation in comparison with natural human vessels, but also for providing information to help the biomaterials expert select a suitable material as an implant for the aorta.
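
    The abstract does not give the exact functional form of the heterogeneous index, but FGM studies commonly grade a material property along the structure with a power law in position. The sketch below illustrates only that convention; the property values and the exponent n are hypothetical.

        def fgm_property(x, length, p_inner, p_outer, n):
            # Power-law grading often used for FGMs: the property varies from
            # p_inner to p_outer along the length, with n acting as the
            # heterogeneous index (an assumption, not the paper's stated form).
            return p_inner + (p_outer - p_inner) * (x / length) ** n

        # e.g. a stiffness-like property graded along a unit-length vessel
        for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
            print(x, fgm_property(x, 1.0, 2.0e6, 6.0e6, 2.0))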

  7. A computational fluid–structure interaction model to predict the biomechanical properties of the artificial functionally graded aorta

    Science.gov (United States)

    Khosravi, Arezoo; Bani, Milad Salimi; Bahreinizade, Hossein; Karimi, Alireza

    2016-01-01

    In the present study, the three layers of the ascending aorta have been simulated in time and space at various blood pressures. Two well-known commercial finite element (FE) software packages were used so as to provide a range of reliable numerical results independent of the software type. The radial displacement over time, as well as the peripheral stress and von Mises stress of the aorta, were calculated. The aorta model was validated using the differential quadrature method (DQM) solution and, then, in order to design functionally graded materials (FGMs) with different heterogeneous indexes for the artificial vessel, two different materials were employed. Fluid–structure interaction (FSI) simulation has been carried out on the FGM and on a natural vessel of the human body. The heterogeneous index defines the variation of the material properties along the length as a function of position. The blood pressure was considered to be a function of both time and location. Finally, the response characteristics of functionally graded biomaterial (FGBM) models with different values of the heterogeneous material parameter were determined and compared with the behaviour of a natural vessel. The results showed a very good agreement between the numerical findings for the FGM materials and those for the natural vessel. The findings of the present study may have implications not only for understanding the performance of different FGMs in bearing stress and deformation in comparison with natural human vessels, but also for providing information to help the biomaterials expert select a suitable material as an implant for the aorta. PMID:27836981

  8. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of the numerical modeling of dynamic problems are summed up in this article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas-cleaning equipment, and modeling of biogas formation processes.

  9. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunity to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameters within a bounded environment, allowing for controllable experimentation that is not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines, and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window on the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  10. Improving Students' Understanding of Molecular Structure through Broad-Based Use of Computer Models in the Undergraduate Organic Chemistry Lecture

    Science.gov (United States)

    Springer, Michael T.

    2014-01-01

    Several articles suggest how to incorporate computer models into the organic chemistry laboratory, but relatively few papers discuss how to incorporate these models broadly into the organic chemistry lecture. Previous research has suggested that "manipulating" physical or computer models enhances student understanding; this study…

  12. Computational models of adult neurogenesis

    Science.gov (United States)

    Cecchi, Guillermo A.; Magnasco, Marcelo O.

    2005-10-01

    Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models where new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas like the olfactory bulb and the dentate gyrus.

  13. Computational Modeling of Simulation Tests.

    Science.gov (United States)

    1980-06-01

    … March 1979. 14. Kinney, G. F., Explosive Shocks in Air, Macmillan, p. 57, 1962. 15. Courant and Friedrichs, … Computational Modeling of Simulation Tests. G. Leigh, W. Chown, B. Harrison. Eric H. Wang Civil Engineering Research Facility, University of New Mexico, Albuquerque, June 1980.

  14. A combination of hand-held models and computer imaging programs helps students answer oral questions about molecular structure and function: a controlled investigation of student learning.

    Science.gov (United States)

    Harris, Michelle A; Peck, Ronald F; Colton, Shannon; Morris, Jennifer; Chaibub Neto, Elias; Kallio, Julie

    2009-01-01

    We conducted a controlled investigation to examine whether a combination of computer imagery and tactile tools helps introductory cell biology laboratory undergraduate students better learn about protein structure/function relationships as compared with computer imagery alone. In all five laboratory sections, students used the molecular imaging program, Protein Explorer (PE). In the three experimental sections, three-dimensional physical models were made available to the students, in addition to PE. Student learning was assessed via oral and written research summaries and videotaped interviews. Differences between the experimental and control group students were not found in our typical course assessments such as research papers, but rather were revealed during one-on-one interviews with students at the end of the semester. A subset of students in the experimental group produced superior answers to some higher-order interview questions as compared with students in the control group. During the interview, students in both groups preferred to use either the hand-held models alone or in combination with the PE imaging program. Students typically did not use any tools when answering knowledge (lower-level thinking) questions, but when challenged with higher-level thinking questions, students in both the control and experimental groups elected to use the models.

  15. Computational Chemistry Using Modern Electronic Structure Methods

    Science.gov (United States)

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be applied even to large molecules.

  16. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    into the other. It also indicates a generative creation that itself points to important issues of ontology with methodological implications for the design of computing. In this article these implications will be conceptualised as prosopopoietic modeling on the basis of Bernward Joerges' introduction of the classical rhetoric term of ’prosopopoeia’ into the debate on large technological systems. First, the paper introduces the paradoxical distinction/complicity by debating Gilbert Simondon’s notion of a ‘margin of indeterminacy’ vis-a-vis computing. Second, it debates the idea of prosopopoietic modeling, pointing to a principal role of the paradoxical distinction/complicity within the computational heritage in three cases: a. Prosopopoietic aspects of John von Neumann’s First Draft of a Report on the EDVAC from 1945. b. Herbert Simon’s notion of simulation in The Science of the Artificial from the 1970s. c...

  17. Material Characterization and Geometric Segmentation of a Composite Structure Using Microfocus X-Ray Computed Tomography Image-Based Finite Element Modeling

    Science.gov (United States)

    Abdul-Aziz, Ali; Roth, D. J.; Cotton, R.; Studor, George F.; Christiansen, Eric; Young, P. C.

    2011-01-01

    This study utilizes microfocus x-ray computed tomography (CT) slice sets to model and characterize the damage locations and sizes in thermal protection system materials that underwent impact testing. ScanIP/FE software is used to visualize and process the slice sets, followed by mesh generation on the segmented volumetric rendering. Then, the local stress fields around several of the damaged regions are calculated for realistic mission profiles that subject the sample to extreme temperature and other severe environmental conditions. The resulting stress fields are used to quantify damage severity and make an assessment as to whether damage that did not penetrate to the base material can still result in catastrophic failure of the structure. It is expected that this study will demonstrate that finite element modeling based on an accurate three-dimensional rendered model from a series of CT slices is an essential tool to quantify the internal macroscopic defects and damage of a complex system made of thermal protection material. Results obtained, showing details of the segmented images, the three-dimensional volume-rendered models, the finite element meshes generated, and the resulting thermomechanical stress state due to impact loading, are presented and discussed. Further, this study is conducted to exhibit certain high-caliber capabilities that the nondestructive evaluation (NDE) group at NASA Glenn Research Center can offer to assist in assessing the structural durability of such highly specialized materials, so that improvements in their performance and capacity to handle harsh operating conditions can be made.

  18. Topological structures in computer science

    Directory of Open Access Journals (Sweden)

    Efim Khalimsky

    1987-01-01

    Full Text Available Topologies of finite spaces and spaces with countably many points are investigated. It is proven, using the theory of ordered topological spaces, that any topology in connected ordered spaces with finitely many points, or in spaces similar to the set of all integers, is an interval-alternating topology. Integer and digital lines, arcs, and curves are considered. The topology of N-dimensional digital spaces is described. A digital analog of the intermediate value theorem is proven. The equivalence of connectedness and path-connectedness in digital and integer spaces is also proven. It is shown here how methods of continuous mathematics, for example topological methods, can be applied to objects that used to be investigated only by methods of discrete mathematics. The significance of the methods and ideas presented here for digital image and picture processing, robotic vision, computer tomography and systems sciences is well known.

  19. Structural mode significance using INCA. [Interactive Controls Analysis computer program]

    Science.gov (United States)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1990-01-01

    Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.

  1. A Dualistic Model To Describe Computer Architectures

    Science.gov (United States)

    Nitezki, Peter; Engel, Michael

    1985-07-01

    The Dualistic Model for Computer Architecture Description uses a hierarchy of abstraction levels to describe a computer in arbitrary steps of refinement from the top of the user interface to the bottom of the gate level. In our Dualistic Model the description of an architecture may be divided into two major parts called "Concept" and "Realization". The Concept of an architecture on each level of the hierarchy is an Abstract Data Type that describes the functionality of the computer, together with an implementation of that data type relative to the data type of the next lower level of abstraction. The Realization on each level comprises a language describing the means of user interaction with the machine, and a processor interpreting this language in terms of the language of the lower level. The surface of each hierarchical level, the data type and the language, expresses the behaviour of a machine at this level, whereas the implementation and the processor describe the structure of the algorithms and the system. In this model the Principle of Operation maps the object and computational structure of the Concept onto the structures of the Realization. Describing a system in terms of the Dualistic Model is therefore a process of refinement starting at a mere description of behaviour and ending at a description of structure. This model has proven to be a very valuable tool in exploiting the parallelism in a problem, and it is very transparent in discovering the points where parallelism is lost in a particular architecture. It has successfully been used in a project on a survey of Computer Architecture for Image Processing and Pattern Analysis in Germany.

  2. Computer Profiling Based Model for Investigation

    Directory of Open Access Journals (Sweden)

    Neeraj Choudhary

    2011-10-01

    Full Text Available Computer profiling is used for computer forensic analysis; this paper proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a computer system. The computer profiling object model can be implemented so as to support automated analysis to provide an investigator with the information needed to decide whether manual analysis is required.
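
    As a concrete illustration of an information model built from objects with attributes and inter-relationships, here is a hypothetical sketch; the class, attribute and relation names are invented for illustration and are not taken from the paper.

        from dataclasses import dataclass, field

        @dataclass
        class ProfiledObject:
            # One element of a profiling-style object model: a typed object
            # carrying attributes plus typed links to other objects.
            kind: str                                    # e.g. "user", "application"
            attributes: dict = field(default_factory=dict)
            related: list = field(default_factory=list)  # (relation, object) pairs

            def relate(self, relation, other):
                self.related.append((relation, other))

        user = ProfiledObject("user", {"name": "alice"})
        app = ProfiledObject("application", {"name": "browser"})
        user.relate("executes", app)
        print(user)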

  3. Novel computational methodologies for structural modeling of spacious ligand binding sites of G-protein-coupled receptors: development and application to human leukotriene B4 receptor.

    Science.gov (United States)

    Ishino, Yoko; Harada, Takanori

    2012-01-01

    This paper describes a novel method to predict the activated structures of G-protein-coupled receptors (GPCRs) with high accuracy, while aiming for the use of the predicted 3D structures in in silico virtual screening in the future. We propose a new method for modeling GPCR thermal fluctuations, where conformation changes of the proteins are modeled by combining fluctuations on multiple time scales. The core idea of the method is that a molecular dynamics simulation is used to calculate the average 3D coordinates of all atoms of a GPCR protein against heat fluctuation on the picosecond or nanosecond time scale, and then evolutionary computation, including receptor-ligand docking simulations, functions to determine the rotation angle of each helix of the GPCR protein as a movement on a longer time scale. The method was validated using the human leukotriene B4 receptor BLT1 as a sample GPCR. Our study demonstrated that the proposed method was able to derive the appropriate 3D structure of the active-state GPCR which docks with its agonists.

  4. Novel Computational Methodologies for Structural Modeling of Spacious Ligand Binding Sites of G-Protein-Coupled Receptors: Development and Application to Human Leukotriene B4 Receptor

    Directory of Open Access Journals (Sweden)

    Yoko Ishino

    2012-01-01

    Full Text Available This paper describes a novel method to predict the activated structures of G-protein-coupled receptors (GPCRs) with high accuracy, while aiming for the use of the predicted 3D structures in in silico virtual screening in the future. We propose a new method for modeling GPCR thermal fluctuations, where conformation changes of the proteins are modeled by combining fluctuations on multiple time scales. The core idea of the method is that a molecular dynamics simulation is used to calculate the average 3D coordinates of all atoms of a GPCR protein against heat fluctuation on the picosecond or nanosecond time scale, and then evolutionary computation, including receptor-ligand docking simulations, functions to determine the rotation angle of each helix of the GPCR protein as a movement on a longer time scale. The method was validated using the human leukotriene B4 receptor BLT1 as a sample GPCR. Our study demonstrated that the proposed method was able to derive the appropriate 3D structure of the active-state GPCR which docks with its agonists.

  5. Computation accuracy of flow conditions around a very large floating structure using a multi-layer model. Comparison with experimental results; Taso model ni yoru choogata futai mawari no ryukyo keisan seido ni tsuite. Jikken tono hikaku

    Energy Technology Data Exchange (ETDEWEB)

    Kyotsuka, Y. [Kyushu University, Fukuoka (Japan)]; Omori, H.; Nakagawa, H.; Kobayashi, M. [Mitsui Engineering and Shipbuilding Co. Ltd., Tokyo (Japan)]

    1996-04-10

    As one of the environmental problems in sea areas surrounding a very large floating structure (VLFS), change in flow conditions is important, being one of the factors dominating the prediction of subsequent diffusion and ecosystem behaviour. Although a multi-layer model is in wide use for the computation of flow conditions and diffusion in an inner bay, its applicability should be reexamined because it takes no account of VLFSs. In this study, flow velocity profiles around a barge were therefore measured in towing tests of the barge in shallow water and compared with computational results from a multi-layer model. The multi-layer model computed the flow velocity profiles by dividing the computational domain into the normal flow region and the region under the VLFS, and determined the pressures under the VLFS from a 2-D Poisson equation. A slip condition was used as the boundary condition at the bottom, considering the number of layers under the VLFS. A further numerical computation was conducted with a 2-D MAC method, in particular to compare the flow in the wake of the VLFS with the experimental one. Both computational results agreed well with the experiments. 3 refs., 9 figs., 1 tab.
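
    The abstract states that the pressures under the VLFS were determined from a 2-D Poisson equation. As a generic illustration of that step (not the authors' solver; the grid, source term and zero-Dirichlet boundary are placeholders), a finite-difference Jacobi iteration reads:

        import numpy as np

        def solve_poisson_2d(f, h, n_iter=5000):
            # Jacobi iteration for grad^2 p = f on a uniform grid with
            # spacing h and p = 0 on the boundary (placeholder condition).
            p = np.zeros_like(f)
            for _ in range(n_iter):
                p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                        p[1:-1, 2:] + p[1:-1, :-2] -
                                        h * h * f[1:-1, 1:-1])
            return p

        f = np.zeros((41, 41)); f[20, 20] = 1.0   # toy source term
        print(solve_poisson_2d(f, h=0.05)[20, 18:23])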

  6. Computational modeling of protein mutant stability: analysis and optimization of statistical potentials and structural features reveal insights into prediction model development

    Directory of Open Access Journals (Sweden)

    Abhinandan Madenhalli

    2007-08-01

    Full Text Available Background: Understanding and predicting protein stability upon point mutations has widespread importance in molecular biology. Several prediction models have been developed in the past with various algorithms. Statistical potentials are among the widely used algorithms for the prediction of changes in stability upon point mutations. Although these methods provide the flexibility and capability to develop an accurate and reliable prediction model, this can be achieved only by the right selection of structural factors and the optimization of their parameters for the statistical potentials. In this work, we selected five atom classification systems and compared their efficiency for the development of amino acid atom potentials. Additionally, torsion angle potentials were optimized to include the orientation of amino acids in such a way that altered backbone conformations in different secondary structural regions can be included in the prediction model. This study also elaborates the importance of classifying mutations according to their solvent accessibility and secondary structure specificity. The prediction efficiency was calculated individually for mutations in different secondary structural regions and compared. Results: The results show that, in addition to using an advanced atom description, stepwise regression and selection of atoms are necessary to avoid redundancy in the atom distribution and to improve the reliability of the prediction model validation. Compared to other atom classification models, the Melo-Feytmans model shows better prediction efficiency, giving a high correlation of 0.85 between experimental and theoretical ΔΔG, with 84.06% of the 1538 mutations correctly predicted. The theoretical ΔΔG values for the mutations in partially buried β-strands generated by the structural training dataset from PISCES gave a correlation of 0.84 without performing the Gaussian apodization of the

  7. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley. This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  8. Computational algebraic geometry of epidemic models

    Science.gov (United States)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational algebraic geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimensions and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the Groebner basis, the Hilbert dimension, and the Hilbert polynomial. It is hoped that the results obtained in this paper will prove important for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
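
    The workflow sketched here (equilibrium equations in, Groebner basis out) can be reproduced on a toy system. The sketch below uses SymPy rather than Maple, and a generic SIR endemic equilibrium rather than the paper's Schistosomiasis or Dengue models:

        from sympy import symbols, groebner

        # Endemic equilibrium of a normalised SIR model with vital dynamics:
        #   0 = mu - beta*S*I - mu*S,   0 = beta*S*I - (gamma + mu)*I
        S, I, beta, gamma, mu = symbols('S I beta gamma mu')
        eqs = [mu - beta*S*I - mu*S, beta*S*I - (gamma + mu)*I]

        # A lexicographic Groebner basis separates the disease-free branch
        # (I = 0) from the endemic branch, whose existence is governed by
        # R0 = beta / (gamma + mu).
        print(groebner(eqs, S, I, order='lex'))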

  9. Computer optimization techniques for NASA Langley's CSI evolutionary model's real-time control system. [Controls/Structure Interaction]

    Science.gov (United States)

    Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff

    1992-01-01

    The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.

  10. FORENSIC COMPUTING MODELS: TECHNICAL OVERVIEW

    Directory of Open Access Journals (Sweden)

    Gulshan Shrivastava

    2012-05-01

    Full Text Available In this paper, we introduce a technique of digital forensics for the reconstruction of events or evidence after a crime has been committed through any digital device. We draw a clear distinction between computer forensics and digital forensics and give a brief description of the classification of digital forensics. We also describe how the emergence of various digital forensic models helps digital forensic practitioners and examiners in doing digital forensics. Further, the merits and demerits of the models under consideration are discussed, along with a review of every major model.

  11. Pervasive Computing and Prosopopoietic Modelling

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2011-01-01

    that have spread vertiginously since Mark Weiser coined the term ‘pervasive’, e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. Whereas Weiser’s original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown’s (1997) terms, ‘invisible...... into the other. It also indicates a generative creation that itself points to important issues of ontology with methodological implications for the design of computing. In this article these implications will be conceptualised as prosopopoietic modeling on the basis of Bernward Joerges' introduction......, pointing to a principal role of the paradoxical distinction/complicity within the computational heritage in three cases: a. Prosopopoietic aspects of John von Neumann’s First Draft of a Report on the EDVAC from 1945. b. Herbert Simon’s notion of simulation in The Science of the Artificial from the 1970s. c...

  12. X-ray, Cryo-EM, and computationally predicted protein structures used in integrative modeling of HIV Env glycoprotein gp120 in complex with CD4 and 17b

    Directory of Open Access Journals (Sweden)

    Muhibur Rasheed

    2016-03-01

    Full Text Available We present the data used for an integrative approach to computational modeling of proteins with large variable domains, specifically applied in this context to model the HIV Env glycoprotein gp120 in its CD4- and 17b-bound state. The initial data involved the X-ray structure PDBID:1GC1 and the electron microscopy image EMD:5020. Other existing X-ray structures were used as controls to validate and hierarchically refine partial and complete computational models. A summary of the experiment protocol and data was published (Rasheed et al., 2015 [26]), along with a detailed analysis of the final model (PDBID:3J70) and its implications.

  13. Reflexive structures an introduction to computability theory

    CERN Document Server

    Sanchis, Luis E

    1988-01-01

    Reflexive Structures: An Introduction to Computability Theory is concerned with the foundations of the theory of recursive functions. The approach taken presents the fundamental structures in a fairly general setting, but avoiding the introduction of abstract axiomatic domains. Natural numbers and numerical functions are considered exclusively, which results in a concrete theory conceptually organized around Church's thesis. The book develops the important structures in recursive function theory: closure properties, reflexivity, enumeration, and hyperenumeration. Of particular interest is the treatment of recursion, which is considered from two different points of view: via the minimal fixed point theory of continuous transformations, and via the well known stack algorithm. Reflexive Structures is intended as an introduction to the general theory of computability. It can be used as a text or reference in senior undergraduate and first year graduate level classes in computer science or mathematics.

  14. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  15. Cosmic logic: a computational model

    Science.gov (United States)

    Vanchurin, Vitaly

    2016-02-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take a CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  16. Towards a Tool for Computer Supported Structuring of Products

    DEFF Research Database (Denmark)

    Hansen, Claus Thorp

    1997-01-01

    However, a product possesses not only a component structure but also various organ structures which are superimposed on the component structure. The organ structures carry behaviour and make the product suited for its life phases. Our long-term research goal is to develop a computer-based system that is capable of supporting synthesis activities in engineering design, and thereby also of supporting the handling of various organ structures. Such a system must contain a product model, in which it is possible to describe and manipulate both various organ structures and the component structure. In this paper we focus

  17. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
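
    For binary outputs with constraints up to second order, the maximum noise entropy response described here is a logistic function with a quadratic argument. The sketch below simply evaluates such a model; the parameter values are placeholders, not fits to the retinal or thalamic recordings:

        import numpy as np

        def second_order_logistic(s, a, b, J):
            # P(spike | stimulus s) = sigmoid(a + b.s + s^T J s): the
            # maximum-noise-entropy form for binary outputs with first-
            # and second-order moment constraints.
            z = a + b @ s + s @ J @ s
            return 1.0 / (1.0 + np.exp(-z))

        s = np.array([0.5, -1.2])               # stimulus in a 2-D relevant subspace
        a, b = -1.0, np.array([0.8, -0.4])      # placeholder parameters
        J = np.array([[0.3, 0.1], [0.1, -0.2]])
        print(second_order_logistic(s, a, b, J))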

  18. Computer model of tetrahedral amorphous diamond

    Science.gov (United States)

    Djordjević, B. R.; Thorpe, M. F.; Wooten, F.

    1995-08-01

    We computer-generate a model of amorphous diamond using the Wooten-Weaire method, with fourfold coordination everywhere. We investigate two models, one where four-membered rings are allowed and the other where four-membered rings are forbidden, each model consisting of 4096 atoms. Starting from the perfect diamond crystalline structure, we first randomize the structure by introducing disorder through random bond switches at a sufficiently high temperature. Subsequently, the temperature is reduced in stages, and the topological and geometrical relaxation of the structure takes place using the Keating potential. After a long annealing process, a random network of comparatively low energy is obtained. We calculate the pair distribution function, mean bond angle, rms angular deviation, rms bond length, rms bond-length deviation, and ring statistics for the final relaxed structures. We minimize the total strain energy by adjusting the density of the sample. We compare our results with similar computer-generated models for amorphous silicon, and with experimental measurement of the structure factor for (predominantly tetrahedral) amorphous carbon.
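
    The randomize-then-anneal procedure accepts or rejects proposed bond switches according to their energy cost at the current temperature. A schematic of that accept/reject step (the Keating-potential energy evaluation and the bond-switch bookkeeping are omitted; this is not the authors' code):

        import math, random

        def accept_switch(delta_e, temperature, rng=random):
            # Metropolis rule: downhill switches are always accepted,
            # uphill ones with Boltzmann probability exp(-dE/T).
            return delta_e <= 0 or rng.random() < math.exp(-delta_e / temperature)

        random.seed(1)
        print(accept_switch(0.4, temperature=2.0))   # uphill: sometimes accepted
        print(accept_switch(-0.1, temperature=0.2))  # downhill: always accepted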

  19. Computational structures technology at Grumman: Current practice/future needs

    Science.gov (United States)

    Pifko, Allan B.; Eidinoff, Harvey

    1992-05-01

    The current practice for the design analysis of new airframe structural systems is to construct a master finite element model of the vehicle in order to develop internal load distributions. The inputs to this model include the geometry, which is taken directly from CADAM and CATIA structural layouts, and aerodynamic-load and mass-distribution computer models. This master model is sufficiently detailed to define major load paths and to compute dynamic mode shapes and structural frequencies, but not detailed enough to define local stress gradients and notch stresses. The master model is then used to perform structural optimization studies that provide minimum weights for major structural members. The post-processed output from the master model (loads, stresses, and strains) is then used by structural analysts to perform detailed stress analysis of local regions in order to design local structure with all its required details. This local analysis consists of hand stress analysis and life prediction analysis with the assistance of manuals, design charts, computer stress and structural life analysis, and sometimes finite element or boundary element analysis. The resulting design is verified by fatigue tests.

  20. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  1. Computer Modelling of 3D Geological Surface

    CERN Document Server

    Kodge, B G

    2011-01-01

    Geological surveying presently uses methods and tools for the computer modeling of 3D structures of the geographical subsurface and for geotechnical characterization, as well as the application of geoinformation systems for the management and analysis of spatial data and their cartographic presentation. The objective of this paper is to present a 3D geological surface model of Latur district in the Maharashtra state of India. This study is undertaken through several processes, which are discussed in this paper, to generate and visualize the automated 3D geological surface model of the projected area.

  2. Computing the partition function for kinetically trapped RNA secondary structures.

    Directory of Open Access Journals (Sweden)

    William A Lorenz

    Full Text Available An RNA secondary structure is locally optimal if there is no lower energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in O(n^3) time and O(n^2) space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures; indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, and (3) the (modified) maximum expected accuracy structure, computed by taking into account the base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected
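
    Local optimality as defined here can be checked directly: a structure is locally optimal when no single base-pair addition or removal lowers its energy. The sketch below verifies that definition with toy stand-ins for the Turner model; energy and can_pair are placeholder callables, and a real check must also forbid crossing pairs:

        def is_locally_optimal(seq, pairs, energy, can_pair):
            e0 = energy(seq, pairs)
            for bp in pairs:                       # single removals
                if energy(seq, pairs - {bp}) < e0:
                    return False
            used = {i for bp in pairs for i in bp}
            for i in range(len(seq)):              # single additions
                for j in range(i + 4, len(seq)):   # hairpin loop of >= 3
                    if i in used or j in used or not can_pair(seq, pairs, i, j):
                        continue
                    if energy(seq, pairs | {(i, j)}) < e0:
                        return False
            return True

        toy_energy = lambda seq, pairs: -len(pairs)   # toy model: each pair stabilises
        wc = {('A','U'),('U','A'),('G','C'),('C','G'),('G','U'),('U','G')}
        toy_can_pair = lambda seq, pairs, i, j: (seq[i], seq[j]) in wc
        print(is_locally_optimal("GGGAAACCC", {(0, 8), (1, 7), (2, 6)},
                                 toy_energy, toy_can_pair))   # True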

  3. Development of a Computer Application to Simulate Porous Structures

    Directory of Open Access Journals (Sweden)

    S.C. Reis

    2002-09-01

    Full Text Available Geometric modeling is an important tool to evaluate structural parameters as well as to follow the application of stereological relationships. The obtention, visualization and analysis of volumetric images of the structure of materials, using computational geometric modeling, facilitates the determination of structural parameters of difficult experimental access, such as topological and morphological parameters. In this work, we developed a geometrical model, implemented in computer software, that simulates random pore structures. The number of nodes, the number of branches (connections between nodes) and the number of isolated parts are obtained. The connectivity (C) is also obtained from this application. Using a list of elements, nodes and branches generated by the software in AutoCAD® command line format, the obtained structure can be viewed and analyzed.
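
    The connectivity mentioned here follows from the reported counts via the standard network relation C = b - n + p (branches minus nodes plus isolated parts). A small sketch, assuming that relation (the paper does not quote its formula):

        def connectivity(nodes, branches):
            # C = b - n + p, with p the number of connected components,
            # found here with a simple union-find over the branch list.
            parent = {v: v for v in nodes}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for a, b in branches:
                parent[find(a)] = find(b)
            p = len({find(v) for v in nodes})
            return len(branches) - len(nodes) + p

        # Toy pore network: a square with one diagonal has two independent loops.
        print(connectivity([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))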

  4. Cosmic Logic: a Computational Model

    CERN Document Server

    Vanchurin, Vitaly

    2015-01-01

    We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take a CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation this result implies...

  5. Large vortex-like structure of dipole field in computer models of liquid water and dipole-bridge between biomolecules.

    Science.gov (United States)

    Higo, J; Sasai, M; Shirai, H; Nakamura, H; Kugimiya, T

    2001-05-22

    We propose a framework to describe the cooperative orientational motions of water molecules in liquid water and around solute molecules in water solutions. From molecular dynamics (MD) simulation a new quantity, the "site-dipole field", is defined as the averaged orientation of water molecules that pass through each spatial position. In the site-dipole field of bulk water we found large vortex-like structures of more than 10 Å in size. Such coherent patterns persist for more than 300 ps, although the orientational memory of individual molecules is quickly lost. A 1-ns MD simulation of systems consisting of two amino acids shows that the fluctuations of the site-dipole field of the solvent are pinned around the amino acids, resulting in a stable dipole-bridge between the side-chains of the amino acids. The dipole-bridge is significantly formed even for a side-chain separation of 14 Å, which corresponds to five layers of water. The way that the dipole-bridge forms depends sensitively on the side-chain orientations and thereby explains the specificity in the solvent-mediated interactions between biomolecules.
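
    The defining operation is easy to state computationally; here is a minimal NumPy sketch (toy random data, with an assumed unit box and 5x5x5 binning) of accumulating a site-dipole field as the per-cell average orientation of the waters that visit each cell:

      import numpy as np

      rng = np.random.default_rng(0)
      n_frames, n_waters, n_bins, box = 100, 50, 5, 1.0

      field = np.zeros((n_bins, n_bins, n_bins, 3))
      counts = np.zeros((n_bins, n_bins, n_bins))

      for _ in range(n_frames):
          pos = rng.uniform(0, box, size=(n_waters, 3))      # stand-in coordinates
          dip = rng.normal(size=(n_waters, 3))
          dip /= np.linalg.norm(dip, axis=1, keepdims=True)  # unit dipole vectors
          idx = np.minimum((pos / box * n_bins).astype(int), n_bins - 1)
          for (i, j, k), d in zip(idx, dip):
              field[i, j, k] += d
              counts[i, j, k] += 1

      visited = counts > 0
      field[visited] /= counts[visited][:, None]  # mean orientation per cell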

  6. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  7. Synthesis, structure determination, and spectroscopic/computational characterization of a series of Fe(II)-thiolate model complexes: implications for Fe-S bonding in superoxide reductases.

    Science.gov (United States)

    Fiedler, Adam T; Halfen, Heather L; Halfen, Jason A; Brunold, Thomas C

    2005-02-16

    A combined synthetic/spectroscopic/computational approach has been employed to prepare and characterize a series of Fe(II)-thiolate complexes that model the square-pyramidal [Fe(II)(N(His))(4)(S(Cys))] structure of the reduced active site of superoxide reductases (SORs), a class of enzymes that detoxify superoxide in air-sensitive organisms. The high-spin (S = 2) Fe(II) complexes [(Me(4)cyclam)Fe(SC(6)H(4)-p-OMe)]OTf (2) and [FeL]PF(6) (3) (where Me(4)cyclam = 1,4,8,11-tetramethylcyclam and L is the pentadentate monoanion of 1-thioethyl-4,8,11-trimethylcyclam) were synthesized and subjected to structural, magnetic, and electrochemical characterization. X-ray crystallographic studies confirm that 2 and 3 possess an N(4)S donor set similar to that found for the SOR active site and reveal molecular geometries intermediate between square pyramidal and trigonal bipyramidal for both complexes. Electronic absorption, magnetic circular dichroism (MCD), and variable-temperature variable-field MCD (VTVH-MCD) spectroscopies were utilized, in conjunction with density functional theory (DFT) and semiempirical INDO/S-CI calculations, to probe the ground and excited states of complexes 2 and 3, as well as the previously reported Fe(II) SOR model [(L(8)py(2))Fe(SC(6)H(4)-p-Me)]BF(4) (1) (where L(8)py(2) is a tetradentate pyridyl-appended diazacyclooctane macrocycle). These studies allow for a detailed interpretation of the S→Fe(II) charge transfer transitions observed in the absorption and MCD spectra of complexes 1-3 and provide significant insights into the nature of Fe(II)-S bonding in complexes with axial thiolate ligation. Of the three models investigated, complex 3 exhibits an absorption spectrum that is particularly similar to the one reported for the reduced SOR enzyme (SOR(red)), suggesting that this model accurately mimics key elements of the electronic structure of the enzyme active site; namely, highly covalent Fe-S pi- and sigma-interactions. These spectral...

  8. Computational challenges of structure-based approaches applied to HIV.

    Science.gov (United States)

    Forli, Stefano; Olson, Arthur J

    2015-01-01

    Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.

  9. Image processing and computing in structural biology

    NARCIS (Netherlands)

    Jiang, Linhua

    2009-01-01

    With the help of modern techniques of image processing and computing, image data obtained by electron cryo-microscopy of biomolecules can be reconstructed into three-dimensional biological models at sub-nanometer resolution. These models allow answering urgent problems in life science, for instance,

  10. Regularized Structural Equation Modeling.

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J; McArdle, John J

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
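
    The penalized estimation idea can be stated compactly. A sketch in LaTeX (with S the sample covariance, Sigma(theta) the model-implied covariance and p the number of observed variables; the paper's exact notation may differ):

      F_{\mathrm{RegSEM}}(\theta)
        = \underbrace{\log\lvert\Sigma(\theta)\rvert
            + \operatorname{tr}\!\bigl(S\,\Sigma(\theta)^{-1}\bigr)
            - \log\lvert S\rvert - p}_{F_{\mathrm{ML}}(\theta)}
          + \lambda\,P(\theta_{\mathrm{pen}}),
      \qquad
      P(\theta_{\mathrm{pen}}) =
        \begin{cases}
          \lVert\theta_{\mathrm{pen}}\rVert_1 & \text{(lasso)}\\[2pt]
          \lVert\theta_{\mathrm{pen}}\rVert_2^2 & \text{(ridge)}
        \end{cases}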

  11. Business model elements impacting cloud computing adoption

    DEFF Research Database (Denmark)

    Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek

    The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of the main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology adoption theories, such as Diffusion of Innovations, the Technology Acceptance Model, and the Unified Theory of Acceptance and Use of Technology. Further on, a research model for identification of Cloud Computing Adoption factors from a business model perspective is presented. The following business model building...

  13. Performance Driven Design and Design Information Exchange: Establishing a computational design methodology for parametric and performance-driven design of structures via topology optimization for rough structurally informed design models

    NARCIS (Netherlands)

    Mostafavi, S.; Morales Beltran, M.G.; Biloria, N.M.

    2013-01-01

    This paper presents a performance driven computational design methodology through introducing a case on parametric structural design. The paper describes the process of design technology development and frames a design methodology through which engineering, -in this case structural- aspects of

  14. Los Alamos Center for Computer Security formal computer security model

    Energy Technology Data Exchange (ETDEWEB)

    Dreicer, J.S.; Hunteman, W.J.; Markin, J.T.

    1989-01-01

    This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The need to test and verify DOE computer security policy implementation first motivated this effort. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present formal mathematical models for computer security. The fundamental objective of computer security is to prevent the unauthorized and unaccountable access to a system. The inherent vulnerabilities of computer systems result in various threats from unauthorized access. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The model is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell and LaPadula abstract sets of objects and subjects. 6 refs.

  15. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction.

  16. PSPP: a protein structure prediction pipeline for computing clusters.

    Directory of Open Access Journals (Sweden)

    Michael S Lee

    Full Text Available BACKGROUND: Protein structures are critical for understanding the mechanisms of biological systems and, subsequently, for drug and vaccine design. Unfortunately, protein sequence data exceed structural data by a factor of more than 200 to 1. This gap can be partially filled by using computational protein structure prediction. While structure prediction Web servers are a notable option, they often restrict the number of sequence queries and/or provide a limited set of prediction methodologies. Therefore, we present a standalone protein structure prediction software package suitable for high-throughput structural genomic applications that performs all three classes of prediction methodologies: comparative modeling, fold recognition, and ab initio. This software can be deployed on a user's own high-performance computing cluster. METHODOLOGY/PRINCIPAL FINDINGS: The pipeline consists of a Perl core that integrates more than 20 individual software packages and databases, most of which are freely available from other research laboratories. The query protein sequences are first divided into domains either by domain boundary recognition or Bayesian statistics. The structures of the individual domains are then predicted using template-based modeling or ab initio modeling. The predicted models are scored with a statistical potential and an all-atom force field. The top-scoring ab initio models are annotated by structural comparison against the Structural Classification of Proteins (SCOP) fold database. Furthermore, secondary structure, solvent accessibility, transmembrane helices, and structural disorder are predicted. The results are generated in text, tab-delimited, and hypertext markup language (HTML) formats. So far, the pipeline has been used to study viral and bacterial proteomes. CONCLUSIONS: The standalone pipeline that we introduce here, unlike protein structure prediction Web servers, allows users to devote their own computing assets to process a...
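
    The stages above suggest a simple driver structure. The following Python sketch mirrors that flow; every function here is an invented placeholder, not the actual PSPP code or API:

      def split_into_domains(seq):
          return [seq]  # stand-in for domain-boundary recognition / Bayesian split

      def predict_structure(domain):
          # stand-in for template-based or ab initio modeling plus scoring
          return {"model": domain, "score": -len(domain)}

      def annotate(model):
          model["fold"] = "unknown"  # stand-in for SCOP comparison, SS, disorder
          return model

      def run_pipeline(sequences):
          return [annotate(predict_structure(d))
                  for seq in sequences
                  for d in split_into_domains(seq)]

      print(run_pipeline(["MKVLAAGIV"]))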

  17. Structured grid generator on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Muramatsu, Kazuhiro; Murakami, Hiroyuki; Higashida, Akihiro; Yanagisawa, Ichiro

    1997-03-01

    A general purpose structured grid generator on parallel computers, which generates a large-scale structured grid efficiently, has been developed. The generator is applicable to Cartesian, cylindrical and BFC (Boundary-Fitted Curvilinear) coordinates. In the case of BFC grids, there are three adaptable topologies: L-type, O-type and multi-block type, the last of which enables any combination of L- and O-grids. Internal BFC grid points can be automatically generated and smoothed by either an algebraic supplemental method or a partial differential equation method. The partial differential equation solver is implemented on parallel computers because it consumes a large portion of the overall execution time; high-speed processing of large-scale grid generation can therefore be realized by use of a parallel computer. Generated grid data can be adapted to domain decomposition for parallel analysis. (author)
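
    The elliptic-PDE smoothing mentioned above can be illustrated in a few lines. A toy NumPy sketch (not the authors' code): interior points of a roughened structured grid relax toward a discrete Laplace equation while the boundary stays fixed:

      import numpy as np

      n = 17
      x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
      rng = np.random.default_rng(1)
      x[1:-1, 1:-1] += 0.02 * rng.normal(size=(n - 2, n - 2))  # roughen interior
      y[1:-1, 1:-1] += 0.02 * rng.normal(size=(n - 2, n - 2))

      for _ in range(500):   # Jacobi sweeps of the discrete Laplace equation:
          for g in (x, y):   # each interior node moves to the average of its
              g[1:-1, 1:-1] = 0.25 * (g[2:, 1:-1] + g[:-2, 1:-1]   # four neighbors
                                      + g[1:-1, 2:] + g[1:-1, :-2])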

  18. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India, during December 17-19, 2015. The book is enriched with innovations in broad areas of research like computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing design, analysis and modeling of the aforementioned key areas.

  19. PRODUCT STRUCTURE DIGITAL MODEL

    Directory of Open Access Journals (Sweden)

    V.M. Sineglazov

    2005-02-01

    Full Text Available Research results of the representation of product structure by means of the CADDS5 computer-aided design (CAD) system, the Optegra Product Data Management (PDM) system and the Windchill Product Life Cycle Management (PLM) system are examined in this work. An analysis of structure component development and its storage in various systems is carried out. Algorithms of structure transformation required for correct representation of the structure are considered. A management analysis of the electronic mockup presentation of the product structure is carried out for the Windchill system.

  20. Computer modeling of piezoresistive gauges

    Energy Technology Data Exchange (ETDEWEB)

    Nutt, G. L.; Hallquist, J. O.

    1981-08-07

    A computer model of a piezoresistive gauge subject to shock loading is developed. The time-dependent two-dimensional response of the gauge is calculated. The stress and strain components of the gauge are determined assuming elastic-plastic material properties. The model is compared with experiment for four cases: an ytterbium foil gauge in a PMMA medium subjected to a 0.5 GPa plane shock wave, where the gauge is presented to the shock with its flat surface both parallel and perpendicular to the front, and a similar pair of comparisons for a manganin foil subjected to a 2.7 GPa shock. The signals are also compared with a calibration equation derived with the gauge and medium properties accounted for, but with the assumption that the gauge is in stress equilibrium with the shocked medium.

  2. Robust and portable capacity computing method for many finite element analyses of a high-fidelity crustal structure model aimed for coseismic slip estimation

    Science.gov (United States)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hirahara, Kazuro; Hyodo, Mamoru; Hori, Takane; Hori, Muneo

    2016-09-01

    Computation of many Green's functions (GFs) in finite element (FE) analyses of crustal deformation is an essential technique in inverse analyses of coseismic slip estimations. In particular, analysis based on a high-resolution FE model (high-fidelity model) is expected to contribute to the construction of a community standard FE model and benchmark solution. Here, we propose a naive but robust and portable capacity computing method to compute many GFs using a high-fidelity model, assuming that various types of PC clusters are used. The method is based on the master-worker model, implemented using the Message Passing Interface (MPI), to perform robust and efficient input/output operations. The method was applied to numerical experiments of coseismic slip estimation in the Tohoku region of Japan; comparison of the estimated results with those generated using lower-fidelity models revealed the benefits of using a high-fidelity FE model in coseismic slip distribution estimation. Additionally, the proposed method computes several hundred GFs more robustly and efficiently than methods without the master-worker model and MPI.
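
    The master-worker structure is conventional and easy to sketch with mpi4py (the Green's-function solve is replaced by a placeholder; assumes at least as many tasks as workers; run with e.g. mpiexec -n 4):

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      def compute_gf(task):
          return task * task  # placeholder for one finite element analysis

      if rank == 0:                           # master: dispatch tasks on demand
          tasks, results = list(range(100)), []
          status = MPI.Status()
          for worker in range(1, size):       # prime every worker once
              comm.send(tasks.pop(), dest=worker)
          busy = size - 1
          while busy:
              results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
              if tasks:                       # idle worker gets the next task...
                  comm.send(tasks.pop(), dest=status.Get_source())
              else:                           # ...or a shutdown signal
                  comm.send(None, dest=status.Get_source())
                  busy -= 1
      else:                                   # worker: compute until told to stop
          while (task := comm.recv(source=0)) is not None:
              comm.send(compute_gf(task), dest=0)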

  3. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method...
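
    One plausible (purely hypothetical) encoding of such data structures, in Python:

      from dataclasses import dataclass, field
      from enum import Enum

      class ConnectionType(Enum):  # predetermined connection types (assumed names)
          STUD = "stud"
          TUBE = "tube"

      @dataclass
      class ConnectionElement:
          position: tuple          # location on the element, e.g. (x, y, z)
          ctype: ConnectionType

      @dataclass
      class ConstructionElement:
          name: str
          connections: list = field(default_factory=list)

      brick = ConstructionElement("2x2-brick", [
          ConnectionElement((0, 0, 1), ConnectionType.STUD),
          ConnectionElement((1, 0, 1), ConnectionType.STUD),
      ])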

  4. Elucidating Drug-Enzyme Interactions and Their Structural Basis for Improving the Affinity and Potency of Isoniazid and Its Derivatives Based on Computer Modeling Approaches

    Directory of Open Access Journals (Sweden)

    Auradee Punkvang

    2010-04-01

    Full Text Available The enoyl-ACP reductase enzyme (InhA) from M. tuberculosis is recognized as the primary target of isoniazid (INH), a first-line antibiotic for tuberculosis treatment. To identify the specific interactions of the INH-NAD adduct and its derivative adducts in the InhA binding pocket, molecular docking calculations and quantum chemical calculations were performed on a set of INH derivative adducts. Reliable binding modes of INH derivative adducts in the InhA pocket were established using the Autodock 3.05 program, which shows a good ability to reproduce the X-ray bound conformation with rmsd of less than 1.0 Å. The interaction energies of the INH-NAD adduct and its derivative adducts with individual amino acids in the InhA binding pocket were computed based on quantum chemical calculations at the MP2/6-31G(d) level. The molecular docking and quantum chemical calculation results reveal that hydrogen bond interactions are the main interactions for adduct binding. To clearly delineate the linear relationship between structure and activity of these adducts, CoMFA and CoMSIA models were set up based on the molecular docking alignment. The resulting CoMFA and CoMSIA models show good statistical quality, with r²cv values of 0.67 and 0.74, respectively. Structural requirements of isoniazid derivatives that can be incorporated into the isoniazid framework to improve the activity have been identified through CoMFA and CoMSIA steric and electrostatic contour maps. The integrated results from structure-based and ligand-based design approaches and quantum chemical calculations provide useful structural information facilitating the design of new and potentially more effective antitubercular agents, as follows: the R substituents of isoniazid derivatives should contain a large plane, and both sides of the plane should contain an electropositive group. Moreover, the steric and electrostatic fields of the 4-pyridyl ring are optimal for greater potency.

  5. Dynamic term structure models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Meldrum, Andrew

    This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...

  6. Attraction-Based Computation of Hyperbolic Lagrangian Coherent Structures

    CERN Document Server

    Karrasch, Daniel; Haller, George

    2014-01-01

    Recent advances enable the simultaneous computation of both attracting and repelling families of Lagrangian Coherent Structures (LCS) at the same initial or final time of interest. Obtaining LCS positions at intermediate times, however, has been problematic, because either the repelling or the attracting family is unstable with respect to numerical advection in a given time direction. Here we develop a new approach to compute arbitrary positions of hyperbolic LCS in a numerically robust fashion. Our approach only involves the advection of attracting material surfaces, thereby providing accurate LCS tracking at low computational cost. We illustrate the advantages of this approach on a simple model and on a turbulent velocity data set.

  7. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    This work focuses on the development of a computer-aided modeling framework. The framework is a knowledge-based system that is built on a generic modeling language and structured on workflows for different modeling tasks. The overall objective is to support model developers and users to generate and test models systematically, efficiently and reliably. In this way, development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem specific models is presented. The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene...

  8. Towards the Epidemiological Modeling of Computer Viruses

    Directory of Open Access Journals (Sweden)

    Xiaofan Yang

    2012-01-01

    Full Text Available Epidemic dynamics of computer viruses is an emerging discipline aiming to understand the way that computer viruses spread on networks. This paper is intended to establish a series of rational epidemic models of computer viruses. First, a close inspection of some common characteristics shared by all typical computer viruses clearly reveals the flaws of previous models. Then, a generic epidemic model of viruses, which is named the SLBS model, is proposed. Finally, diverse generalizations of the SLBS model are suggested. We believe this work opens a door to the full understanding of how computer viruses prevail on the Internet.
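
    A plausible compartmental sketch of an SLBS-type model (Susceptible -> Latent -> Breaking-out -> Susceptible); the incidence terms and parameter values below are assumptions for illustration, not necessarily the paper's exact equations:

      from scipy.integrate import solve_ivp

      beta, alpha, gamma_l, gamma_b = 0.5, 0.3, 0.1, 0.2

      def slbs(t, y):
          S, L, B = y
          infect = beta * S * (L + B)  # latent and breaking-out hosts both infect
          return [-infect + gamma_l * L + gamma_b * B,  # cured machines -> susceptible
                  infect - (alpha + gamma_l) * L,       # latent: break out or get cured
                  alpha * L - gamma_b * B]

      sol = solve_ivp(slbs, (0.0, 100.0), [0.99, 0.01, 0.0])
      print(sol.y[:, -1])  # long-run fractions of S, L, B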

  9. Quantum Computation Beyond the Circuit Model

    OpenAIRE

    Jordan, Stephen P.

    2008-01-01

    The quantum circuit model is the most widely used model of quantum computation. It provides both a framework for formulating quantum algorithms and an architecture for the physical construction of quantum computers. However, several other models of quantum computation exist which provide useful alternative frameworks for both discovering new quantum algorithms and devising new physical implementations of quantum computers. In this thesis, I first present necessary background material for a ge...

  10. Linguistics Computation, Automatic Model Generation, and Intensions

    CERN Document Server

    Nourani, C F

    1994-01-01

    Techniques are presented for defining models of computational linguistics theories. The methods of generalized diagrams that were developed by this author for modeling artificial intelligence planning and reasoning are shown to be applicable to models of computation of linguistics theories. It is shown that for extensional and intensional interpretations, models can be generated automatically which assign meaning to computations of linguistics theories for natural languages. Keywords: Computational Linguistics, Reasoning Models, G-diagrams For Models, Dynamic Model Implementation, Linguistics and Logics For Artificial Intelligence

  11. Model dynamics for quantum computing

    Science.gov (United States)

    Tabakin, Frank

    2017-08-01

    A model master equation suitable for quantum computing dynamics is presented. In an ideal quantum computer (QC), a system of qubits evolves in time unitarily and, by virtue of their entanglement, interfere quantum mechanically to solve otherwise intractable problems. In the real situation, a QC is subject to decoherence and attenuation effects due to interaction with an environment and with possible short-term random disturbances and gate deficiencies. The stability of a QC under such attacks is a key issue for the development of realistic devices. We assume that the influence of the environment can be incorporated by a master equation that includes unitary evolution with gates, supplemented by a Lindblad term. Lindblad operators of various types are explored; namely, steady, pulsed, gate friction, and measurement operators. In the master equation, we use the Lindblad term to describe short time intrusions by random Lindblad pulses. The phenomenological master equation is then extended to include a nonlinear Beretta term that describes the evolution of a closed system with increasing entropy. An external Bath environment is stipulated by a fixed temperature in two different ways. Here we explore the case of a simple one-qubit system in preparation for generalization to multi-qubit, qutrit and hybrid qubit-qutrit systems. This model master equation can be used to test the stability of memory and the efficacy of quantum gates. The properties of such hybrid master equations are explored, with emphasis on the role of thermal equilibrium and entropy constraints. Several significant properties of time-dependent qubit evolution are revealed by this simple study.
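
    The Lindblad form referred to above is standard; written out (with H the Hamiltonian, including the gates, and L_k the Lindblad operators):

      \frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho]
        \;+\; \sum_k \Bigl( L_k\,\rho\,L_k^{\dagger}
        \;-\; \tfrac{1}{2}\,\bigl\{ L_k^{\dagger}L_k ,\, \rho \bigr\} \Bigr)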

  12. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    Full Text Available The development of selective agonists of the δ-opioid receptor, as well as the model of interaction of ligands with this receptor, is the subject of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, from recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to determine the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations received for the tested models of DOR were found between the efficacy (erel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from the docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER, and good values from PROCHECK (92.6% of residues in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with the erel of a series of enkephalin analogues, calculated from in vitro experiments. Thus, this investigation suggests a reliable model of DOR. The newly generated model of DOR could be used for further in silico experiments, enabling faster and more accurate design of selective and effective ligands for the δ-opioid receptor.

  13. A computer generator for randomly layered structures

    Institute of Scientific and Technical Information of China (English)

    YU Jia-shun; HE Zhen-hua

    2004-01-01

    An algorithm is introduced in this paper for the synthesis of randomly layered earth models. Under the assumption that the layering and the physical parameters for a layer are random variables with truncated normal distributions, random numbers sampled from the distributions can be used to construct the layered structure and determine physical parameters for the layers. To demonstrate its application, random models were synthesized for the modelling of seismic ground motion amplification of a site with uncertainties in its model parameters.
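
    The sampling step is straightforward; a sketch with SciPy's truncated normal (all means, spreads and bounds below are assumed toy values, not the paper's):

      import numpy as np
      from scipy.stats import truncnorm

      def tnorm(mean, sd, lo, hi, size, rng):
          a, b = (lo - mean) / sd, (hi - mean) / sd  # standardized truncation bounds
          return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=rng)

      rng = np.random.default_rng(42)
      n_layers = 8
      thickness = tnorm(20.0, 5.0, 5.0, 40.0, n_layers, rng)  # layer thickness, m
      vs = tnorm(300.0, 80.0, 150.0, 600.0, n_layers, rng)    # shear-wave velocity, m/s
      model = list(zip(thickness, vs))  # one random layered earth model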

  14. Locomotion without a brain: physical reservoir computing in tensegrity structures.

    Science.gov (United States)

    Caluwaerts, K; D'Haene, M; Verstraeten, D; Schrauwen, B

    2013-01-01

    Embodiment has led to a revolution in robotics by not thinking of the robot body and its controller as two separate units, but taking into account the interaction of the body with its environment. By investigating the effect of the body on the overall control computation, it has been suggested that the body is effectively performing computations, leading to the term morphological computation. Recent work has linked this to the field of reservoir computing, allowing one to endow morphologies with a theory of universal computation. In this work, we study a family of highly dynamic body structures, called tensegrity structures, controlled by one of the simplest kinds of "brains." These structures can be used to model biomechanical systems at different scales. By analyzing this extreme instantiation of compliant structures, we demonstrate the existence of a spectrum of choices of how to implement control in the body-brain composite. We show that tensegrity structures can maintain complex gaits with linear feedback control and that external feedback can intrinsically be integrated in the control loop. The various linear learning rules we consider differ in biological plausibility, and no specific assumptions are made on how to implement the feedback in a physical system.
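
    The "simplest kind of brain" amounts to a linear readout trained on the body's state. A generic reservoir-computing sketch (a random toy reservoir stands in for the tensegrity strut/cable states; all sizes and signals are assumed):

      import numpy as np

      rng = np.random.default_rng(3)
      n_state, T = 50, 1000
      W = rng.normal(scale=1.0 / np.sqrt(n_state), size=(n_state, n_state))

      drive = np.sin(np.linspace(0.0, 40.0, T))        # periodic drive, like a gait cycle
      states = np.zeros((T, n_state))
      x = np.zeros(n_state)
      for t in range(T):
          x = np.tanh(W @ x + drive[t])                # placeholder body dynamics
          states[t] = x

      target = np.sin(np.linspace(0.0, 40.0, T) + 0.5) # desired actuator signal
      reg = 1e-4                                       # ridge regularization
      w_out = np.linalg.solve(states.T @ states + reg * np.eye(n_state),
                              states.T @ target)       # linear readout weights
      print(np.abs(states @ w_out - target).mean())    # training error of the readout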

  15. A Structural Model of Algebra Achievement: Computational Fluency and Spatial Visualisation as Mediators of the Effect of Working Memory on Algebra Achievement

    Science.gov (United States)

    Tolar, Tammy Daun; Lederberg, Amy R.; Fletcher, Jack M.

    2009-01-01

    The goal of this study was to develop and evaluate a structural model of the relations among cognitive abilities and arithmetic skills and college students' algebra achievement. The model of algebra achievement was compared to a model of performance on the Scholastic Assessment in Mathematics (SAT-M) to determine whether the pattern of relations…

  16. COMMON PHASES OF COMPUTER FORENSICS INVESTIGATION MODELS

    Directory of Open Access Journals (Sweden)

    Yunus Yusoff

    2011-06-01

    Full Text Available The increase in criminal activities that use digital information as their means or target warrants a structured manner of dealing with them. Since 1984, when a formalized process was first introduced, a great number of new and improved computer forensic investigation processes have been developed. In this paper, we reviewed a few selected investigation processes that have been produced throughout the years and then identified the commonly shared processes. Hopefully, the identification of the commonly shared processes will make it easier for new users to understand the processes and will also serve as the basic underlying concept for the development of new sets of processes. Based on the commonly shared processes, we proposed a generic computer forensics investigation model, known as GCFIM.

  17. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  18. Modeling of soil-water-structure interaction

    DEFF Research Database (Denmark)

    Tang, Tian

    ... to dynamic ocean waves. The goal of this research project is to develop numerical soil models for computing realistic seabed response in the interacting offshore environment, where ocean waves, seabed and offshore structure highly interact with each other. The seabed soil models developed are based ... as the developed nonlinear soil displacements and stresses under monotonic and cyclic loading. With the FVM nonlinear coupled soil models as a basis, multiphysics modeling of wave-seabed-structure interaction is carried out. The computations are done in an open-source code environment, OpenFOAM, where FVM models of Computational Fluid Dynamics (CFD) and structural mechanics are available. The interaction in the system is modeled in a one-way manner: first, detailed free-surface CFD calculations are executed to obtain a realistic wave field around a given structure. Then the dynamic structural response, due to the motions...

  19. Disciplines, models, and computers: the path to computational quantum chemistry.

    Science.gov (United States)

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  20. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  1. A new computational structure for real-time dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Izaguirre, A. (New Jersey Inst. of Tech., Newark (United States)); Hashimoto, Minoru (Univ. of Electrocommunications, Tokyo (Japan))

    1992-08-01

    The authors present an efficient structure for the computation of robot dynamics in real time. The fundamental characteristic of this structure is the division of the computation into a high-priority synchronous task and low-priority background tasks, possibly sharing the resources of a conventional computing unit based on commercial microprocessors. The background tasks compute the inertial and gravitational coefficients as well as the forces due to the velocities of the joints. In each control sample period, the high-priority synchronous task computes the product of the inertial coefficients by the accelerations of the joints and performs the summation of the torques due to the velocities and gravitational forces. Kircanski et al. (1986) have shown that the bandwidth of the variation of joint angles and of their velocities is an order of magnitude less than the variation of joint accelerations. This result agrees with the experiments the authors have carried out using a PUMA 260 robot. Two main strategies contribute to reduce the computational burden associated with the evaluation of the dynamic equations. The first involves the use of efficient algorithms for the evaluation of the equations. The second is aimed at reducing the number of dynamic parameters by identifying beforehand the linear dependencies among these parameters, as well as carrying out a significance analysis of the parameters' contribution to the final joint torques. The actual code used to evaluate this dynamic model is entirely computer generated from experimental data, requiring no other manual intervention than performing a campaign of measurements.
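
    The division of labor described above can be caricatured in a few lines (placeholder dynamics, not the identified PUMA 260 model): the background task refreshes the slowly varying terms, and the synchronous task forms tau = M(q)·qdd + c(q, qd) + g(q) each sample:

      import numpy as np

      class SplitDynamics:
          def __init__(self, n):
              self.M = np.eye(n)    # inertial coefficients, refreshed at low rate
              self.c = np.zeros(n)  # velocity-dependent torques
              self.g = np.zeros(n)  # gravitational torques

          def background_update(self, q, qd):
              # expensive terms, recomputed at a fraction of the servo rate
              # because q and qd vary far more slowly than qdd
              self.M = np.eye(len(q)) + 0.1 * np.outer(np.cos(q), np.cos(q))
              self.c = 0.05 * qd * np.abs(qd)
              self.g = 9.81 * 0.2 * np.sin(q)

          def servo_torque(self, qdd):
              # cheap synchronous task: one matrix-vector product plus additions
              return self.M @ qdd + self.c + self.g

      dyn = SplitDynamics(6)
      dyn.background_update(np.zeros(6), np.zeros(6))
      print(dyn.servo_torque(np.full(6, 0.5)))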

  2. Computational Modelling in Cancer: Methods and Applications

    Directory of Open Access Journals (Sweden)

    Konstantina Kourou

    2015-01-01

    Full Text Available Computational modelling of diseases is an emerging field, proven valuable for the diagnosis, prognosis and treatment of disease. Cancer is one of the diseases where computational modelling provides enormous advancements, allowing medical professionals to perform in silico experiments and gain insights prior to any in vivo procedure. In this paper, we review the most recent computational models that have been proposed for cancer. Well-known databases used for computational modelling experiments, as well as the various markup language representations, are discussed. In addition, recent state-of-the-art research studies related to tumour growth and angiogenesis modelling are presented.

  3. Computer Generated Cardiac Model For Nuclear Medicine

    Science.gov (United States)

    Hills, John F.; Miller, Tom R.

    1981-07-01

    A computer generated mathematical model of a thallium-201 myocardial image is described which is based on realistic geometric and physiological assumptions. The left ventricle is represented by an ellipsoid truncated by aortic and mitral valve planes. Initially, an image of a motionless left ventricle is calculated with the location, size, and relative activity of perfusion defects selected by the designer. The calculation includes corrections for photon attenuation by overlying structures and the relative distribution of activity within the tissues. Motion of the ventricular walls is simulated either by a weighted sum of images at different stages in the cardiac cycle or by a blurring function whose width varies with position. Camera and collimator blurring are estimated by the MTF of the system measured at a representative depth in a phantom. Statistical noise is added using a Poisson random number generator. The usefulness of this model is due to two factors: the a priori characterization of location and extent of perfusion defects and the strong visual similarity of the images to actual clinical studies. These properties should permit systematic evaluation of image processing algorithms using this model. The principles employed in developing this cardiac image model can readily be applied to the simulation of other nuclear medicine studies and to other medical imaging modalities including computed tomography, ultrasound, and digital radiography.
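
    A toy sketch of those simulation steps (geometry, counts and blur width are assumed values, not the authors' parameters):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      n = 64
      z, y, x = np.ogrid[-1:1:n*1j, -1:1:n*1j, -1:1:n*1j]
      outer = x**2 / 0.60**2 + y**2 / 0.80**2 + z**2 / 0.80**2 <= 1.0
      inner = x**2 / 0.35**2 + y**2 / 0.55**2 + z**2 / 0.55**2 <= 1.0
      wall = outer & ~inner                      # ellipsoidal ventricular wall
      activity = wall.astype(float)
      activity[wall & (x > 0.45)] = 0.3          # hypothetical perfusion defect

      projection = activity.sum(axis=0)          # idealized planar view (no attenuation)
      blurred = gaussian_filter(projection, 2.0) # camera + collimator blur (assumed MTF)
      image = np.random.default_rng(0).poisson(blurred * 50)  # Poisson counting noise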

  4. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  5. Computer-aided design of tension structures

    OpenAIRE

    Ong, C.F.

    1992-01-01

    This thesis consists of three parts. Part I (chapters 1-4) gives a review and description of the basis for the numerical modelling of tension structures. The discussion in Part I leads to the conclusion of a need for an interactive design procedure for tension structures which is the subject under consideration in Part II (chapters 5-7). In the design of tension structures, an area which requires special attention is the dynamic response often initiated by the action of a natural wind. In Par...

  6. Model of computation for Fourier optical processors

    Science.gov (United States)

    Naughton, Thomas J.

    2000-05-01

    We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors.
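
    A numerical analogue of the filtering/correlation algorithms such an instruction language expresses is the 4f-style matched filter, where the "lens" is a Fourier transform and the filter acts in the transform plane (toy data):

      import numpy as np

      rng = np.random.default_rng(0)
      scene = rng.random((64, 64))
      target = np.zeros((64, 64))
      target[28:36, 28:36] = scene[28:36, 28:36]   # pattern to detect

      # matched filter: multiply the scene spectrum by the conjugate target spectrum
      corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(target)))
      peak = np.unravel_index(np.abs(corr).argmax(), corr.shape)
      print(peak)   # the correlation peak marks the (cyclic) target location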

  8. Computational Tools for RF Structure Design

    CERN Document Server

    Jensen, E

    2004-01-01

    The Finite Difference Method and the Finite Element Method are the two principally employed numerical methods in modern RF field simulation programs. The basic ideas behind these methods are explained, with regard to available simulation programs. We then go through a list of characteristic parameters of RF structures, explaining how they can be calculated using these tools. With the help of these parameters, we introduce the frequency-domain and the time-domain calculations, leading to impedances and wake-fields, respectively. Subsequently, we present some readily available computer programs, which are in use for RF structure design, stressing their distinctive features and limitations. One final example benchmarks the precision of different codes for calculating the eigenfrequency and Q of a simple cavity resonator.

  9. Structural model integrity

    Science.gov (United States)

    Wallerstein, D. V.; Lahey, R. S.; Haggenmacher, G. W.

    1977-01-01

    Many of the practical aspects and problems of ensuring the integrity of a structural model are discussed, as well as the steps which have been taken in the NASTRAN system to assure that these checks can be routinely performed. Model integrity, as used here, applies not only to the structural model but also to the loads applied to the model. Emphasis is also placed on the fact that, when dealing with substructure analysis, all of the checking procedures discussed should be applied at the lowest level of substructure prior to any coupling.

  11. A computational model for feature binding

    Institute of Scientific and Technical Information of China (English)

    SHI ZhiWei; SHI ZhongZhi; LIU Xi; SHI ZhiPing

    2008-01-01

    The "Binding Problem" is an important problem across many disciplines, including psychology, neuroscience, computational modeling, and even philosophy. In this work, we proposed a novel computational model, Bayesian Linking Field Model, for feature binding in visual perception, by combining the idea of noisy neuron model, Bayesian method, Linking Field Network and competitive mechanism.Simulation Experiments demonstrated that our model perfectly fulfilled the task of feature binding in visual perception and provided us some enlightening idea for future research.

  12. Computational quantum chemistry and adaptive ligand modeling in mechanistic QSAR.

    Science.gov (United States)

    De Benedetti, Pier G; Fanelli, Francesca

    2010-10-01

    Drugs are adaptive molecules. They realize this peculiarity by generating different ensembles of prototropic forms and conformers that depend on the environment. Among the impressive amount of available computational drug discovery technologies, quantitative structure-activity relationship approaches that rely on computational quantum chemistry descriptors are the most appropriate to model adaptive drugs. Indeed, computational quantum chemistry descriptors are able to account for the variation of the intramolecular interactions of the training compounds, which reflect their adaptive intermolecular interaction propensities. This enables the development of causative, interpretive and reasonably predictive quantitative structure-activity relationship models, and, hence, sound chemical information finalized to drug design and discovery.

  13. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.

  14. Computational modeling of Li-ion batteries

    Science.gov (United States)

    Grazioli, D.; Magri, M.; Salvadori, A.

    2016-08-01

    This review focuses on energy storage materials modeling, with particular emphasis on Li-ion batteries. Theoretical and computational analyses not only provide a better understanding of the intimate behavior of actual batteries under operational and extreme conditions, but they may tailor new materials and shape new architectures in a complementary way to experimental approaches. Modeling can therefore play a very valuable role in the design and lifetime prediction of energy storage materials and devices. Batteries are inherently multi-scale, in space and time. The macro-structural characteristic lengths (the thickness of a single cell, for instance) are order of magnitudes larger than the particles that form the microstructure of the porous electrodes, which in turn are scale-separated from interface layers at which atomistic intercalations occur. Multi-physics modeling concepts, methodologies, and simulations at different scales, as well as scale transition strategies proposed in the recent literature are here revised. Finally, computational challenges toward the next generation of Li-ion batteries are discussed.

  16. A computational model of the human hand 93-ERI-053

    Energy Technology Data Exchange (ETDEWEB)

    Hollerbach, K.; Axelrod, T.

    1996-03-01

    The objectives of the Computational Hand Modeling project were to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopaedic modeling.

  17. Images as drivers of progress in cardiac computational modelling.

    Science.gov (United States)

    Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A; Bishop, Martin J; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2014-08-01

    Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved.

  18. Structural Equation Model Trees

    Science.gov (United States)

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  19. Computational protein design quantifies structural constraints on amino acid covariation.

    Directory of Open Access Journals (Sweden)

    Noah Ollikainen

    Full Text Available Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or from phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations.

  20. Two Classes of Models of Granular Computing

    Institute of Scientific and Technical Information of China (English)

    Daowu Pei

    2006-01-01

    This paper reviews a class of important models of granular computing that are induced by equivalence relations, by general binary relations, or by neighborhood systems, and proposes a class of models of granular computing induced by coverings of the given universe.
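
    The relation-induced case is easy to make concrete. The sketch below (my own textbook-style illustration, not code from the paper) partitions a finite universe into granules via an equivalence relation and computes rough-set lower and upper approximations of a target set.

```python
# Granules induced by an equivalence relation x ~ y iff key(x) == key(y),
# with rough-set lower/upper approximations of a target set.
def granules(universe, key):
    out = {}
    for x in universe:
        out.setdefault(key(x), set()).add(x)
    return list(out.values())

def approximations(blocks, target):
    lower = set().union(*(b for b in blocks if b <= target))   # blocks inside target
    upper = set().union(*(b for b in blocks if b & target))    # blocks touching target
    return lower, upper

U = set(range(10))
blocks = granules(U, key=lambda x: x % 3)        # equivalence: same remainder mod 3
lo, up = approximations(blocks, target={0, 1, 3, 6, 9})
print(lo)   # {0, 3, 6, 9}: the only block fully inside the target
print(up)   # adds {1, 4, 7}, the whole granule that 1 belongs to
```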

  1. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  3. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  4. Element-Based Computational Model

    Directory of Open Access Journals (Sweden)

    Conrad Mueller

    2012-02-01

    Full Text Available A variation on the data-flow model is proposed for developing parallel architectures. While the proposed model is data driven, it differs significantly from the classical data-flow model: it has an evaluation cycle of processing elements (encapsulated data) that is similar to the instruction cycle of the von Neumann model. The elements contain the information required to process them, and the model is inherently parallel. An emulation of the model has been implemented. The objective of this paper is to motivate support for taking the research further. Using matrix multiplication as a case study, the element/data-flow based model is compared with the instruction-based model. This is done using complexity analysis followed by empirical testing to verify this analysis. The positive results are given as motivation for the research to be taken to the next stage, that is, implementing the model using FPGAs.

  5. Cascade recursion models of computing the temperatures of underground layers

    Institute of Scientific and Technical Information of China (English)

    HAN; Liqun; BI; Siwen; SONG; Shixin

    2006-01-01

    An RBF neural network was used to construct computational models of the underground temperatures of different layers, using ground-surface parameters and the temperatures of various underground layers. Because series recursion models also enable researchers to use above-ground surface parameters to compute the temperatures of different underground layers, this method provides a new way of using thermal infrared remote sensing to monitor suture zones across large crustal blocks and to study thermal anomalies in geologic structures.
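
    A hedged sketch of the basic regression idea only, not the paper's network: a Gaussian RBF model with least-squares output weights mapping surface parameters to a layer temperature. The data, centres and kernel width are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))            # synthetic ground-surface parameters
y = 10 + 5 * X[:, 0] - 2 * X[:, 1] * X[:, 2]    # synthetic layer temperature

centres = X[rng.choice(len(X), 20, replace=False)]   # RBF centres drawn from the data
sigma = 0.3                                          # kernel width (assumed)

def phi(X, centres, sigma):
    """Gaussian RBF design matrix."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

w, *_ = np.linalg.lstsq(phi(X, centres, sigma), y, rcond=None)  # output weights
pred = phi(X[:5], centres, sigma) @ w
print(np.c_[pred, y[:5]])    # predictions vs. synthetic truth
```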

  6. Logic and algebraic structures in quantum computing

    CERN Document Server

    Eskandarian, Ali; Harizanov, Valentina S

    2016-01-01

    Arising from a special session held at the 2010 North American Annual Meeting of the Association for Symbolic Logic, this volume is an international cross-disciplinary collaboration with contributions from leading experts exploring connections across their respective fields. Themes range from philosophical examination of the foundations of physics and quantum logic, to exploitations of the methods and structures of operator theory, category theory, and knot theory in an effort to gain insight into the fundamental questions in quantum theory and logic. The book will appeal to researchers and students working in related fields, including logicians, mathematicians, computer scientists, and physicists. A brief introduction provides essential background on quantum mechanics and category theory, which, together with a thematic selection of articles, may also serve as the basic material for a graduate course or seminar.

  7. Statistics, Computation, and Modeling in Cosmology

    Science.gov (United States)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground- and space-based missions are designed to not only detect, but map out with increasing precision, details of the universe from its infancy to the present day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group, involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems, including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large-scale structure formation is also included in the working group activities. In addition, the working group is focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations, specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and

  8. Computational Fluid Dynamics Simulation of Multiphase Flow in Structured Packings

    Directory of Open Access Journals (Sweden)

    Saeed Shojaee

    2012-01-01

    Full Text Available A volume-of-fluid multiphase flow model was used to investigate the effective area and the created liquid film in structured packings. The computational results revealed that the gas and liquid flow rates play significant roles in the effective interfacial area of the packing. In particular, the effective area increases as the flow rates of both phases increase. Numerical results were compared with the Brunazzi and SRP models, and a good agreement between them was found. Attention was given to the process of liquid film formation in both two-dimensional (2D) and three-dimensional (3D) models. The current study revealed that computational fluid dynamics (CFD) can be used as an effective tool to provide information on the details of gas and liquid flows in complex packing geometries.

  9. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

    The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time significant advancements have taken place in the requirements for safety assessment as well as in computer hardware development. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/ whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the

  10. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first
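
    One plausible encoding of such a model is sketched below; the class names, connection types, compatibility table and tolerance are hypothetical illustrations of the idea, not the method's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Connector:
    position: tuple    # (x, y, z) in the object's frame
    ctype: str         # predetermined connection type, e.g. "stud" or "tube"

@dataclass
class Element:
    name: str
    connectors: list = field(default_factory=list)

COMPATIBLE = {("stud", "tube"), ("tube", "stud")}   # assumed connectivity table

def find_connections(a, b, tol=0.1):
    """Pairs of connectors of a and b that are close enough and type-compatible."""
    pairs = []
    for ca in a.connectors:
        for cb in b.connectors:
            dist = sum((p - q) ** 2 for p, q in zip(ca.position, cb.position)) ** 0.5
            if dist <= tol and (ca.ctype, cb.ctype) in COMPATIBLE:
                pairs.append((ca, cb))
    return pairs

brick = Element("brick", [Connector((0, 0, 1.0), "stud")])
plate = Element("plate", [Connector((0, 0, 1.05), "tube")])
print(len(find_connections(brick, plate)))   # 1: the two connectors mate
```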

  11. Computational modeling of lipoprotein metabolism

    NARCIS (Netherlands)

    Schalkwijk, Daniël Bernardus van

    2013-01-01

    This PhD thesis contains the following chapters. The first part, containing chapters 2 and 3, mainly concerns model development. Chapter 2 describes the development of a mathematical modeling framework within which different diagnostic models based on lipoprotein profiles can be developed, and a first

  12. Structure and modeling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Novikov, E.A. [Univ. of California, San Diego, La Jolla, CA (United States)

    1995-12-31

    The "vortex strings" scale $l_s \sim L\,\mathrm{Re}^{-3/10}$ ($L$: external scale, $\mathrm{Re}$: Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES).

  13. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films and by more complicated and representative alloys. In addition, the modeling incrementally addresses the inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models based on the numerical solution of the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major-loop effects of single versus polycrystalline bulk iron and the effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
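
    For reference, the Landau-Lifshitz-Gilbert equation named above has the standard textbook form below; the notation is the conventional one, not taken from the report.

```latex
% Landau-Lifshitz-Gilbert equation (standard form):
% m = unit magnetization, gamma = gyromagnetic ratio,
% alpha = Gilbert damping, H_eff = effective field
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma \, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha \, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}
```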

  14. Computer Aided Modelling – Opportunities and Challenges

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter considers the opportunities that are present in developing, extending and applying aspects of computer-aided modelling principles and practice. What are the best tasks to be done by modellers, and what needs the application of CAPE tools? How do we efficiently develop model-based solutions?

  15. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  16. A Multi-Agent Immunology Model for Security Computer

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a computer immunology model for computer security whose main components are defined using the Multi-Agent idea. It introduces the principles of the natural immune system and discusses the idea and characteristics of Multi-Agent systems. It gives a system model and describes the structure and function of each agent. The communication method between agents is also described.

  17. Computer modeling of road bridge for simulation moving load

    Directory of Open Access Journals (Sweden)

    Miličić Ilija M.

    2016-01-01

    Full Text Available This paper presents the computational modelling of a single-span road truss bridge with the roadway on the upper chord. The calculation models were treated as planar and spatial girders made up of 1D finite elements, using the CAA applications Tower and Bridge Designer 2016 (2nd Edition). The computer simulations compare the effects of a moving load according to the recommendations of two standards, SRPS and AASHTO. The bridge structure was therefore modelled in Bridge Designer 2016 (2nd Edition) identically to the model built in the Tower environment. As important information for the selection of a computer application, we point out that Bridge Designer 2016 (2nd Edition) cannot treat the moving-load model required by the national standard V600.

  18. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  19. Visual and Computational Modelling of Minority Games

    OpenAIRE

    Robertas Damaševičius; Darius Ašeriškis

    2017-01-01

    The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides the embedded executable modelling framework to evaluate...

  20. Vectorial Preisach-type model designed for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Stancu, Alexandru [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania)]. E-mail: alstancu@uaic.ro; Stoleriu, Laurentiu [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania); Andrei, Petru [Electrical and Computer Engineering, Florida State University, Tallahassee, FL (United States); Electrical and Computer Engineering, Florida A and M University, Tallahassee, FL (United States)

    2007-09-15

    Most phenomenological models of hysteresis are scalar, while all magnetization processes are vectorial. The vector models, whether phenomenological or micromagnetic (physical), are time-consuming and sometimes difficult to implement. In this paper, we introduce a new vector Preisach-type model that uses micromagnetic results to simulate the magnetic response of a system of several tens of thousands of pseudo-particles. The model has a modular structure that allows easy implementation for parallel computing.
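
    The hysteron-ensemble idea that Preisach-type models build on fits in a few lines. The sketch below is only the scalar baseline with made-up threshold statistics, not the paper's vectorial, micromagnetics-informed model; note that each hysteron updates independently, which is what makes such models natural candidates for parallel computing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
up = rng.normal(0.5, 0.3, n)                    # switch-up thresholds (assumed)
down = up - np.abs(rng.normal(0.4, 0.2, n))     # switch-down thresholds, below 'up'
state = -np.ones(n)                             # all hysterons start "down"

def apply_field(h):
    """Update every hysteron for applied field h; return mean magnetization."""
    state[h >= up] = 1.0
    state[h <= down] = -1.0
    return state.mean()

for h in [0.0, 1.5, 0.0, -1.5, 0.0]:            # field sweep traces a hysteresis loop
    print(f"H = {h:+.1f}  M = {apply_field(h):+.3f}")
```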

  1. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define a taxonomy of aspects around conservation, constraints and constitutive relations. Aspects of the ICAS-MoT toolbox are given to illustrate the functionality of a computer aided modelling tool, which incorporates an interface to MS Excel.

  2. Computational aspects of sensitivity calculations in linear transient structural analysis

    Science.gov (United States)

    Greene, W. H.; Haftka, R. T.

    1991-01-01

    The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear structural transient response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.
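
    A toy static analogue conveys the semianalytical idea (this is my example, not one of the paper's three problems): differentiating the equilibrium equations lets the already-factorized stiffness matrix be reused, instead of re-solving the full system at a perturbed design.

```python
import numpy as np

def K(p):
    """Toy parameter-dependent stiffness matrix."""
    return np.array([[2.0 + p, -1.0], [-1.0, 2.0]])

f = np.array([1.0, 0.0])
p, h = 1.0, 1e-6

u = np.linalg.solve(K(p), f)
# overall finite difference: re-solve the whole system at the perturbed design
du_fd = (np.linalg.solve(K(p + h), f) - u) / h
# semianalytical: from K u = f, du/dp = K^{-1} (df/dp - dK/dp u); here df/dp = 0
dK = (K(p + h) - K(p)) / h
du_semi = np.linalg.solve(K(p), -dK @ u)
print(du_fd, du_semi)   # the two sensitivities agree closely
```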

  4. Computer Modelling of Chromosome Territories

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    1999-01-01

    textabstractDespite the successful linear sequencing of the human genome its three-dimensional structure is widely unknown. However, the regulation of genes - their transcription and replication - has been shown to be closely connected to the three-dimensional organization of the genome and the cell

  7. Computational and Modeling Strategies for Cell Motility

    Science.gov (United States)

    Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory

    A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback

  8. Computational models for analyzing lipoprotein profiles

    NARCIS (Netherlands)

    Graaf, A.A. de; Schalkwijk, D.B. van

    2011-01-01

    At present, several measurement technologies are available for generating highly detailed concentration-size profiles of lipoproteins, offering increased diagnostic potential. Computational models are useful in aiding the interpretation of these complex datasets and making the data more accessible f

  9. Informing mechanistic toxicology with computational molecular models.

    Science.gov (United States)

    Goldsmith, Michael R; Peterson, Shane D; Chang, Daniel T; Transue, Thomas R; Tornero-Velez, Rogelio; Tan, Yu-Mei; Dary, Curtis C

    2012-01-01

    Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular-level information that augments, enriches, and complements in vitro and in vivo efforts. From a molecular biophysical ansatz, we describe how 3D molecular modeling methods used to numerically evaluate the classical pair-wise potential at the chemical/biological interface can inform mechanism of action and the dose-response paradigm of modern toxicology. With an emphasis on molecular docking, 3D-QSAR and pharmacophore/toxicophore approaches, we demonstrate how these methods can be integrated with chemoinformatic and toxicogenomic efforts into a tiered computational toxicology workflow. We describe generalized protocols in which 3D computational molecular modeling is used to enhance our ability to predict and model the most relevant toxicokinetic, metabolic, and molecular toxicological endpoints, thereby accelerating the computational toxicology-driven basis of modern risk assessment while providing a starting point for rational sustainable molecular design.

  10. Computational fluid dynamics modeling in yarn engineering

    CSIR Research Space (South Africa)

    Patanaik, A

    2011-07-01

    Full Text Available This chapter deals with the application of computational fluid dynamics (CFD) modeling in reducing yarn hairiness during the ring spinning process and thereby “engineering” yarn with desired properties. Hairiness significantly affects the appearance...

  11. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  12. A new epidemic model of computer viruses

    Science.gov (United States)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-06-01

    This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincaré-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to inhibit virus prevalence.
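
    For orientation, the classic SIS baseline that such virus models generalize integrates in a few lines. The sketch below is not the paper's system, which adds removable media and an open population; the rates are hypothetical.

```python
# SIS compartment model: dS/dt = gamma*I - beta*S*I, dI/dt = beta*S*I - gamma*I.
beta, gamma = 0.5, 0.2    # infection and cure rates (hypothetical)
S, I = 0.99, 0.01         # initial susceptible / infected fractions
dt = 0.01
for _ in range(5000):
    new_inf = beta * S * I
    cured = gamma * I
    S += dt * (cured - new_inf)
    I += dt * (new_inf - cured)
print(f"endemic infected fraction: {I:.3f} (theory: {1 - gamma / beta:.3f})")
```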

  13. Computer Model Locates Environmental Hazards

    Science.gov (United States)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.

  14. A mechanism for the cortical computation of hierarchical linguistic structure.

    Science.gov (United States)

    Martin, Andrea E; Doumas, Leonidas A A

    2017-03-01

    Biological systems often detect species-specific signals in the environment. In humans, speech and language are species-specific signals of fundamental biological importance. To detect the linguistic signal, human brains must form hierarchical representations from a sequence of perceptual inputs distributed in time. What mechanism underlies this ability? One hypothesis is that the brain repurposed an available neurobiological mechanism when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism. Under such an account, a single mechanism must have the capacity to perform multiple, functionally related computations, e.g., detect the linguistic signal and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a computational model of analogy, built for an entirely different purpose-learning relational reasoning-processes sentences, represents their meaning, and, crucially, exhibits oscillatory activation patterns resembling cortical signals elicited by the same stimuli. Such redundancy in the cortical and machine signals is indicative of formal and mechanistic alignment between representational structure building and "cortical" oscillations. By inductive inference, this synergy suggests that the cortical signal reflects structure generation, just as the machine signal does. A single mechanism-using time to encode information across a layered network-generates the kind of (de)compositional representational hierarchy that is crucial for human language and offers a mechanistic linking hypothesis between linguistic representation and cortical computation.

  16. Parallel algorithms and architectures for computational structural mechanics

    Science.gov (United States)

    Patrick, Merrell; Ma, Shing; Mahajan, Umesh

    1989-01-01

    The determination of the fundamental (lowest) natural vibration frequencies and associated mode shapes is a key step used to uncover and correct potential failures or problem areas in most complex structures. However, the computation time taken by finite element codes to evaluate these natural frequencies is significant, often the most computationally intensive part of structural analysis calculations. There is a continuing need to reduce this computation time. This study addresses this need by developing methods for parallel computation.

  17. Parallel computing in atmospheric chemistry models

    Energy Technology Data Exchange (ETDEWEB)

    Rotman, D. [Lawrence Livermore National Lab., CA (United States). Atmospheric Sciences Div.

    1996-02-01

    Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.

  18. Proceedings Fifth Workshop on Developments in Computational Models--Computational Models From Nature

    CERN Document Server

    Cooper, S Barry; 10.4204/EPTCS.9

    2009-01-01

    The special theme of DCM 2009, co-located with ICALP 2009, concerned Computational Models From Nature, with a particular emphasis on computational models derived from physics and biology. The intention was to bring together different approaches - in a community with a strong foundational background as proffered by the ICALP attendees - to create inspirational cross-boundary exchanges, and to lead to innovative further research. Specifically DCM 2009 sought contributions in quantum computation and information, probabilistic models, chemical, biological and bio-inspired ones, including spatial models, growth models and models of self-assembly. Contributions putting to the test logical or algorithmic aspects of computing (e.g., continuous computing with dynamical systems, or solid state computing models) were also very much welcomed.

  19. A Comparative Study of Multi-material Data Structures for Computational Physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Garimella, Rao Veerabhadra [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-31

    The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or a few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic models of performance through a small test program of the representative cases.
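
    A compressed, cell-centric layout of the kind such comparisons consider might look as follows; the field names and sample values are of my choosing, not the report's actual structures.

```python
# Packed cell->material storage (CSR-like): pure cells cost one entry,
# mixed cells a few, instead of a dense ncells x nmats table.
ncells = 6
cell_start = [0, 1, 2, 4, 5, 6, 8]       # prefix offsets into the packed arrays
mat_ids    = [0, 0, 0, 2, 1, 0, 1, 2]    # material id per (cell, material) entry
vol_frac   = [1.0, 1.0, 0.7, 0.3, 1.0, 1.0, 0.6, 0.4]

def materials_in_cell(c):
    s, e = cell_start[c], cell_start[c + 1]
    return list(zip(mat_ids[s:e], vol_frac[s:e]))

for c in range(ncells):
    print(c, materials_in_cell(c))
```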

  20. Computer-Aided Design of RNA Origami Structures.

    Science.gov (United States)

    Sparvath, Steffen L; Geary, Cody W; Andersen, Ebbe S

    2017-01-01

    RNA nanostructures can be used as scaffolds to organize, combine, and control molecular functionalities, with great potential for applications in nanomedicine and synthetic biology. The single-stranded RNA origami method allows RNA nanostructures to be folded as they are transcribed by the RNA polymerase. RNA origami structures provide a stable framework that can be decorated with functional RNA elements such as riboswitches, ribozymes, interaction sites, and aptamers for binding small molecules or protein targets. The rich library of RNA structural and functional elements combined with the possibility to attach proteins through aptamer-based binding creates virtually limitless possibilities for constructing advanced RNA-based nanodevices.In this chapter we provide a detailed protocol for the single-stranded RNA origami design method using a simple 2-helix tall structure as an example. The first step involves 3D modeling of a double-crossover between two RNA double helices, followed by decoration with tertiary motifs. The second step deals with the construction of a 2D blueprint describing the secondary structure and sequence constraints that serves as the input for computer programs. In the third step, computer programs are used to design RNA sequences that are compatible with the structure, and the resulting outputs are evaluated and converted into DNA sequences to order.

  1. Computer Modeling of Direct Metal Laser Sintering

    Science.gov (United States)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.

  2. Computational Modeling of Culture's Consequences

    NARCIS (Netherlands)

    Hofstede, G.J.; Jonker, C.M.; Verwaart, T.

    2010-01-01

    This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture, resulti

  3. Computational aspects of premixing modelling

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, D.F. [Sydney Univ., NSW (Australia). Dept. of Chemical Engineering; Witt, P.J.

    1998-01-01

    In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations, which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests, a MAGICO test and to investigate various sensitivities of the solution. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)

  4. Visual and Computational Modelling of Minority Games

    Directory of Open Access Journals (Sweden)

    Robertas Damaševičius

    2017-02-01

    Full Text Available The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides the embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.

  5. Computational structural mechanics methods research using an evolving framework

    Science.gov (United States)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  7. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large-scale computer simulations are presented, focusing on universality of the ac response in the extreme disorder limit. Finally, some important unsolved problems relating to hopping models for ac conduction are listed.
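
    A minimal kinetic Monte Carlo sketch of the symmetric hopping picture (my illustration with arbitrary parameters, not the paper's simulation code): a particle on a ring jumps over quenched random barriers with Arrhenius rates; lowering the temperature approaches the extreme disorder limit.

```python
import numpy as np

rng = np.random.default_rng(2)
L, T = 100, 0.3                        # ring size and temperature, k_B = 1 (assumed)
barriers = rng.uniform(0.0, 1.0, L)    # barrier between site i and i+1
rate = np.exp(-barriers / T)           # symmetric Arrhenius jump rates

pos, t = 0, 0.0
for _ in range(100_000):
    r_right = rate[pos % L]
    r_left = rate[(pos - 1) % L]
    total = r_right + r_left
    t += rng.exponential(1.0 / total)                     # KMC waiting time
    pos += 1 if rng.random() < r_right / total else -1    # choose jump direction
print(f"squared displacement {pos**2} after t = {t:.1f}")
```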

  8. Structure of the dimeric N-glycosylated form of fungal β-N-acetylhexosaminidase revealed by computer modeling, vibrational spectroscopy, and biochemical studies

    Directory of Open Access Journals (Sweden)

    Sklenář Jan

    2007-05-01

    Full Text Available Abstract. Background: Fungal β-N-acetylhexosaminidases catalyze the hydrolysis of chitobiose into its constituent monosaccharides. These enzymes are physiologically important during the life cycle of the fungus for the formation of septa, germ tubes and fruit-bodies. Crystal structures are known for two monomeric bacterial enzymes and the dimeric human lysosomal β-N-acetylhexosaminidase. The fungal β-N-acetylhexosaminidases are robust enzymes commonly used in chemoenzymatic syntheses of oligosaccharides. The enzyme from Aspergillus oryzae was purified and its sequence was determined. Results: The complete primary structure of the fungal β-N-acetylhexosaminidase from Aspergillus oryzae CCF1066 was used to construct molecular models of the catalytic subunit of the enzyme, the enzyme dimer, and the N-glycosylated dimer. Experimental data were obtained from infrared and Raman spectroscopy, and biochemical studies of the native and deglycosylated enzyme, and are in good agreement with the models. Enzyme deglycosylated under native conditions displays identical kinetic parameters but is significantly less stable in acidic conditions, consistent with model predictions. The molecular model of the deglycosylated enzyme was solvated and a molecular dynamics simulation was run over 20 ns. The molecular model is able to bind the natural substrate, chitobiose, with a stable value of binding energy during the molecular dynamics simulation. Conclusion: Whereas the intracellular bacterial β-N-acetylhexosaminidases are monomeric, the extracellular secreted enzymes of fungi and humans occur as dimers. Dimerization of the fungal β-N-acetylhexosaminidase appears to be a reversible process that is strictly pH dependent. Oligosaccharide moieties may also participate in the dimerization process that might represent a unique feature of the exclusively extracellular enzymes. Deglycosylation had only limited effect on enzyme activity, but it significantly affected

  9. Computational challenges in modeling and simulating living matter

    Science.gov (United States)

    Sena, Alexandre C.; Silva, Dilson; Marzulo, Leandro A. J.; de Castro, Maria Clicia Stelling

    2016-12-01

    Computational modeling has been successfully used to help scientists understand physical and biological phenomena. Recent technological advances allow the simulation of larger systems with greater accuracy. However, devising those systems requires new approaches and novel architectures, such as the use of parallel programming, so that the application can run in the new high-performance environments, which are often computer clusters composed of different computation devices such as traditional CPUs, GPGPUs, Xeon Phis and even FPGAs. It is expected that scientists will take advantage of the increasing computational power to model and simulate more complex structures and even merge different models into larger and more extensive ones. This paper aims at discussing the challenges of using those devices to simulate such complex systems.

  10. Building a Structural Model: Parameterization and Structurality

    Directory of Open Access Journals (Sweden)

    Michel Mouchart

    2016-04-01

    Full Text Available A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to also be causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used for drawing some practical implications of the proposed analysis.

  11. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  12. Mechanistic models in computational social science

    Science.gov (United States)

    Holme, Petter; Liljeros, Fredrik

    2015-09-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  13. Mechanistic Models in Computational Social Science

    CERN Document Server

    Holme, Petter

    2015-01-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes -- to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emerging phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  14. Computational modeling of failure in composite laminates

    NARCIS (Netherlands)

    Van der Meer, F.P.

    2010-01-01

    There is no state of the art computational model that is good enough for predictive simulation of the complete failure process in laminates. Already on the single ply level controversy exists. Much work has been done in recent years in the development of continuum models, but these fail to predict t

  15. Computational Intelligence. Mortality Models for the Actuary

    NARCIS (Netherlands)

    Willemse, W.J.

    2001-01-01

    This thesis applies computational intelligence to the field of actuarial (insurance) science. In particular, this thesis deals with life insurance, where mortality modelling is important. Actuaries use ancient models (mortality laws) from the nineteenth century, for example Gompertz' and Makeham's laws

  16. Generating computational models for serious gaming

    NARCIS (Netherlands)

    Westera, Wim

    2014-01-01

    Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of

  17. Assessment of weld thickness loss in offshore pipelines using computed radiography and computational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste - UEZO, Avenida Manuel Caldeira de Alvarenga, 1203, 23070-200, Rio de Janeiro, RJ (Brazil)], E-mail: scorrea@con.ufrj.br; Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Departamento de Geologia/IGEO, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Oliveira, D.F. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X. [PEN/COPPE-DNC/Poli-CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear, COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Marinho, C.; Camerini, C.S. [CENPES/PDEP/TMEC/PETROBRAS, Ilha do Fundao, Cidade Universitaria, 21949-900, Rio de Janeiro, RJ (Brazil)

    2009-10-15

    In order to guarantee the structural integrity of oil plants it is crucial to monitor the amount of weld thickness loss in offshore pipelines. However, in spite of its relevance, this parameter is very difficult to determine, due to both the large diameter of most pipes and the complexity of the multi-variable system involved. In this study, computational modeling based on the Monte Carlo MCNPX code is combined with computed radiography to estimate the weld thickness loss in large-diameter offshore pipelines. Results show that computational modeling is a powerful tool to estimate intensity variations in radiographic images generated by weld thickness variations, and it can be combined with computed radiography to assess weld thickness loss in offshore and subsea pipelines.

  18. Do's and Don'ts of Computer Models for Planning

    Science.gov (United States)

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  20. Parallel Computing of Ocean General Circulation Model

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper discusses the parallel computing of the third generation Ocean General Circulation Model (OGCM) from the State Key Laboratory of Numerical Modeling for Atmospheric Science and Geophysical Fluid Dynamics (LASG), Institute of Atmosphere Physics (IAP). Meanwhile, several optimization strategies for parallel computing of the OGCM (POGCM) on Scalable Shared Memory Multiprocessors (S2MP) are presented. Using the Message Passing Interface (MPI), we obtain superlinear speedup on an SGI Origin 2000 for the parallel OGCM (POGCM) after optimization.

  1. On the completeness of quantum computation models

    CERN Document Server

    Arrighi, Pablo

    2010-01-01

    The notion of computability is stable (i.e. independent of the choice of an indexing) over infinite-dimensional vector spaces provided they have a finite "tensorial dimension". Such vector spaces with a finite tensorial dimension make it possible to define an absolute notion of completeness for quantum computation models and give a precise meaning to the Church-Turing thesis in the framework of quantum theory. (Extra keywords: quantum programming languages, denotational semantics, universality.)

  2. Security Management Model in Cloud Computing Environment

    OpenAIRE

    2016-01-01

    In the cloud computing environment, the ever-growing number of cloud virtual machines (VMs) poses a huge challenge for VM security and management. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling, and studies the corresponding virtual machine security architecture, based on AHP (Analytic Hierarchy Process) virtual machine de...

  3. Finite difference computing with exponential decay models

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge about computer programming. Contrary to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.

  4. A computational model of analogical reasoning

    Institute of Scientific and Technical Information of China (English)

    李波; 赵沁平

    1997-01-01

    A computational model of analogical reasoning is presented, which divides analogical reasoning process into four subprocesses, i.e. reminding, elaboration, matching and transfer. For each subprocess, its role and the principles it follows are given. The model is discussed in detail, including salient feature-based reminding, relevance-directed elaboration, an improved matching model and a transfer model. And the advantages of this model are summarized based on the results of BHARS, which is an analogical reasoning system implemented by this model.

  5. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  6. Computational Electronic Structure of Antiferromagnetic Centers in Metalloproteins.

    Science.gov (United States)

    Rodriguez, Jorge H.

    2003-03-01

    Nature uses the properties of transition metal ions to carry out a variety of functions associated with vital life processes such as respiration and the transport of oxygen. Oxo-bridged diiron centers are intriguing structural motifs which are present in dioxygen transporting proteins and display antiferromagnetic ordering. We have performed a comprehensive study of the electronic structure and magnetic properties of structurally characterized models for diiron-oxo proteins. Results from Kohn-Sham density functional theory show that the models are antiferromagnetically coupled in agreement with experiment. The physical origin of the spin coupling has been elucidated as the main superexchange pathways responsible for magnetic ordering have been identified. In addition, the exchange constants that parameterize the Heisenberg Hamiltonian, H = J S_1 · S_2, have been predicted in excellent agreement with experiment. Our results are important for establishing correlations between electronic structure and biomolecular function and show that computational electronic structure can be used as a powerful tool for the investigation of biomolecular magnetism.

  7. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    Science.gov (United States)

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  8. Computational Models of Relational Processes in Cognitive Development

    Science.gov (United States)

    Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven

    2012-01-01

    Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…

  10. Challenges in structural approaches to cell modeling.

    Science.gov (United States)

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field.

  11. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational Disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  12. On the computational modeling of FSW processes

    OpenAIRE

    Agelet de Saracibar Bosch, Carlos; Chiumenti, Michèle; Santiago, Diego de; Cervera Ruiz, Miguel; Dialami, Narges; Lombera, Guillermo

    2010-01-01

    This work deals with the computational modeling and numerical simulation of Friction Stir Welding (FSW) processes. Here a quasi-static, transient, mixed stabilized Eulerian formulation is used. Norton-Hoff and Sheppard-Wright rigid thermoplastic material models have been considered. A product formula algorithm, leading to a staggered solution scheme, has been used. The model has been implemented into the in-house developed FE code COMET. Results obtained in the simulation of FSW process are c...

  13. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  14. An improved computational constitutive model for glass

    Science.gov (United States)

    Holmquist, Timothy J.; Johnson, Gordon R.; Gerlach, Charles A.

    2017-01-01

    In 2011, Holmquist and Johnson presented a model for glass subjected to large strains, high strain rates and high pressures. It was later shown that this model produced solutions that were severely mesh dependent, converging to a solution that was much too strong. This article presents an improved model for glass that uses a new approach to represent the interior and surface strength that is significantly less mesh dependent. This new formulation allows for the laboratory data to be accurately represented (including the high tensile strength observed in plate-impact spall experiments) and produces converged solutions that are in good agreement with ballistic data. The model also includes two new features: one that decouples the damage model from the strength model, providing more flexibility in defining the onset of permanent deformation; the other provides for a variable shear modulus that is dependent on the pressure. This article presents a review of the original model, a description of the improved model and a comparison of computed and experimental results for several sets of ballistic data. Of special interest are computed and experimental results for two impacts onto a single target, and the ability to compute the damage velocity in agreement with experiment data. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  15. Structural biology computing: Lessons for the biomedical research sciences.

    Science.gov (United States)

    Morin, Andrew; Sliz, Piotr

    2013-11-01

    The field of structural biology, whose aim is to elucidate the molecular and atomic structures of biological macromolecules, has long been at the forefront of biomedical sciences in adopting and developing computational research methods. Operating at the intersection between biophysics, biochemistry, and molecular biology, structural biology's growth into a foundational framework on which many concepts and findings of molecular biology are interpreted has depended largely on parallel advancements in computational tools and techniques. Without these computing advances, modern structural biology would likely have remained an exclusive pursuit practiced by few, and not become the widely practiced, foundational field it is today. As other areas of biomedical research increasingly embrace research computing techniques, the successes, failures and lessons of structural biology computing can serve as a useful guide to progress in other biomedically related research fields.

  16. Structural dynamic modifications via models

    Indian Academy of Sciences (India)

    T K Kundra

    2000-06-01

    Structural dynamic modification techniques attempt to reduce dynamic design time and can be implemented beginning with spatial models of structures, dynamic test data or updated models. The models assumed in this discussion are mathematical models, namely mass, stiffness, and damping matrices of the equations of motion of a structure. These models are identified/extracted from dynamic test data viz. frequency response functions (FRFs). Alternatively these models could have been obtained by adjusting or updating the finite element model of the structure in the light of the test data. The methods of structural modification for getting desired dynamic characteristics by using modifiers namely mass, beams and tuned absorbers are discussed.

  17. Computer Aided Modelling – Opportunities and Challenges

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter considers the opportunities that are present in developing, extending and applying aspects of computer-aided modelling principles and practice. What are the best tasks to be done by modellers and what needs the application of CAPE tools? How do we efficiently develop model-based solutions to significant problems? The important issues of workflow and data flow are discussed together with fit-for-purpose model development. As well, the lack of tools around multiscale modelling provides opportunities for the development of efficient tools to address such challenges. The ability... and opportunities are discussed for such systems.

  18. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  19. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state-of-the-art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under the mechanical stimulus up to optimizing the performance of sports equipment, through the patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.

  20. Analysis of a Cloud-Computing-Based Incident Management Model

    Directory of Open Access Journals (Sweden)

    Anggi Sukamto

    2015-05-01

    Full Text Available The information technology support adopted by an organization requires management so that its use meets the goals for which the technology was deployed. One framework for information technology service management that organizations can adopt is the Information Technology Infrastructure Library (ITIL). Service support is part of the ITIL process. In general, service support activities are carried out using technology accessible over the internet, a condition that leads to the concept of cloud computing. Cloud computing enables an institution or company to manage computing resources through the internet. The focus of this research is to analyse the processes and the actors involved in service support, in particular in the incident management process, and to identify which actors could potentially be handed over to cloud computing services. Based on the analysis performed, the proposed cloud-based incident management model can be applied in any organization that already uses computer technology to support its operational activities. Keywords: Cloud computing, ITIL, Incident Management, Service Support, Service Desk.

  1. Mathematical modelling in the computer-aided process planning

    Science.gov (United States)

    Mitin, S.; Bochkarev, P.

    2016-04-01

    This paper presents new approaches to the organization of manufacturing preparation and the mathematical models related to the development of the computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some peculiarities compared to existing computer-aided process planning (CAPP) systems: fully formalized development of the machining operations; the capacity to create and formalize the interrelationships among design, process planning and process implementation; and procedures for consideration of the real manufacturing conditions. The paper describes the structure of the CAMPP system and shows the mathematical models and methods used to formalize the design procedures.

  2. Smooth structures on Eschenburg spaces: numerical computations

    CERN Document Server

    Butler, Leo T

    2009-01-01

    This paper numerically computes the topological and smooth invariants of Eschenburg spaces with small fourth cohomology group, following Kruggel's determination of the Kreck-Stolz invariants of Eschenburg spaces that satisfy condition C. The GNU GMP arbitrary-precision library is utilised.

  3. Integrated materials–structural models

    DEFF Research Database (Denmark)

    Stang, Henrik; Geiker, Mette Rica

    2008-01-01

    Reliable service life models for load carrying structures are significant elements in the evaluation of the performance and sustainability of existing and new structures. Furthermore, reliable service life models are prerequisites for the evaluation of the sustainability of maintenance strategies, repair works and strengthening methods for structures. A very significant part of the infrastructure consists of reinforced concrete structures. Even though reinforced concrete structures typically are very competitive, certain concrete structures suffer from various types of degradation. A framework

  4. Reliable structural, thermodynamic, and spectroscopic properties of organic molecules adsorbed on silicon surfaces from computational modeling: the case of glycine@Si(100).

    Science.gov (United States)

    Carnimeo, Ivan; Biczysko, Malgorzata; Bloino, Julien; Barone, Vincenzo

    2011-10-06

    Chemisorption of glycine on Si(100) has been studied by an integrated computational strategy based on perturbative anharmonic computations employing geometries and harmonic force fields evaluated by hybrid density functionals coupled to purposely tailored basis sets. It is shown that such a strategy allows the prediction of spectroscopic properties of isolated and chemisorbed molecules with comparable accuracy, paving the route toward a detailed analysis of surface-induced changes of glycine vibrational spectra.

  5. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  6. Utilizing computer models for optimizing classroom acoustics

    Science.gov (United States)

    Hinckley, Jennifer M.; Rosenberg, Carl J.

    2002-05-01

    The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.

  7. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Repasi, Endre; Bijl, Piet; Labarre, Luc; Wittenstein, Wolfgang; Bürsing, Helge

    2017-05-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is exposed. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given as well as a short outlook on validation tests and the future potential of simulation for sensor assessment.

  8. Effects of gastrointestinal tissue structure on computed dipole vectors

    Science.gov (United States)

    Austin, Travis M; Li, Liren; Pullan, Andrew J; Cheng, Leo K

    2007-01-01

    Background Digestive diseases are difficult to assess without using invasive measurements. Non-invasive measurements of body surface electrical and magnetic activity resulting from underlying gastro-intestinal activity are not widely used, in large part due to their difficulty in interpretation. Mathematical modelling of the underlying processes may help provide additional information. When modelling myoelectrical activity, it is common for the electrical field to be represented by equivalent dipole sources. The gastrointestinal system is comprised of alternating layers of smooth muscle (SM) cells and Interstitial Cells of Cajal (ICC). In addition the small intestine has regions of high curvature as the intestine bends back upon itself. To eventually use modelling diagnostically, we must improve our understanding of the effect that intestinal structure has on dipole vector behaviour. Methods Normal intestine electrical behaviour was simulated on simple geometries using a monodomain formulation. The myoelectrical fields were then represented by their dipole vectors and an examination of the effect of structure was undertaken. The 3D intestine model was compared to a more computationally efficient 1D representation to determine the differences in the resultant dipole vectors. In addition, the conductivity values and the thickness of the different muscle layers were varied in the 3D model and the effects on the dipole vectors were investigated. Results The dipole vector orientations were largely affected by the curvature and by a transmural gradient in the electrical wavefront caused by the different properties of the SM and ICC layers. This gradient caused the dipoles to be oriented at an angle to the principal direction of electrical propagation. This angle increased when the ratio of the longitudinal and circular muscle was increased or when the conductivity along and across the layers was increased. The 1D model was able to represent the geometry of the small

  9. Effects of gastrointestinal tissue structure on computed dipole vectors

    Directory of Open Access Journals (Sweden)

    Pullan Andrew J

    2007-10-01

    Full Text Available Abstract Background Digestive diseases are difficult to assess without using invasive measurements. Non-invasive measurements of body surface electrical and magnetic activity resulting from underlying gastro-intestinal activity are not widely used, in large part due to their difficulty in interpretation. Mathematical modelling of the underlying processes may help provide additional information. When modelling myoelectrical activity, it is common for the electrical field to be represented by equivalent dipole sources. The gastrointestinal system is comprised of alternating layers of smooth muscle (SM) cells and Interstitial Cells of Cajal (ICC). In addition the small intestine has regions of high curvature as the intestine bends back upon itself. To eventually use modelling diagnostically, we must improve our understanding of the effect that intestinal structure has on dipole vector behaviour. Methods Normal intestine electrical behaviour was simulated on simple geometries using a monodomain formulation. The myoelectrical fields were then represented by their dipole vectors and an examination of the effect of structure was undertaken. The 3D intestine model was compared to a more computationally efficient 1D representation to determine the differences in the resultant dipole vectors. In addition, the conductivity values and the thickness of the different muscle layers were varied in the 3D model and the effects on the dipole vectors were investigated. Results The dipole vector orientations were largely affected by the curvature and by a transmural gradient in the electrical wavefront caused by the different properties of the SM and ICC layers. This gradient caused the dipoles to be oriented at an angle to the principal direction of electrical propagation. This angle increased when the ratio of the longitudinal and circular muscle was increased or when the conductivity along and across the layers was increased. The 1D model was able to represent the

  10. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  11. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

    Full Text Available While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  12. Computer modeling of loudspeaker arrays in rooms

    Science.gov (United States)

    Schwenke, Roger

    2002-05-01

    Loudspeakers present a special challenge to computational modeling of rooms. When modeling a collection of noncorrelated sound sources, such as a group of musicians, coarse resolution power spectrum and directivities are sufficient. In contrast, a typical loudspeaker array consists of many speakers driven with the same signal, and are therefore almost completely correlated. This can lead to a quite complicated, but stable, pattern of spatial nulls and lobes which depends sensitively on frequency. It has been shown that, to model these interactions accurately, one must have loudspeaker data with 1 deg spatial resolution, 1/24 octave frequency resolution including phase. It will be shown that computer models at such a high resolution can in fact inform design decisions of loudspeaker arrays.

  13. Computational models for synthetic marine infrared clutter

    Science.gov (United States)

    Constantikes, Kim T.; Zysnarski, Adam H.

    1996-06-01

    The next generation of ship defense missiles will need to engage stealthy, passive, sea-skimming missiles. Detection and guidance will occur against a background of sea surface and horizon which can present significant clutter problems for infrared seekers, particularly when targets are comparatively dim. We need a variety of sea clutter models: statistical image models for signal processing algorithm design, clutter occurrence models for systems effectiveness assessment, and constructive image models for synthesizing very large field-of-view (FOV) images with high spatial and temporal resolution. We have implemented and tested such a constructive model. First principle models of water waves and light transport provide a computationally intensive clutter model implemented as a raytracer. Our models include sea, sky, and solar radiance; reflectance; attenuating atmospheres; constructive solid geometry targets; target and water wave dynamics; and simple sensor image formation.

  14. Computational modelling for dry-powder inhalers

    NARCIS (Netherlands)

    Kröger, Ralf; Woolhouse, Robert; Becker, Michael; Wachtel, Herbert; de Boer, Anne; Horner, Marc

    2012-01-01

    Computational fluid dynamics (CFD) is a simulation tool used for modelling powder flow through inhalers to allow optimisation both of device design and drug powder. Here, Ralf Kröger, Consulting Senior CFD Engineer, ANSYS Germany GmbH; Marc Horner, Lead Technical Services Engineer, Healthcare, ANSYS

  15. Agent based computational model of trust

    NARCIS (Netherlands)

    A. Gorobets (Alexander); B. Nooteboom (Bart)

    2004-01-01

    This paper employs the methodology of Agent-Based Computational Economics (ACE) to investigate under what conditions trust can be viable in markets. The emergence and breakdown of trust is modeled in a context of multiple buyers and suppliers. Agents adapt their trust in a partner, the w

  16. Integer Programming Models for Computational Biology Problems

    Institute of Scientific and Technical Information of China (English)

    Giuseppe Lancia

    2004-01-01

    The recent years have seen an impressive increase in the use of Integer Programming models for the solution of optimization problems originating in Molecular Biology. In this survey, some of the most successful Integer Programming approaches are described, and a broad overview is given of the application areas in modern Computational Molecular Biology.

  18. A Stochastic Dynamic Model of Computer Viruses

    Directory of Open Access Journals (Sweden)

    Chunming Zhang

    2012-01-01

    Full Text Available A stochastic computer virus spread model is proposed and its dynamic behavior is fully investigated. Specifically, we prove the existence and uniqueness of positive solutions, and the stability of the virus-free equilibrium and viral equilibrium by constructing Lyapunov functions and applying Ito's formula. Some numerical simulations are finally given to illustrate our main results.

  19. STEW A Nonlinear Data Modeling Computer Program

    CERN Document Server

    Chen, H

    2000-01-01

    A nonlinear data modeling computer program, STEW, employing the Levenberg-Marquardt algorithm, has been developed to model the experimental {sup 239}Pu(n,f) and {sup 235}U(n,f) cross sections. This report presents results of the modeling of the {sup 239}Pu(n,f) and {sup 235}U(n,f) cross-section data. The calculation of the fission transmission coefficient is based on the double-humped-fission-barrier model of Bjornholm and Lynn. Incident neutron energies of up to 5 MeV are considered.

  20. STEW: A Nonlinear Data Modeling Computer Program

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.

    2000-03-04

    A nonlinear data modeling computer program, STEW, employing the Levenberg-Marquardt algorithm, has been developed to model the experimental {sup 239}Pu(n,f) and {sup 235}U(n,f) cross sections. This report presents results of the modeling of the {sup 239}Pu(n,f) and {sup 235}U(n,f) cross-section data. The calculation of the fission transmission coefficient is based on the double-humped-fission-barrier model of Bjornholm and Lynn. Incident neutron energies of up to 5 MeV are considered.

  1. A Simple Gauss-Newton Procedure for Covariance Structure Analysis with High-Level Computer Languages.

    Science.gov (United States)

    Cudeck, Robert; And Others

    1993-01-01

    An implementation of the Gauss-Newton algorithm for the analysis of covariance structure that is specifically adapted for high-level computer languages is reviewed. This simple method for estimating structural equation models is useful for a variety of standard models, as is illustrated. (SLD)

  2. Evaluating computational models of cholesterol metabolism.

    Science.gov (United States)

    Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K

    2015-10-01

    Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered. Yet important gaps in our knowledge remain. For example, knowledge on intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism that make use of ordinary differential equations, and assess whether they make assumptions and predictions in line with current knowledge on cholesterol homeostasis. Having studied the results described by the authors, we have also tested their models. This was done primarily by testing the effect of statin treatment in each model. Ten out of eleven models tested have made assumptions in line with current knowledge of cholesterol metabolism. Three out of the ten remaining models made correct predictions, i.e. predicting a decrease in plasma total and LDL cholesterol, or an increased uptake of LDL, upon statin treatment. In conclusion, few models of cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles and welcome the increased understanding of cholesterol metabolism these are likely to bring.

  3. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science, outstanding results are yielded by advanced simulation methods based on state-of-the-art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extraction from big data, better understanding and prediction of social behaviour, and modelling of health and environmental changes.

  4. Mechanistic models in computational social science

    OpenAIRE

    Petter Holme; Fredrik Liljeros

    2015-01-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influenc...

  5. Mechanistic models in computational social science

    OpenAIRE

    Holme, Petter; Liljeros, Fredrik

    2015-01-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes -- to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influ...

  6. Computer Use and the Wage Structure in Austria

    OpenAIRE

    Hofer, Helmut; Riedel, Monika

    2003-01-01

    Abstract: In this paper we examine the relationship between computer premium and job position in Austria. We estimate cross-section wage equations and control for selectivity of computer use via a treatment effects model. We find that the size of the wage effect attributed to computer use varies significantly between job hierarchies. Persons in higher positions receive relatively lower rewards for computer use than workers at lower hierarchy levels. Overall we find that computerisation increa...

  7. Linear static structural and vibration analysis on high-performance computers

    Science.gov (United States)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e. models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  8. 3D-DART: a DNA structure modelling server

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2009-01-01

    There is a growing interest in structural studies of DNA by both experimental and computational approaches. Often, 3D-structural models of DNA are required, for instance, to serve as templates for homology modeling, as starting structures for macro-molecular docking or as scaffold for NMR structure

  9. Structural Composites Corrosive Management by Computational Simulation

    Science.gov (United States)

    Chamis, Christos C.; Minnetyan, Levon

    2006-01-01

    A simulation of corrosive management of polymer composite durability is presented. The corrosive environment is assumed to manage the polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature, and moisture, which vary parabolically for voids and linearly for temperature and moisture through the laminate thickness. The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure, and laminate theories. This makes it possible to start the simulation from the constitutive material properties and proceed up to the laminate scale, at which the laminate is exposed to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply managed degradation degrades the laminate down to the last one or the last several plies. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.

  10. Approximation method to compute domain related integrals in structural studies

    Science.gov (United States)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

    Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations, e.g. in strength of materials the bending moment may be computed in some discrete points using the graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the surface of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to present our studies on the calculus of integrals over transverse section domains, the computer-aided solutions, and a generalizing method. The aim of our research is to create general computer-based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates with ‘simple’ shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple’ shape (-1 for the shapes to be subtracted). By ‘simple’ or ‘basic’ shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, for which the corresponding calculus is carried out using an algorithm. The ‘basic’ shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of ‘basic’ shapes included rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another ‘basic’ shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the

  11. COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION

    Directory of Open Access Journals (Sweden)

    Zdenko Lončarić

    2009-12-01

    Full Text Available Evaluation of manure, compost and growing media quality should include enough properties to enable optimal use from both productivity and environmental points of view. The aim of this paper is to describe the basic structure of an organic fertilizer (and growing media) evaluation model, to present an example of the model comparing different manures, and to illustrate the use of a plant growth experiment for calculating the impact of pH and EC of growing media on lettuce growth. The basic structure of the model includes selection of quality indicators, interpretation of indicator values, and integration of interpreted values into new indexes. The first step includes data input and selection of available data as basic or additional indicators, depending on possible use as fertilizer or growing media. The second part of the model uses the inputs to calculate derived quality indicators. The third step integrates the values into three new indexes: fertilizer, growing media, and environmental index. All three indexes are calculated on the basis of three different groups of indicators: basic value indicators, additional value indicators and limiting factors. Index values range from 0 to 10, where 0-3 means low, 3-7 medium and 7-10 high quality. Comparing fresh and composted manures, higher fertilizer and environmental indexes were determined for composted manures; the highest fertilizer index was determined for composted pig manure (9.6) whereas the lowest was for fresh cattle manure (3.2). Composted manures had a high environmental index (6.0-10) for conventional agriculture, but some had no value (environmental index = 0) for organic agriculture because of too-high zinc, copper or cadmium concentrations. Growing media indexes were determined according to their impact on lettuce growth. Growing media with different pH and EC had very significant impacts on height, dry matter mass and leaf area of lettuce seedlings. The highest lettuce
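
    The three-step structure (interpret indicators, integrate into an index, classify on 0-10) can be sketched as follows; the weights, the treatment of limiting factors as caps, and the example scores are illustrative assumptions, not the published model's calibration:

    ```python
    # Integrate interpreted 0-10 indicator scores into an index, then map
    # the index to the low/medium/high quality classes used in the paper.
    def integrate(basic, additional, limiting, w_basic=0.6, w_add=0.4):
        """Weighted mean of indicator scores, capped by limiting factors."""
        idx = w_basic * sum(basic) / len(basic) + w_add * sum(additional) / len(additional)
        return min([idx] + limiting)   # a limiting factor can only lower the index

    def quality_class(index):
        if index < 3:
            return "low"
        return "medium" if index < 7 else "high"

    fertilizer_index = integrate(basic=[8, 9], additional=[7], limiting=[10])
    print(fertilizer_index, quality_class(fertilizer_index))   # 7.9 high
    ```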

  12. A first course in structural equation modeling

    CERN Document Server

    Raykov, Tenko

    2012-01-01

    In this book, authors Tenko Raykov and George A. Marcoulides introduce students to the basics of structural equation modeling (SEM) through a conceptual, nonmathematical approach. For ease of understanding, the few mathematical formulas presented are used in a conceptual or illustrative manner rather than a computational one. Featuring examples from EQS, LISREL, and Mplus, A First Course in Structural Equation Modeling is an excellent beginner's guide to learning how to set up input files to fit the most commonly used types of structural equation models with these programs. The basic ideas and methods for conducting SEM are independent of any particular software. Highlights of the Second Edition include: review of latent change (growth) analysis models at an introductory level; coverage of the popular Mplus program; updated examples of LISREL and EQS; and a CD that contains all of the text's LISREL, EQS, and Mplus examples. A First Course in Structural Equation Modeling is intended as an introductory book for students...

  13. Synthesis of computational structures for analog signal processing

    CERN Document Server

    Popa, Cosmin Radu

    2011-01-01

    Presents the most important classes of computational structures for analog signal processing, including differential or multiplier structures, squaring or square-rooting circuits, exponential or Euclidean distance structures and active resistor circuits. Introduces the original concept of the multifunctional circuit, an active structure that is able to implement, starting from the same circuit core, a multitude of continuous mathematical functions. Covers the mathematical analysis, design and implementation of a multitude of function generator structures.

  14. Linking Experimental Characterization and Computational Modeling in Microstructural Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, Melik Cumhur [Univ. of Pittsburgh, PA (United States)

    2002-06-01

    It is known that by controlling microstructural development, desirable material properties can be achieved. The main objective of our research is to understand and control interface-dominated material properties and, finally, to verify experimental results with computer simulations. To accomplish this objective, we studied grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an aluminum film (120 μm thick) with a columnar grain structure from Electron Backscattered Diffraction (EBSD) measurements. The experimentally obtained starting microstructure and grain boundary properties are inputs for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of the mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity between grain growth experiments and anisotropic three-dimensional simulations.

  17. Processor core model for quantum computing.

    Science.gov (United States)

    Yung, Man-Hong; Benjamin, Simon C; Bose, Sougato

    2006-06-09

    We describe an architecture based on a processing "core," where multiple qubits interact perpetually, and a separate "store," where qubits exist in isolation. Computation consists of single qubit operations, swaps between the store and the core, and free evolution of the core. This enables computation using physical systems where the entangling interactions are "always on." Alternatively, for switchable systems, our model constitutes a prescription for optimizing many-qubit gates. We discuss implementations of the quantum Fourier transform, Hamiltonian simulation, and quantum error correction.
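
    As a toy illustration of the core/store idea (an assumption made for illustration, not the paper's formalism), the sketch below represents a small register as a statevector and expresses a computation as single-qubit gates plus SWAPs that shuttle qubits between store and core:

    ```python
    # Statevector toy model: qubit 0 is the isolated "store", qubits 1-2 the
    # interacting "core"; computation = 1-qubit gates + store/core SWAPs.
    import numpy as np

    def apply_1q(state, gate, q, n):
        """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
        psi = state.reshape([2] * n)
        psi = np.tensordot(gate, psi, axes=([1], [q]))
        return np.moveaxis(psi, 0, q).reshape(-1)

    def swap(state, q1, q2, n):
        """SWAP two qubits, e.g. moving a qubit between store and core."""
        return np.swapaxes(state.reshape([2] * n), q1, q2).reshape(-1)

    n = 3
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                                 # |000>
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = apply_1q(state, h, 0, n)               # prepare a qubit in the store
    state = swap(state, 0, 1, n)                   # load it into the core
    ```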

  18. Computer Model Of Fragmentation Of Atomic Nuclei

    Science.gov (United States)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; KHAN FERDOUS; Badavi, Francis F.

    1995-01-01

    High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.

  19. Mechanistic models in computational social science

    Directory of Open Access Journals (Sweden)

    Petter eHolme

    2015-09-01

    Full Text Available Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have a history of over 60 years. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for the social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  20. Computational modelling of evolution: ecosystems and language

    CERN Document Server

    Lipowski, Adam

    2008-01-01

    Recently, computational modelling has become a very important research tool that enables us to study problems that for decades evaded scientific analysis. Evolutionary systems are certainly examples of such problems: they are composed of many units that might reproduce, diffuse, mutate, die, or, in some cases, communicate. These processes might be of some adaptive value; they influence each other and occur on various time scales. That is why such systems are so difficult to study. In this paper we briefly review some computational approaches, as well as our contributions, to the evolution of ecosystems and language. We start from the Lotka-Volterra equations and the modelling of simple two-species prey-predator systems. Such systems are the canonical example for studying oscillatory behaviour in competitive populations. Then we describe various approaches to studying the long-term evolution of multi-species ecosystems. We emphasize the need to use models that take into account both ecological and evolutionary processe...
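
    As a pointer to the starting point mentioned above, the following sketch (with illustrative parameter values) integrates the two-species Lotka-Volterra prey-predator system and exhibits the characteristic oscillations:

    ```python
    # Two-species Lotka-Volterra system: dx/dt = ax - bxy, dy/dt = -cy + dxy.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lotka_volterra(t, z, a=1.0, b=0.1, c=1.5, d=0.075):
        x, y = z                                 # prey, predator
        return [a * x - b * x * y, -c * y + d * x * y]

    sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 5.0], dense_output=True)
    t = np.linspace(0.0, 50.0, 500)
    prey, predator = sol.sol(t)                  # phase-shifted oscillations
    ```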

  1. Queuing theory models for computer networks

    Science.gov (United States)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models, which can model the average response of a network of computers to a given traffic load, has been implemented using a spreadsheet. Because the models do not require fine detail about network traffic rates, traffic patterns, or the hardware used to implement the networks, they can be used to assess the impact of variations in traffic patterns and intensities, channel capacities, and message protocols. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
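
    The flavor of such spreadsheet models can be conveyed by the simplest queueing result; the sketch below (a minimal illustration with made-up numbers, not the paper's actual model) computes the mean response time of an M/M/1 channel:

    ```python
    # M/M/1 mean time in system: W = 1 / (mu - lambda), valid for rho < 1.
    def mm1_response_time(arrival_rate, service_rate):
        rho = arrival_rate / service_rate            # channel utilization
        if rho >= 1.0:
            raise ValueError("unstable queue: utilization >= 1")
        return 1.0 / (service_rate - arrival_rate)

    # A channel serving 1000 frames/s offered 600 frames/s of traffic:
    w = mm1_response_time(600.0, 1000.0)             # 0.0025 s mean response
    ```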

  2. Computer Aided Design Modeling for Heterogeneous Objects

    CERN Document Server

    Gupta, Vikas; Tandon, Puneet

    2010-01-01

    Heterogeneous object design has been an active research area in recent years. Conventional CAD modeling approaches provide only the geometry and topology of the object, but do not contain any information about its materials, and so cannot be used for the fabrication of heterogeneous objects (HO) through rapid prototyping. Current research focuses on computer-aided design issues in heterogeneous object design. A new CAD modeling approach is proposed to integrate material information into geometric regions and thus model the material distributions in the heterogeneous object. Gradient references are used to represent heterogeneous objects of complex geometry which have simultaneous geometric intricacies and accurate material distributions. The gradient references allow flexible manipulation and control of heterogeneous objects, which guarantees local control over the gradient regions of the developed objects. A systematic approach on data flow, processing, computer visualizat...

  3. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    Full Text Available The medical domain has become one of the most important areas of research owing to the richness of medical information about the symptoms of diseases and how to distinguish between them to reach a correct diagnosis. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts make treatment decisions. This paper introduces four hybrid Rough–Granular Computing knowledge discovery models based on Rough Set Theory, Artificial Neural Networks, Genetic Algorithms and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different knowledge discovery techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose is to enhance the frame of KDD processes for supervised learning using the Granular Computing methodology.

  4. Computational Modeling of Vortex Generators for Turbomachinery

    Science.gov (United States)

    Chima, R. V.

    2002-01-01

    In this work computational models were developed and used to investigate applications of vortex generators (VGs) to turbomachinery. The work was aimed at increasing the efficiency of compressor components designed for the NASA Ultra Efficient Engine Technology (UEET) program. Initial calculations were used to investigate the physical behavior of VGs. A parametric study of the effects of VG height was done using 3-D calculations of isolated VGs. A body force model was developed to simulate the effects of VGs without requiring complicated grids. The model was calibrated using 2-D calculations of the VG vanes and was validated using the 3-D results. Then three applications of VGs to a compressor rotor and stator were investigated: 1) The results of the 3-D calculations were used to simulate small casing VGs that generate rotor preswirl or counterswirl. Computed performance maps were used to evaluate the effects of VGs. 2) The body force model was used to simulate large part-span splitters on the casing ahead of the stator. Computed loss buckets showed the effects of the VGs. 3) The body force model was also used to investigate the use of tiny VGs on the stator suction surface for controlling secondary flows. Near-surface particle traces and exit loss profiles were used to evaluate the effects of the VGs.

  5. Secure data structures based on multi-party computation

    DEFF Research Database (Denmark)

    Toft, Tomas

    2011-01-01

    This work considers data structures based on multi-party computation (MPC) primitives: structuring secret (e.g. secret shared and potentially unknown) data such that it can both be queried and updated efficiently. Implementing an oblivious RAM (ORAM) using MPC allows any existing data structure...

  6. Sticker DNA computer model -- Part II: Application

    Institute of Scientific and Technical Information of China (English)

    XU Jin; LI Sanping; DONG Yafei; WEI Xiaopeng

    2004-01-01

    The sticker model is one of the basic models of DNA computing. This model is coded with single-double stranded DNA molecules. It has the advantages that the operations require no strand extension and use no enzymes; what's more, the materials are reusable. It has therefore attracted the attention and interest of scientists in many fields. In this paper, we extend and improve the sticker model, which should benefit the construction of DNA computers. This paper is the second part of our series paper and mainly focuses on the application of the sticker model. It consists of the following three sections: the matrix representation of the sticker model is first presented; then a brief review of past research on graph and combinatorial optimization problems, such as the minimal set covering problem, the vertex covering problem, the Hamiltonian path or cycle problem, the maximal clique problem, the maximal independent set problem and the Steiner spanning tree problem, is described; finally, a DNA algorithm for the graph isomorphism problem based on the sticker model is given.

  7. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear models for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-squares parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for the association of error with experimentally observed data.
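
    A minimal sketch of the kind of fit such a program performs, using SciPy and a first-order decay model with synthetic observations (the model form and data are illustrative assumptions):

    ```python
    # Nonlinear least-squares parameter estimation with error analysis:
    # the covariance of the fitted parameters gives 1-sigma uncertainties.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, c0, k):
        """First-order decay, a common environmental-fate model form."""
        return c0 * np.exp(-k * t)

    t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    c_obs = np.array([10.2, 7.9, 6.1, 3.8, 1.4])        # synthetic data
    popt, pcov = curve_fit(model, t_obs, c_obs, p0=[10.0, 0.3])
    perr = np.sqrt(np.diag(pcov))                       # parameter errors
    ```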

  8. Structural Computer Code Evaluation. Volume I

    Science.gov (United States)

    1976-11-01

    Rivlin model for large strains. Other examples are given in Reference 5. Hypoelasticity: a hypoelastic material is one in which the components of... remains is the application of these codes to specific rocket nozzle problems and the evaluation of their capabilities to model modern nozzle material... behavior. Further work may also require the development of appropriate material property data or new material models to adequately characterize these

  9. Effect of Material Ion Exchanges on the Mechanical Stiffness Properties and Shear Deformation of Hydrated Cement Material Chemistry Structure C-S-H Jennite - A Computational Modeling Study

    Science.gov (United States)

    2014-01-01

    performance. Journal of Advanced Concrete Technology, 2003. 1(2): p. 91-126. 11. Martín-Sedeño, M.C., et al., Aluminum-rich belite sulfoaluminate cements... 33. Bussi, G. and M. Parrinello, Stochastic thermostats: comparison of local and global schemes. Computer Physics Communications, 2008. 179

  10. Networked Computing in Wireless Sensor Networks for Structural Health Monitoring

    CERN Document Server

    Jindal, Apoorva

    2010-01-01

    This paper studies the problem of distributed computation over a network of wireless sensors. While this problem applies to many emerging applications, to keep our discussion concrete we will focus on sensor networks used for structural health monitoring. Within this context, the heaviest computation is to determine the singular value decomposition (SVD) to extract mode shapes (eigenvectors) of a structure. Compared to collecting raw vibration data and performing SVD at a central location, computing SVD within the network can result in significantly lower energy consumption and delay. Using recent results on decomposing SVD, a well-known centralized operation, into components, we seek to determine a near-optimal communication structure that enables the distribution of this computation and the reassembly of the final results, with the objective of minimizing energy consumption subject to a computational delay constraint. We show that this reduces to a generalized clustering problem; a cluster forms a unit on w...
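
    The core computation named above can be stated compactly; the sketch below extracts dominant mode shapes from synchronized vibration records with a centralized NumPy SVD (the paper's contribution is distributing exactly this step across the network, which the sketch does not attempt):

    ```python
    # Mode-shape extraction: the leading left singular vectors of the
    # (sensors x samples) vibration matrix approximate the spatial modes.
    import numpy as np

    def mode_shapes(vibration, n_modes):
        """vibration: (n_sensors, n_samples) synchronized records."""
        x = vibration - vibration.mean(axis=1, keepdims=True)
        u, s, _ = np.linalg.svd(x, full_matrices=False)
        return u[:, :n_modes], s[:n_modes]

    rng = np.random.default_rng(0)
    data = rng.standard_normal((8, 1024))     # 8 sensors, synthetic samples
    shapes, strengths = mode_shapes(data, 3)
    ```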

  11. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time-consuming process, where much of the effort is spent generating a high-quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high-fidelity configuration models. The simplest rotor model is the steady-state actuator disk approximation. By transforming the unsteady rotor problem into a steady-state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions is explored, along with the efficiencies and limitations of each method.

  12. Computing the complexity for Schelling segregation models

    Science.gov (United States)

    Gerhold, Stefan; Glebsky, Lev; Schneider, Carsten; Weiss, Howard; Zimmermann, Burkhard

    2008-12-01

    The Schelling segregation models are "agent based" population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy.
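
    A toy one-dimensional Schelling dynamics in the spirit of the models described above (the update rule is a simplified illustration, not the authors' exact definition): unhappy agents swap with an agent of the other type until a limit state is reached.

    ```python
    import random

    def happy(city, i, threshold=0.5):
        """An agent is happy if at least half its 1-D neighbors match it."""
        nbrs = [city[j] for j in (i - 1, i + 1) if 0 <= j < len(city)]
        return sum(n == city[i] for n in nbrs) / len(nbrs) >= threshold

    def step(city):
        """Move one unhappy agent by swapping with an agent of the other type."""
        movers = [i for i in range(len(city)) if not happy(city, i)]
        if not movers:
            return False                     # a limit state has been reached
        i = random.choice(movers)
        j = random.choice([k for k in range(len(city)) if city[k] != city[i]])
        city[i], city[j] = city[j], city[i]
        return True

    random.seed(0)
    city = [random.choice("AB") for _ in range(40)]
    for _ in range(10_000):                  # iterate toward a limit state
        if not step(city):
            break
    print("".join(city))                     # segregated runs of A's and B's
    ```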

  13. Computational Study of a Primitive Life Model

    Science.gov (United States)

    Andrecut, Mircea

    We present a computational study of a primitive life model. The calculation involves a discrete treatment of a partial differential equation, and some details of that problem are explained. We show that the investigated model is equivalent to a diffusively coupled logistic lattice. Bifurcation diagrams were calculated for different values of the control parameters. The diagrams show that the time dependence of the population of the investigated model exhibits transitions between ordered and chaotic behavior. We have also investigated pattern formation in this system.
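
    The lattice the study reduces to is easy to iterate directly; the following sketch (growth rate and coupling strength are illustrative) implements a diffusively coupled logistic lattice whose behavior can be swept across ordered and chaotic regimes:

    ```python
    # Diffusively coupled logistic lattice with periodic boundaries:
    # x_i <- (1 - eps) f(x_i) + (eps/2) [f(x_{i-1}) + f(x_{i+1})].
    import numpy as np

    def step(x, r=3.9, eps=0.1):
        f = r * x * (1.0 - x)                 # local logistic dynamics
        return (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

    x = np.random.default_rng(1).random(100)
    for _ in range(1000):
        x = step(x)     # sweep r to trace the order/chaos transitions
    ```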

  14. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  15. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    Science.gov (United States)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high-order organisms and, analogously, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Third, the perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  17. Computational Modeling of Pollution Transmission in Rivers

    Science.gov (United States)

    Parsaie, Abbas; Haghiabi, Amir Hamzeh

    2017-06-01

    Modeling of river pollution contributes to better management of water quality, which in turn improves human health. The advection-dispersion equation (ADE) is the governing equation for pollutant transport in rivers. Modeling pollution transport involves numerically solving the ADE and estimating the longitudinal dispersion coefficient (LDC). In this paper, a novel approach is proposed for numerical modeling of pollution transport in rivers. It combines the finite volume method as the numerical solver with an artificial neural network (ANN) as a soft computing technique. In this approach, the ANN prediction of the LDC is used as an input parameter for the numerical solution of the ADE. To validate the model's performance on real engineering problems, pollutant transport in the Severn River was simulated. Comparison of the final model results with measured data from the Severn River showed that the model performs well. Predicting the LDC with the ANN model significantly improved the accuracy of the computer simulation of pollution transport in the river.
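
    For the numerical half of the approach, a minimal explicit scheme for the 1-D ADE, dC/dt + u dC/dx = D d²C/dx², can be sketched as follows (periodic boundaries and fixed coefficients for brevity; in the paper's scheme the dispersion coefficient would come from the ANN):

    ```python
    # Explicit 1-D advection-dispersion update: upwind advection (u > 0)
    # plus central-difference dispersion, on a periodic grid via np.roll.
    import numpy as np

    def ade_step(c, u, d, dx, dt):
        adv = -u * (c - np.roll(c, 1)) / dx
        disp = d * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        return c + dt * (adv + disp)

    nx, dx, dt = 200, 10.0, 1.0          # cells, spacing [m], step [s]
    u, d = 0.5, 5.0                      # velocity [m/s], dispersion [m^2/s]
    c = np.zeros(nx)
    c[10] = 100.0                        # initial pollutant pulse
    assert u * dt / dx <= 1.0 and 2 * d * dt / dx**2 <= 1.0   # stability
    for _ in range(500):
        c = ade_step(c, u, d, dx, dt)    # advected, spreading concentration
    ```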

  18. [Computational chemistry in structure-based drug design].

    Science.gov (United States)

    Cao, Ran; Li, Wei; Sun, Han-Zi; Zhou, Yu; Huang, Niu

    2013-07-01

    Today, the understanding of the sequence and structure of biologically relevant targets is growing rapidly and researchers from many disciplines, physics and computational science in particular, are making significant contributions to modern biology and drug discovery. However, it remains challenging to rationally design small molecular ligands with desired biological characteristics based on the structural information of the drug targets, which demands more accurate calculation of ligand binding free-energy. With the rapid advances in computer power and extensive efforts in algorithm development, physics-based computational chemistry approaches have played more important roles in structure-based drug design. Here we reviewed the newly developed computational chemistry methods in structure-based drug design as well as the elegant applications, including binding-site druggability assessment, large scale virtual screening of chemical database, and lead compound optimization. Importantly, here we address the current bottlenecks and propose practical solutions.

  19. Structural Post-optimisation of a computationally designed Plywood Gridshell

    DEFF Research Database (Denmark)

    Lafuente Hernández, Elisa; Tamke, Martin; Gengnagel, Christoph

    2012-01-01

    Computational design is being commonly used for the exploration of new geometries and systems in architecture. Complex parametric definitions allow not only spatial shaping but also the integration of material simulation and afterwards robotic fabrication. Nevertheless a structurally-efficient desi...

  20. Interactive computer graphics and its role in control system design of large space structures

    Science.gov (United States)

    Reddy, A. S. S. R.

    1985-01-01

    This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model, such as modeling the dynamics, modal analysis, and the control system design methodology, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.

  1. Computer Modelling and Simulation for Inventory Control

    Directory of Open Access Journals (Sweden)

    G.K. Adegoke

    2012-07-01

    Full Text Available This study concerns the role of computer simulation as a device for conducting scientific experiments on inventory control. The stores function ties up a large share of the physical assets and financial resources of a manufacturing outfit, so efficient inventory control is needed: inventory control reduces the cost of production and thereby facilitates the effective and efficient accomplishment of an organization's production objectives. Mathematical and statistical models were used to compute the Economic Order Quantity (EOQ). Test data were obtained from a manufacturing company and simulated. The results generated were used to predict a real-life situation and are presented and discussed. The language of implementation for the three models is Turbo Pascal, chosen for its capability, generality and flexibility as a scientific programming language.
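
    The core calculation is the classical EOQ formula; a small sketch with illustrative figures (the study's original implementation was in Turbo Pascal) is:

    ```python
    # Economic Order Quantity: Q* = sqrt(2 * D * S / H), where D is annual
    # demand, S the cost per order and H the annual holding cost per unit.
    import math

    def eoq(annual_demand, order_cost, holding_cost):
        return math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

    q = eoq(annual_demand=12000, order_cost=50.0, holding_cost=2.4)
    orders_per_year = 12000 / q          # implied ordering frequency
    ```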

  2. A computer model of auditory stream segregation.

    Science.gov (United States)

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
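
    The "leaky integration" principle lends itself to a compact sketch (time constants and input are illustrative, not the published model's parameters): channel excitation accumulates and dissipates with a first-order leak, and the attentional mechanism selects the most active channel.

    ```python
    # First-order leaky integration of rectified filterbank outputs,
    # followed by a winner-take-all attentional selection.
    import numpy as np

    def leaky_integrate(inputs, dt=0.001, tau=0.05):
        """inputs: (n_channels, n_steps) non-negative channel excitations."""
        y = np.zeros_like(inputs)
        for t in range(1, inputs.shape[1]):
            y[:, t] = y[:, t - 1] + dt * (inputs[:, t] - y[:, t - 1]) / tau
        return y

    exc = np.abs(np.random.default_rng(2).standard_normal((8, 500)))
    acc = leaky_integrate(exc)
    attended = acc[:, -1].argmax()       # channel with the greatest activity
    ```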

  4. AMAR: A Computational Model of Autosegmental Phonology

    Science.gov (United States)

    1993-10-01

    the 8th International Joint Conference on Artificial Intelligence. 683-5. Koskenniemi, K. 1984. A general computational model for word-form recognition... Massachusetts Institute of Technology Artificial Intelligence Laboratory, AI-TR 1450, 545 Technology Square, Cambridge, Massachusetts 02139... reader a feel for the workings of AMAR, this chapter will begin with a very simple example based on an artificial tone language with only t

  5. Computational Biology: Modeling Chronic Renal Allograft Injury.

    Science.gov (United States)

    Stegall, Mark D; Borrows, Richard

    2015-01-01

    New approaches are needed to develop more effective interventions to prevent long-term rejection of organ allografts. Computational biology provides a powerful tool to assess the large amount of complex data that is generated in longitudinal studies in this area. This manuscript outlines how our two groups are using mathematical modeling to analyze predictors of graft loss using both clinical and experimental data and how we plan to expand this approach to investigate specific mechanisms of chronic renal allograft injury.

  6. Computing Lagrangian coherent structures from their variational theory.

    Science.gov (United States)

    Farazmand, Mohammad; Haller, George

    2012-03-01

    Using the recently developed variational theory of hyperbolic Lagrangian coherent structures (LCSs), we introduce a computational approach that renders attracting and repelling LCSs as smooth, parametrized curves in two-dimensional flows. The curves are obtained as trajectories of an autonomous ordinary differential equation for the tensor lines of the Cauchy-Green strain tensor. This approach eliminates false positives and negatives in LCS detection by separating true exponential stretching from shear in a frame-independent fashion. Having an explicitly parametrized form for hyperbolic LCSs also allows for their further in-depth analysis and accurate advection as material lines. We illustrate these results on a kinematic model flow and on a direct numerical simulation of two-dimensional turbulence.
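
    The object at the heart of the method can be computed pointwise by finite-differencing a flow map; the sketch below does this for a toy steady velocity field (the field, integration time and step sizes are illustrative assumptions; the paper's method goes further and integrates tensor lines of this field):

    ```python
    # Right Cauchy-Green strain tensor C = F^T F from a finite-difference
    # flow-map gradient F, for a toy incompressible cellular flow.
    import numpy as np
    from scipy.integrate import solve_ivp

    def velocity(t, p):
        x, y = p
        return [-np.pi * np.sin(np.pi * x) * np.cos(np.pi * y),
                np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)]

    def flow_map(p0, T=2.0):
        return solve_ivp(velocity, (0.0, T), p0, rtol=1e-8).y[:, -1]

    def cauchy_green(p, h=1e-4):
        f = np.empty((2, 2))
        for j in range(2):
            dp = np.zeros(2)
            dp[j] = h
            f[:, j] = (flow_map(p + dp) - flow_map(p - dp)) / (2 * h)
        return f.T @ f

    c = cauchy_green(np.array([0.3, 0.4]))
    ftle = np.log(np.linalg.eigvalsh(c).max()) / (2 * 2.0)   # local stretching
    ```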

  7. Computational fluid dynamics modelling in cardiovascular medicine.

    Science.gov (United States)

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.

  8. Cartographic Modeling: Computer-assisted Analysis of Spatially Defined Neighborhoods

    Science.gov (United States)

    Berry, J. K.; Tomlin, C. D.

    1982-01-01

    Cartographic models addressing a wide variety of applications are composed of fundamental map processing operations. These primitive operations are neither data base nor application-specific. By organizing the set of operations into a mathematical-like structure, the basis for a generalized cartographic modeling framework can be developed. Among the major classes of primitive operations are those associated with reclassifying map categories, overlaying maps, determining distance and connectivity, and characterizing cartographic neighborhoods. The conceptual framework of cartographic modeling is established and techniques for characterizing neighborhoods are used as a means of demonstrating some of the more sophisticated procedures of computer-assisted map analysis. A cartographic model for assessing effective roundwood supply is briefly described as an example of a computer analysis. Most of the techniques described have been implemented as part of the map analysis package developed at the Yale School of Forestry and Environmental Studies.
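
    The primitive operations named above (reclassifying, overlaying, neighborhood characterization) reduce to simple raster arithmetic; a tiny sketch with made-up category codes:

    ```python
    # Map algebra on rasters: reclassify a categorical map, overlay it with
    # a continuous one, and characterize neighborhoods with a focal mean.
    import numpy as np

    landuse = np.array([[1, 1, 2], [2, 3, 3], [3, 3, 1]])   # categories
    slope = np.array([[2, 8, 5], [1, 9, 4], [3, 2, 7]])     # percent slope

    forest = landuse == 3                     # reclassify
    steep_forest = forest & (slope > 3)       # overlay two maps

    def focal_mean(grid):
        """3x3 neighborhood mean with edge padding."""
        p = np.pad(grid, 1, mode="edge").astype(float)
        return sum(np.roll(np.roll(p, i, 0), j, 1)
                   for i in (-1, 0, 1) for j in (-1, 0, 1))[1:-1, 1:-1] / 9.0

    smooth_slope = focal_mean(slope)
    ```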

  9. Track structure modelling for ion radiotherapy

    CERN Document Server

    Korcyl, Marta

    2014-01-01

    In its broadest terms, the doctoral dissertation entitled "Track structure modelling for ion radiotherapy" is part of the supporting research background for the ambitious proton radiotherapy project currently under way at the Institute of Nuclear Physics PAN in Kraków. Another broad motivation was the desire to become directly involved in research on a topical and challenging subject: the possible development of a therapy planning system for carbon beam radiotherapy, based in its radiobiological part on the Track Structure model developed by Prof. Robert Katz over 50 years ago. Thus, the general aim of this work was, firstly, to recapitulate the Track Structure model and to propose an updated and complete formulation of this model by incorporating advances made by several authors who had contributed to its development in the past. Secondly, the updated and amended (if necessary) formulation of the model was presented in a form applicable for use in computer codes which would constitute the "radiobio...

  10. Inverse computational feedback optimization imaging applied to time varying changes in a homogeneous structure.

    Science.gov (United States)

    Evans, Daniel J; Manwaring, Mark L; Soule, Terence

    2008-01-01

    The technique of inverse computational feedback optimization imaging allows for the imaging of varying tissue without the continuous need for complex imaging systems such as MRI or CT. Our method trades complex imaging equipment for computing power. The objective is to use a baseline scan from an imaging system along with finite element method computational software to calculate the physically measurable parameters (such as voltage or temperature). As the physically measured parameters change, the computational model is iteratively run until it matches the measured values. Optimization routines are implemented to accelerate the process of finding the new values. Presented is a computational model demonstrating how the inverse imaging technique would work with a simple homogeneous sample containing a circular structure. It demonstrates the ability to locate an object with only a few point measurements. The presented computational model uses swarm optimization techniques to find the object location from the measured data (which in this case is voltage).

  11. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti

  12. Current Computational Challenges for CMC Processes, Properties, and Structures

    Science.gov (United States)

    DiCarlo, James

    2008-01-01

    environment. To put these computational issues in perspective, the various modeling needs within these three areas are briefly discussed in terms of their technical importance and their key controlling mechanistic factors as we know them today. Emphasis is placed primarily on the SiC/SiC ceramic composite system because of its higher temperature capability and enhanced development within the CMC industry. A brief summary is then presented concerning ongoing property studies aimed at addressing these CMC modeling needs within NASA in terms of their computational approaches and recent important results. Finally, an overview perspective is presented on those key areas where further CMC computational studies are needed today to enhance the viability of CMC structural components for high-temperature applications.

  13. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct-access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the low-level waste community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of the responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest, compared with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
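
    The adjoint idea behind ADGEN can be shown on a toy function (this is an illustrative reverse-mode sketch, not ADGEN's FORTRAN machinery): partial derivatives are recorded during the forward calculation, then a single backward sweep accumulates the sensitivities of one response with respect to all inputs.

    ```python
    # Toy adjoint (reverse-mode) differentiation of y = sin(a*b) + b:
    # the "tape" plays the role of ADGEN's adjoint matrix.
    import math

    def forward(a, b):
        t = a * b
        y = math.sin(t) + b
        tape = {"dt_da": b, "dt_db": a, "dy_dt": math.cos(t), "dy_db": 1.0}
        return y, tape

    def adjoint(tape):
        """Backward sweep: seed dy/dy = 1 and accumulate into the inputs."""
        t_bar = 1.0 * tape["dy_dt"]
        return {"dy_da": t_bar * tape["dt_da"],
                "dy_db": t_bar * tape["dt_db"] + 1.0 * tape["dy_db"]}

    y, tape = forward(1.2, 0.7)
    grads = adjoint(tape)    # both sensitivities from one backward pass
    ```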

  14. An information criterion for marginal structural models.

    Science.gov (United States)

    Platt, Robert W; Brookhart, M Alan; Cole, Stephen R; Westreich, Daniel; Schisterman, Enrique F

    2013-04-15

    Marginal structural models were developed as a semiparametric alternative to the G-computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi-likelihood information criterion adapted from its use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at 1 year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV-infected men and women. The marginal structural model specified should reflect the scientific question being addressed but can also assist in the exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit across different specifications.

  15. Computational acoustic modeling of cetacean vocalizations

    Science.gov (United States)

    Gurevich, Michael Dixon

    A framework for computational acoustic modeling of hypothetical vocal production mechanisms in cetaceans is presented. As a specific example, a model of a proposed source in the larynx of odontocetes is developed. Whales and dolphins generate a broad range of vocal sounds, but the exact mechanisms they use are not conclusively understood. In the fifty years since it has become widely accepted that whales can and do make sound, how they do so has remained particularly confounding. Cetaceans' highly divergent respiratory anatomy, along with the difficulty of internal observation during vocalization have contributed to this uncertainty. A variety of acoustical, morphological, ethological and physiological evidence has led to conflicting and often disputed theories of the locations and mechanisms of cetaceans' sound sources. Computational acoustic modeling has been used to create real-time parametric models of musical instruments and the human voice. These techniques can be applied to cetacean vocalizations to help better understand the nature and function of these sounds. Extensive studies of odontocete laryngeal morphology have revealed vocal folds that are consistently similar to a known but poorly understood acoustic source, the ribbon reed. A parametric computational model of the ribbon reed is developed, based on simplified geometrical, mechanical and fluid models drawn from the human voice literature. The physical parameters of the ribbon reed model are then adapted to those of the odontocete larynx. With reasonable estimates of real physical parameters, both the ribbon reed and odontocete larynx models produce sounds that are perceptually similar to their real-world counterparts, and both respond realistically under varying control conditions. Comparisons of acoustic features of the real-world and synthetic systems show a number of consistencies. While this does not on its own prove that either model is conclusively an accurate description of the source, it

  16. A demonstrative model of a lunar base simulation on a personal computer

    Science.gov (United States)

    1985-01-01

    The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. The demonstration model was implemented with Lotus Symphony Version 1.1 on a personal computer running MS-DOS. The personal-computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal-computer-based demonstration model defined a modeling structure that could be employed in a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.

  17. Mechanical Modelling and Computational Issues in Civil Engineering

    OpenAIRE

    2005-01-01

    In this edited book various novel approaches to problems of modern civil engineering are demonstrated. Experts associated within the Lagrange Laboratory present recent research results in civil engineering dealing both with modelling and computational aspects. Many modern topics are covered, such as monumental dams, soil mechanics and geotechnics, granular media, contact and friction problems, damage and fracture, new structural materials, and vibration damping -presenting the state of the ar...

  18. Computer-Aided Process Model For Carbon/Phenolic Materials

    Science.gov (United States)

    Letson, Mischell A.; Bunker, Robert C.

    1996-01-01

    Computer program implements thermochemical model of processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving fabrication of rocket-engine-nozzle parts, also used to optimize fabrication of other structural components, and material-property parameters changed to apply to other materials. Reduces costs by reducing amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.

  19. On Modeling the Instructional Content in Computer Assisted Education

    Directory of Open Access Journals (Sweden)

    Emilia PECHEANU

    2009-12-01

    Full Text Available This paper presents a solution for conceptually modeling the instructional content in computer-assisted education. The different cognitive styles of learners impose different modalities of presenting and structuring the information (the pedagogical knowledge) to be taught. Conceptual organization of the training domain knowledge, with learning-stage phasing, can constitute a better solution to the problem of adapting the instructional system's interaction to users with different cognitive styles and needs.

  20. Computational Design Modelling: Proceedings of the Design Modelling Symposium

    CERN Document Server

    Kilian, Axel; Palz, Norbert; Scheurer, Fabian

    2012-01-01

    This book publishes the peer-reviewed proceedings of the third Design Modelling Symposium Berlin. The conference constitutes a platform for dialogue on experimental practice and research within the field of computationally informed architectural design. More than 60 leading experts examine how the computational processes within this field can develop a broader and less exotic building practice that bears more subtle but powerful traces of the complex tool set and approaches we have developed and studied over recent years. The outcome is a set of new strategies for a reasonable and innovative implementation of digital potential in truly innovative and radical design, guided by both responsibility towards processes and the consequences they initiate.

  1. Comparative Protein Structure Modeling Using MODELLER.

    Science.gov (United States)

    Webb, Benjamin; Sali, Andrej

    2016-06-20

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.
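
    For orientation, a minimal run following the TvLDH example mentioned above uses MODELLER's classic automodel workflow (MODELLER must be installed and licensed, and the alignment and template files must exist locally; the file names follow the tutorial but are placeholders here):

    ```python
    # Comparative modeling of target TvLDH against template 1bdm chain A:
    # fold assignment and alignment are assumed done (TvLDH-1bdmA.ali).
    from modeller import environ
    from modeller.automodel import automodel

    env = environ()
    a = automodel(env,
                  alnfile='TvLDH-1bdmA.ali',   # target-template alignment
                  knowns='1bdmA',              # template of known structure
                  sequence='TvLDH')            # target sequence
    a.starting_model = 1
    a.ending_model = 5                         # build five candidate models
    a.make()                                   # model building + evaluation
    ```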

  2. Interlanguages and synchronic models of computation

    CERN Document Server

    Berka, Alexander Victor

    2010-01-01

    A novel language system has given rise to promising alternatives to standard formal and processor network models of computation. An interstring, linked with an abstract machine environment, shares sub-expressions, transfers data, and spatially allocates resources for the parallel evaluation of dataflow. Formal models called the a-Ram family are introduced, designed to support interstring programming languages (interlanguages). Distinct from dataflow, graph rewriting, and FPGA models, a-Ram instructions are bit level and execute in situ. They support sequential and parallel languages without the space/time overheads associated with the Turing Machine and λ-calculus, enabling massive programs to be simulated. The devices of one a-Ram model, called the Synchronic A-Ram, are fully connected and simpler than FPGA LUTs. A compiler for an interlanguage called Space has been developed for the Synchronic A-Ram. Space is MIMD, strictly typed, and deterministic. Barring memory allocation and compilation, modules are ref...

  3. A Neural Computational Model of Incentive Salience

    Science.gov (United States)

    Zhang, Jun; Berridge, Kent C.; Tindell, Amy J.; Smith, Kyle S.; Aldridge, J. Wayne

    2009-01-01

    Incentive salience is a motivational property with ‘magnet-like’ qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of ‘wanting’ and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered ‘wanting’ only by

  4. A neural computational model of incentive salience.

    Science.gov (United States)

    Zhang, Jun; Berridge, Kent C; Tindell, Amy J; Smith, Kyle S; Aldridge, J Wayne

    2009-07-01

    Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by incorporating

  5. A neural computational model of incentive salience.

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2009-07-01

    Full Text Available Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by

  6. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  7. Computational Models and Virtual Reality. New Perspectives of Research in Chemistry

    Directory of Open Access Journals (Sweden)

    Klaus Mainzer

    1999-11-01

    Full Text Available Molecular models are typical topics of chemical research depending on the technical standards of observation, computation, and representation. Mathematically, molecular structures have been represented by means of graph theory, topology, differential equations, and numerical procedures. With the increasing capabilities of computer networks, computational models and computer-assisted visualization become an essential part of chemical research. Object-oriented programming languages create a virtual reality of chemical structures opening new avenues of exploration and collaboration in chemistry. From an epistemic point of view, virtual reality is a new computer-assisted tool of human imagination and recognition.

  8. DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    Javed Ali

    2012-04-01

    Full Text Available Parallel computing systems compose task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing unit as computing resources, which leads to highly inter-process communication capabilities. The main part of the proposed algorithm is the resource management unit, which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network. We focus on real-time and non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of the task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling and communication. Some important models for task partitioning are also discussed in the paper. We target the algorithm for task partitioning which improves the inter-process communication between the tasks and uses the resources of the system in an efficient manner. The proposed algorithm contributes to minimizing the inter-process communication cost amongst the executing processes.

  9. Calculus and design of discrete velocity models using computer algebra

    Science.gov (United States)

    Babovsky, Hans; Grabmeier, Johannes

    2016-11-01

    In [2, 3], a framework for a calculus with Discrete Velocity Models (DVM) has been derived. The rotational symmetry of the discrete velocities can be modelled algebraically by the action of the cyclic group C4 - or, including reflections, of the dihedral group D4. Taking this point of view, the linearized collision operator can be represented in a compact form as a matrix of elements in the group algebra. In other words, by choosing a special numbering it exhibits a certain block structure which lets it appear as a matrix with entries in a certain polynomial ring. A convenient way of approaching such a structure is the use of a computer algebra system able to treat these (predefined) algebraic structures. We used the computer algebra system FriCAS/AXIOM [4, 5] for the generation of the velocity and collision sets and for the analysis of the structure of the collision operator. Concerning the fluid dynamic limit, the system provides the characterization of sets of collisions and their contribution to the flow parameters. It allows the design of rotationally invariant symmetric models for prescribed Prandtl numbers. The implementation in FriCAS/AXIOM is explained and its results for a 25-velocity model are presented.
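
    The group-theoretic bookkeeping described here is easy to mimic outside a computer algebra system. As a toy illustration (plain Python, not the authors' FriCAS/AXIOM code), the sketch below generates a discrete velocity set as orbits of the C4 rotation r: (x, y) -> (-y, x):

        # Toy sketch: a discrete velocity set closed under the cyclic group C4.
        def rotate90(v):
            x, y = v
            return (-y, x)                   # generator r of C4

        def orbit(v):
            """Orbit of velocity v under C4 = {e, r, r^2, r^3}."""
            out, cur = [], v
            for _ in range(4):
                if cur not in out:
                    out.append(cur)
                cur = rotate90(cur)
            return out

        # Seed velocities generate a rotationally invariant set, e.g. 9 velocities:
        seeds = [(0, 0), (1, 0), (1, 1)]
        velocities = sorted({w for s in seeds for w in orbit(s)})
        print(len(velocities), velocities)   # 9 velocities, closed under C4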

  10. Oscillating water column structural model

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Guild [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jepsen, Richard Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Margaret Ellen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB) are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.

  11. Aeroelastic modelling without the need for excessive computing power

    Energy Technology Data Exchange (ETDEWEB)

    Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Technology, Dept. of Electronic and Electrical Engineering, Loughborough (United Kingdom)

    1996-09-01

    The aeroelastic model presented here was developed specifically to represent a wind turbine manufactured by Northern Power Systems which features a passive pitch control mechanism. It was considered that this particular turbine, which also has low solidity flexible blades, and is free yawing, would provide a stringent test of modelling approaches. It was believed that blade element aerodynamic modelling would not be adequate to properly describe the combination of yawed flow, dynamic inflow and unsteady aerodynamics; consequently a wake modelling approach was adopted. In order to keep computation time limited, a highly simplified, semi-free wake approach (developed in previous work) was used. A similarly simple structural model was adopted, with up to only six degrees of freedom in total. In order to take account of blade (flapwise) flexibility a simple finite element sub-model is used. Good quality data from the turbine have recently been collected and it is hoped to undertake model validation in the near future. (au)

  12. Computer Models in Biomechanics From Nano to Macro

    CERN Document Server

    Kuhl, Ellen

    2013-01-01

    This book contains a collection of papers that were presented at the IUTAM Symposium on “Computer Models in Biomechanics: From Nano to Macro” held at Stanford University, California, USA, from August 29 to September 2, 2011. It contains state-of-the-art papers on: - Protein and Cell Mechanics: coarse-grained model for unfolded proteins, collagen-proteoglycan structural interactions in the cornea, simulations of cell behavior on substrates - Muscle Mechanics: modeling approaches for Ca2+–regulated smooth muscle contraction, smooth muscle modeling using continuum thermodynamical frameworks, cross-bridge model describing the mechanoenergetics of actomyosin interaction, multiscale skeletal muscle modeling - Cardiovascular Mechanics: multiscale modeling of arterial adaptations by incorporating molecular mechanisms, cardiovascular tissue damage, dissection properties of aortic aneurysms, intracranial aneurysms, electromechanics of the heart, hemodynamic alterations associated with arterial remodeling followin...

  13. A computational framework for a database of terrestrial biosphere models

    Science.gov (United States)

    Metzler, Holger; Müller, Markus; Ceballos-Núñez, Verónika; Sierra, Carlos A.

    2016-04-01

    Most terrestrial biosphere models consist of a set of coupled first-order ordinary differential equations. Each equation represents a pool containing carbon with a certain turnover rate. Although such models share some basic mathematical structures, they can have very different properties such as number of pools, cycling rates, and internal fluxes. We present a computational framework that helps analyze the structure and behavior of terrestrial biosphere models, using as an example the process of soil organic matter decomposition. The same framework can also be used for other sub-processes such as carbon fixation or allocation. First, the models have to be fed into a database consisting of simple text files with a common structure. Then they are read in using Python and transformed into an internal 'Model Class' that can be used to automatically create an overview stating the model's structure, state variables, internal and external fluxes. SymPy, a Python library for symbolic mathematics, also helps to calculate the Jacobian matrix at possibly given steady states and the eigenvalues of this matrix. If complete parameter sets are available, the model can also be run using R to simulate its behavior under certain conditions and to support a deeper stability analysis. In this case, the framework is also able to provide phase-plane plots if appropriate. Furthermore, an overview of all the models in the database can be given to help identify their similarities and differences.
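
    The symbolic step is simple to reproduce. The sketch below (a minimal stand-in, not the authors' framework) applies SymPy to a hypothetical two-pool carbon model to obtain the Jacobian, its steady state and the eigenvalues that feed the stability analysis described above:

        import sympy as sp

        # Hypothetical two-pool model: input u, decay rates k1, k2, transfer fraction a.
        C1, C2 = sp.symbols('C1 C2')
        u, k1, k2, a = sp.symbols('u k1 k2 a', positive=True)

        f = sp.Matrix([u - k1*C1,                     # dC1/dt
                       a*k1*C1 - k2*C2])              # dC2/dt

        J = f.jacobian([C1, C2])                      # symbolic Jacobian
        steady = sp.solve(f, [C1, C2], dict=True)[0]  # C1 = u/k1, C2 = a*u/k2
        eigs = J.subs(steady).eigenvals()             # {-k1: 1, -k2: 1} -> stable
        print(J, steady, eigs)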

  14. Computer modeling for optimal placement of gloveboxes

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Olivas, J.D. [Los Alamos National Lab., NM (United States); Finch, P.R. [New Mexico State Univ., Las Cruces, NM (United States)

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
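
    The first stage can be caricatured in a few lines. The sketch below is a generic stand-in (invented flow and distance data, not the Los Alamos model): an evolutionary, mutation-based search over glovebox-to-location assignments that minimizes a weighted material-movement cost.

        import random

        # Hypothetical data: flow[i][j] = material movement between gloveboxes i, j;
        # dist[a][b] = distance between floor locations a and b.
        N = 5
        random.seed(1)
        flow = [[0 if i == j else random.randint(0, 9) for j in range(N)] for i in range(N)]
        dist = [[abs(a - b) for b in range(N)] for a in range(N)]

        def cost(perm):
            """Total movement cost when glovebox i sits at location perm[i]."""
            return sum(flow[i][j] * dist[perm[i]][perm[j]]
                       for i in range(N) for j in range(N))

        # Evolutionary heuristic: keep the best layout, mutate by swapping two slots.
        best = list(range(N))
        for _ in range(2000):
            cand = best[:]
            i, j = random.sample(range(N), 2)
            cand[i], cand[j] = cand[j], cand[i]
            if cost(cand) < cost(best):
                best = cand
        print(best, cost(best))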

  15. Evaluation of computer programs used for structural analyses of impact response of spent fuel shipping casks

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, B A; Gwinn, K W

    1984-05-01

    This report presents the results of a study of impact analyses of a generic spent-fuel cask. The study compares the use and results of three different finite element computer codes. Seven different cask-like model analyses are considered. The models encompass both linear and nonlinear geometric and material behavior. On the basis of the analysis results, this report recommends parameters that are useful in the comparison of different structural finite element computer programs. 5 references, 36 figures, 11 tables.

  16. Computational modeling analyses of RNA secondary structures and phylogenetic inference of evolutionary conserved 5S rRNA in the prokaryotes.

    Science.gov (United States)

    Singh, Vijai; Somvanshi, Pallavi

    2009-04-01

    Bacteria are unicellular, ubiquitous microorganisms which grow in soil, acidic hot springs, radioactive wastes, etc. The genome of bacteria constitutes species-specific conserved regions. The 5S rRNA is one of the most conserved regions determined in each bacterium, with sizes ranging between 110 and 148 bp. On this basis a phylogenetic study of 37 bacterial strains was done, which resulted in the formation of seven clades; furthermore, an RNA secondary structure was generated for each clade. The lowest free energy (ΔG) of the 5S rRNA may indicate the most primitive bacteria, with slow changes occurring throughout evolution, whereas higher free energy indicates less stability during evolution. The RNA secondary structure may provide new insights to understand bacterial evolution and stability.
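
    The free-energy comparison underlying this record is the kind of computation the ViennaRNA package exposes directly. A short sketch, assuming the ViennaRNA Python bindings ("RNA" module) are installed and using an arbitrary toy sequence rather than a real 5S rRNA:

        import RNA

        seq = "GGGAAAUCCCGGGAAAUCCC"        # toy sequence, not a real 5S rRNA
        structure, mfe = RNA.fold(seq)      # MFE structure and its free energy in kcal/mol
        print(structure, round(mfe, 2))
        # More negative free energy = more stable fold; the study above compares
        # such values across clades to reason about evolutionary stability.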

  17. Use of Computer Assisted Career Guidance with Prior Cognitive Structuring. Technical Report Number 3.

    Science.gov (United States)

    Shahnasarian, Michael; Peterson, Gary W.

    Cognitive structuring was implemented by showing 30 subjects a 10-minute videotape that presented Holland's (1985) model of the world of work before they used an interactive computer-assisted guidance system (DISCOVER). The effect of prior structuring was assessed in terms of a subject's representation of the world of work, occupational certainty,…

  18. Variability of Protein Structure Models from Electron Microscopy.

    Science.gov (United States)

    Monroe, Lyman; Terashi, Genki; Kihara, Daisuke

    2017-03-02

    An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much structures can be refined, using a dataset of 49 maps with accompanying structure models. The extent of structure modification, as well as the disagreement between refinement models produced by the two computational methods, scaled inversely with the global and local map resolutions. A general quantitative estimation of deviations of structures for particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited map and the refined models is due to the lack of structural information present in EM maps, and thus these annotations must be used with caution for further applications.

  19. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    40 CFR, Protection of Environment, vol. 24 (2010-07-01), General Requirements, § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., ...

  20. Cognitive control in majority search: A computational modeling approach

    Directory of Open Access Journals (Sweden)

    Hongbin eWang

    2011-02-01

    Full Text Available Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain via the V4-ACC-LPFC-IPS loop for computing the majority function.

  1. Cognitive control in majority search: a computational modeling approach.

    Science.gov (United States)

    Wang, Hongbin; Liu, Xun; Fan, Jin

    2011-01-01

    Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain for computing the majority function.
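
    The two candidate algorithms are simple enough to state directly. The sketch below is a plain-Python paraphrase of their logic (not the authors' neural network models), contrasting the two strategies on a list of binary items:

        import random

        def self_terminating_search(items):
            """Scan and count one-by-one until one category must be the majority."""
            need = len(items) // 2 + 1
            counts = {0: 0, 1: 0}
            for x in items:
                counts[x] += 1
                if counts[x] >= need:
                    return x

        def grouping_search(items, k=3, rng=random):
            """Re-sample subgroups of size k until a congruent (unanimous) sample."""
            while True:
                sample = rng.sample(items, k)
                if len(set(sample)) == 1:
                    return sample[0]

        items = [1, 1, 0, 1, 0, 1, 1]        # majority category is 1
        print(self_terminating_search(items), grouping_search(items))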

  2. Computational modeling of a forward lunge

    DEFF Research Database (Denmark)

    Eriksen, Tine Alkjær; Wieland, Maja Rose; Andersen, Michael Skipper

    2012-01-01

    during forward lunging. Thus, the purpose of the present study was to establish a musculoskeletal model of the forward lunge to computationally investigate the complete mechanical force equilibrium of the tibia during the movement to examine the loading pattern of the cruciate ligaments. A healthy female...... was selected out of a group of healthy subjects, who all performed a forward lunge on a force platform, targeting a knee flexion angle of 90˚. Skin-markers were placed on anatomical landmarks on the subject and the movement was recorded by five video cameras. The three-dimensional kinematic data describing...... the forward lunge movement were extracted and used to develop a biomechanical model of the lunge movement. The model comprised two legs including femur, crus, rigid foot segments and the pelvis. Each leg had 35 independent muscle units, which were recruited according to a minimum fatigue criterion...

  3. Computer model for analyzing sodium cold traps

    Energy Technology Data Exchange (ETDEWEB)

    McPheeters, C C; Raue, D J

    1983-05-01

    A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions. The calculated pressure drop as a function of impurity mass content determines the capacity of the cold trap. The accuracy of the model was checked by comparing calculated mass distributions with experimentally determined mass distributions from literature publications and with results from our own cold trap experiments. The comparisons were excellent in all cases. A parametric study was performed to determine which design variables are most important in maximizing cold trap capacity.

  4. A Graph Model for Imperative Computation

    CERN Document Server

    McCusker, Guy

    2009-01-01

    Scott's graph model is a lambda-algebra based on the observation that continuous endofunctions on the lattice of sets of natural numbers can be represented via their graphs. A graph is a relation mapping finite sets of input values to output values. We consider a similar model based on relations whose input values are finite sequences rather than sets. This alteration means that we are taking into account the order in which observations are made. This new notion of graph gives rise to a model of affine lambda-calculus that admits an interpretation of imperative constructs including variable assignment, dereferencing and allocation. Extending this untyped model, we construct a category that provides a model of typed higher-order imperative computation with an affine type system. An appropriate language of this kind is Reynolds's Syntactic Control of Interference. Our model turns out to be fully abstract for this language. At a concrete level, it is the same as Reddy's object spaces model, which was the first "...

  5. Computational modeling and validation studies of 3-D structure of neuraminidase protein of H1N1 influenza A virus and subsequent in silico elucidation of piceid analogues as its potent inhibitors.

    Science.gov (United States)

    Gupta, Chhedi Lal; Akhtar, Salman; Bajpaib, Preeti; Kandpal, K N; Desai, G S; Tiwari, Ashok K

    2013-01-01

    The emergence of drug-resistant variants of the influenza A virus in recent years has created a great need for the development of novel neuraminidase inhibitors for controlling the pandemic. The neuraminidase (NA) protein of the influenza virus has been the most promising target for anti-influenza drugs. However, in the absence of any experimental structure of the NA protein of H1N1 influenza A virus in complex with drugs such as zanamivir and oseltamivir, a comprehensive study of the interaction of these drug molecules with the target protein has been missing. Hence, in this study a computational 3-D structure of the neuraminidase of H1N1 influenza A virus was developed using a homology modeling technique, and validated for reliability with the ProSA web server, in terms of energy profiles and Z-scores, and with the PROCHECK program, via the Ramachandran plot. Further, the developed 3-D model was employed in docking studies with Piceid and its analogs. In this context, two novel compounds (ChemBank IDs 2110359 and 3075417) were found to be more potent inhibitors of neuraminidase than the control drugs zanamivir and oseltamivir, in terms of their binding energies, inhibition constants (Ki) and hydrogen-bond interactions in the protein-ligand complex. The interaction of these compounds with the NA protein was studied in detail at the molecular level.
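
    The quantities compared here are linked by a standard thermodynamic relation: a docking-derived binding free energy dG corresponds to an inhibition constant Ki = exp(dG/RT). A small conversion sketch with illustrative numbers (not values from the paper):

        import math

        R = 1.987e-3      # gas constant, kcal/(mol*K)
        T = 298.15        # temperature, K

        def ki_from_dg(dg_kcal_per_mol):
            """Inhibition constant (molar) from binding free energy: Ki = exp(dG/RT)."""
            return math.exp(dg_kcal_per_mol / (R * T))

        # Illustrative only: a compound docked at -9.5 kcal/mol vs a control at -7.0.
        for dg in (-9.5, -7.0):
            print(f"dG = {dg} kcal/mol -> Ki ~ {ki_from_dg(dg):.2e} M")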

  6. Probabilistic Modeling of Timber Structures

    DEFF Research Database (Denmark)

    Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2005-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present pro...... probabilistic model for these basic properties is presented and possible refinements are given related to updating of the probabilistic model given new information, modeling of the spatial variation of strength properties and the duration of load effects.

  7. Structure problems in the analog computation; Problemes de structure dans le calcul analogique

    Energy Technology Data Exchange (ETDEWEB)

    Braffort, P.L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1957-07-01

    Recent mathematical developments have shown the importance of the elementary structures (algebraic, topological, etc.) underlying the great domains of classical analysis. Such structures in analog computation are brought into evidence, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as addition triangles, integrators, phase inverters and function generators, are also studied. The analog method yields only functions of the variable time as the results of its computations. But the course of computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structures as analog computation. The structure analysis permits fruitful comparisons between the several domains of applied mathematics and suggests new important domains of application for the analog method. (M.P.)

  8. A new fire insulation performance computational model of structural members

    Institute of Scientific and Technical Information of China (English)

    王荣辉; 张靖岩; 史毅; 吕振纲; 仝玉; 王礼; 张泽江

    2012-01-01

    Based on heat transfer theory and the standard temperature-rise curve, and taking a vermiculite fire door as an example, a model predicting the temperature rise on the unexposed face of a fire door was built. The fire door was divided into door leaf and doorframe, and the thermal conductivity and thermal resistance of each part were determined separately. By establishing a heat balance equation for the unexposed face of the fire door, the temperature-rise prediction model was obtained. Comparison between model predictions and experimental results shows that the error of the model is acceptable.

  9. Protein 3D structure computed from evolutionary sequence variation.

    Directory of Open Access Journals (Sweden)

    Debora S Marks

    Full Text Available The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7-4.8 Å C(α)-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of

  10. Protein 3D structure computed from evolutionary sequence variation.

    Science.gov (United States)

    Marks, Debora S; Colwell, Lucy J; Sheridan, Robert; Hopf, Thomas A; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2011-01-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing.In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy.We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7-4.8 Å C(α)-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein structures
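
    The hard step described above is separating true couplings from transitive correlations, which the paper solves with a global maximum entropy fit. The naive local baseline it improves upon is the mutual information between alignment columns, sketched below on a toy alignment (this is the noisy correlation measure, not the EVfold method itself):

        from collections import Counter
        from math import log

        # Toy multiple sequence alignment (rows = homologous sequences).
        msa = ["AVLK", "AVLR", "GVIK", "GVIR", "AVLK", "GVIR"]

        def mutual_information(msa, i, j):
            """MI between columns i and j: a local, noisy proxy for coupling."""
            n = len(msa)
            fi = Counter(s[i] for s in msa)
            fj = Counter(s[j] for s in msa)
            fij = Counter((s[i], s[j]) for s in msa)
            return sum((c / n) * log((c / n) / ((fi[a] / n) * (fj[b] / n)))
                       for (a, b), c in fij.items())

        L = len(msa[0])
        pairs = sorted(((i, j) for i in range(L) for j in range(i + 1, L)),
                       key=lambda p: -mutual_information(msa, *p))
        print(pairs[:3])   # pair (0, 2) ranks first: those columns co-vary perfectly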

  11. An improved damaging model for structured clays

    Institute of Scientific and Technical Information of China (English)

    姜岩; 雷华阳; 郑刚; 徐舜华

    2008-01-01

    An improved damaging model formulated within the framework of a bounding surface for structured clays is proposed. The model is intended to describe the effects of structure degradation due to geotechnical loading. The predictive capability of the model is compared with results of triaxial compression tests on Tianjin soft clays. The results show that, by incorporating a new damage function into the model, the reduction of elastic bulk and shear moduli with elastic deformations, and of plastic bulk and shear moduli with plastic deformations, can be captured appreciably. Before the axial strain reaches 15%, the axial strain computed from the model is smaller than that from the test under the drained condition. Under the undrained condition, after the axial strain reaches 1%, the axial strain increases quickly because of the complete loss of structure and stiffness; the result computed from the model is then nearly equal to that from the model without the damage function, owing to the smaller plastic strain in the undrained test.

  12. Computational Tools To Model Halogen Bonds in Medicinal Chemistry.

    Science.gov (United States)

    Ford, Melissa Coates; Ho, P Shing

    2016-03-10

    The use of halogens in therapeutics dates back to the earliest days of medicine when seaweed was used as a source of iodine to treat goiters. The incorporation of halogens to improve the potency of drugs is now fairly standard in medicinal chemistry. In the past decade, halogens have been recognized as direct participants in defining the affinity of inhibitors through a noncovalent interaction called the halogen bond or X-bond. Incorporating X-bonding into structure-based drug design requires computational models for the anisotropic distribution of charge and the nonspherical shape of halogens, which lead to their highly directional geometries and stabilizing energies. We review here current successes and challenges in developing computational methods to introduce X-bonding into lead compound discovery and optimization during drug development. This fast-growing field will push further development of more accurate and efficient computational tools to accelerate the exploitation of halogens in medicinal chemistry.

  13. Modeling Reality - How Computers Mirror Life

    Science.gov (United States)

    Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona

    2005-01-01

    The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems, but who does not possess a specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well known simple concepts: Cellular automata - Game of Life, Shannon's formula - Game of twenty questions, Game theory - Television quiz, etc. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas, related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning of even the more complex topics an enjoyable pleasure.

  14. COMPUTER MODELING OF EMBRYONIC MORTALITY AT CRIOCONSERVATION

    Directory of Open Access Journals (Sweden)

    Gorbunov,

    2016-08-01

    Full Text Available The purpose of the research was to determine, using the developed simulation model, how the heterogeneity of mammalian embryos and the effectiveness of the cryoconservation steps influence embryo viability. The model is based on analytical expressions that reflect the main causes of embryonic mortality during in vitro and in vivo cultivation, cryoconservation and embryo transplantation. The reduction of viability depends on a set of biological factors, such as the animal species, the state of donor and recipient, and the quality of the embryos, and on technological ones, such as the efficiency of the cryopreservation method and of the embryo transplantation. The computer experiment showed that embryo viability diverges over a range from 0 to 100% depending on variations of the biological parameters, whereas the efficiency index of the chosen technology has an inaccuracy of about 1%. A comparative analysis of alternative embryo cryopreservation technologies showed the maximum efficiency of the stages of cryoprotectant use, the freezing regime, and in vitro and in vivo cultivation of the biological object. Computer modeling makes it possible to greatly reduce the spread of embryo viability results obtained in different experiments, and thereby to shorten the time, monetary costs and the number of laboratory animals slaughtered in obtaining reliable results.
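
    The paper's analytical expressions are not reproduced here, but the structure it describes, viability eroded at each successive step, can be caricatured as a chain of stage efficiencies. A minimal Monte Carlo sketch under that assumption (all probabilities invented for illustration):

        import random

        # Assumed multiplicative model: each stage passes an embryo with probability p.
        stages = {"in vitro culture": 0.90,
                  "cryoconservation": 0.75,
                  "thawing":          0.85,
                  "transplantation":  0.60}

        def simulate(n_embryos, rng=random):
            alive = n_embryos
            for stage, p in stages.items():
                alive = sum(rng.random() < p for _ in range(alive))
            return alive

        random.seed(0)
        runs = [simulate(100) for _ in range(1000)]
        print(min(runs), sum(runs) / len(runs), max(runs))   # spread across experiments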

  15. Heart Modeling, Computational Physiology and the IUPS Physiome Project

    Science.gov (United States)

    Hunter, Peter J.

    The Physiome Project of the International Union of Physiological Sciences (IUPS) is attempting to provide a comprehensive framework for modelling the human body using computational methods which can incorporate the biochemistry, biophysics and anatomy of cells, tissues and organs. A major goal of the project is to use computational modelling to analyse integrative biological function in terms of underlying structure and molecular mechanisms. To support that goal the project is developing XML markup languages (CellML & FieldML) for encoding models, and software tools for creating, visualizing and executing these models. It is also establishing web-accessible physiological databases dealing with model-related data at the cell, tissue, organ and organ system levels. Two major developments in current medicine are, on the one hand, the much publicised genomics (and soon proteomics) revolution and, on the other, the revolution in medical imaging in which the physiological function of the human body can be studied with a plethora of imaging devices such as MRI, CT, PET, ultrasound, electrical mapping, etc. The challenge for the Physiome Project is to link these two developments for an individual - to use complementary genomic and medical imaging data, together with computational modelling tailored to the anatomy, physiology and genetics of that individual, for patient-specific diagnosis and treatment.

  16. Structure-based design of a potent and selective small peptide inhibitor of Mycobacterium tuberculosis 6-hydroxymethyl-7, 8-dihydropteroate synthase: a computer modelling approach.

    Science.gov (United States)

    Rao, Gita Subba; Kumar, Manoj

    2008-06-01

    In an attempt to design novel anti-TB drugs, the target chosen is the enzyme 6-hydroxymethyl-7,8-dihydropteroate synthase (DHPS), which is an attractive target since it is present in microorganisms but not in humans. The existing drugs for this target are the sulfa drugs, which have been used for about seven decades. However, single mutations in the DHPS gene can cause resistance to sulfa drugs. Therefore, there is a need for the design of novel drugs. Based on the recently determined crystal structure of Mycobacterium tuberculosis (M.tb) DHPS complexed with a known substrate analogue, and on the crystal structures of E. coli DHPS and Staphylococcus aureus DHPS, we have identified a dipeptide inhibitor with the sequence WK. Docking calculations indicate that this peptide has a significantly higher potency than the sulfa drugs. In addition, the potency is 70-90 times higher for M.tb DHPS as compared to that for the pterin and folate-binding sites of key human proteins. Thus, the designed inhibitor is a promising lead compound for the development of novel antimycobcaterial agents.

  17. SPAR Model Structural Efficiencies

    Energy Technology Data Exchange (ETDEWEB)

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC’s Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of a standard approach for emergency core cooling following containment failure Some of the related issues were not fully resolved. This project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other higher-priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are • SPAR model transparency • Common cause failure modeling deficiencies and approaches • AC and DC modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches

  18. Optimization and mathematical modeling in computer architecture

    CERN Document Server

    Sankaralingam, Karu; Nowatzki, Tony

    2013-01-01

    In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms t
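
    A concrete miniature shows what "describing a system to an optimization tool" looks like in practice. The sketch below is a generic resource-allocation toy in PuLP (invented data, not one of the book's four case studies): assign tasks to machines at minimum cost subject to capacity limits.

        from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

        tasks, machines = ["t1", "t2", "t3"], ["m1", "m2"]
        cost = {("t1", "m1"): 4, ("t1", "m2"): 6, ("t2", "m1"): 5,
                ("t2", "m2"): 3, ("t3", "m1"): 7, ("t3", "m2"): 5}
        capacity = {"m1": 2, "m2": 2}                         # max tasks per machine

        prob = LpProblem("allocation", LpMinimize)
        x = LpVariable.dicts("x", cost.keys(), cat=LpBinary)  # x[t, m] = 1 if t runs on m

        prob += lpSum(cost[k] * x[k] for k in cost)           # total cost objective
        for t in tasks:                                       # each task placed exactly once
            prob += lpSum(x[(t, m)] for m in machines) == 1
        for m in machines:                                    # machine capacity limits
            prob += lpSum(x[(t, m)] for t in tasks) <= capacity[m]

        prob.solve()
        print([k for k in cost if x[k].value() == 1])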

  19. Dynamical Models for Computer Viruses Propagation

    Directory of Open Access Journals (Sweden)

    José R. C. Piqueira

    2008-01-01

    Full Text Available Nowadays, digital computer systems and networks are the main engineering tools, being used in planning, design, operation, and control of all sizes of building, transportation, machinery, business, and life-maintaining devices. Consequently, computer viruses became one of the most important sources of uncertainty, contributing to decreased reliability of vital activities. A lot of antivirus programs have been developed, but they are limited to detecting and removing infections, based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, a trial on modeling computer virus propagation dynamics relates it to other notable events occurring in the network, permitting the establishment of preventive policies in network management. Data from three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, are applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network.
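
    Of the two identification techniques named, the autoregressive one is easy to sketch. Below, an AR(p) model is fitted by least squares to a series of infection counts (synthetic data, not the collected virus traces) and used for a one-step-ahead forecast:

        import numpy as np

        def fit_ar(series, p):
            """Least-squares fit of y[t] = a1*y[t-p] + ... + ap*y[t-1]."""
            y = np.asarray(series, dtype=float)
            X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
            coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
            return coef                       # coef[i] multiplies lag p - i

        # Synthetic stand-in for a virus-propagation time series.
        rng = np.random.default_rng(0)
        t = np.arange(200)
        counts = 50 + 30 * np.sin(t / 10) + rng.normal(0, 2, t.size)

        coef = fit_ar(counts, p=5)
        forecast = counts[-5:] @ coef         # one-step-ahead prediction
        print(coef.round(3), round(forecast, 2))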

  20. Design and Realization of Structural Security Model Based on Trusted Computing

    Institute of Scientific and Technical Information of China (English)

    宗涛

    2012-01-01

    In view of the fact that trusted computing can make up for the security risks of traditional protection technologies in architecture design and protection strength, this paper proposes a structural security model based on trusted computing. Starting from the chain of trust, it introduces an embedded trusted security module, a smart card and other modules into the trusted computing platform. The realization of the key technologies is introduced, including a trusted hardware platform with the J3210 chip at its core, the embedded operating system JetOS, BIOS security enhancement, operating system security enhancement, and smart-card-based user identity authentication.

  1. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.

  2. Toxicity Assessment of Atrazine and Related Triazine Compounds in the Microtox Assay, and Computational Modeling for Their Structure-Activity Relationship

    Directory of Open Access Journals (Sweden)

    Jerzy Leszczynski

    2000-10-01

    Full Text Available The triazines are a group of chemically similar herbicides including atrazine, cyanazine, and propazine, primarily used to control broadleaf weeds. About 64 to 80 million lbs of atrazine alone are used each year in the United States, making it one of the two most widely used pesticides in the country. All triazines are somewhat persistent in water and mobile in soil. They are among the most frequently detected pesticides in groundwater. They are considered as possible human carcinogens (Group C) based on an increase in mammary gland tumors in female laboratory animals. In this research, we performed the Microtox Assay to investigate the acute toxicity of a significant number of triazines including atrazine, atraton, ametryne, bladex, prometryne, and propazine, and some of their degradation products including atrazine desethyl, atrazine deisopropyl, and didealkylated triazine. Tests were carried out as described by Azur Environmental [1]. The procedure measured the relative acute toxicity of triazines, producing data for the calculation of triazine concentrations effecting 50% reduction in bioluminescence (EC50s). Quantitative structure-activity relationships (QSAR) were examined based on the molecular properties obtained from quantum mechanical predictions performed for each compound. Toxicity tests yielded EC50 values of 39.87, 273.20, 226.80, 36.96, 81.86, 82.68, 12.74, 11.80, and 78.50 mg/L for atrazine, propazine, prometryne, atraton, atrazine desethyl, atrazine deisopropyl, didealkylated triazine, ametryne, and bladex, respectively; indicating that ametryne was the most toxic chemical while propazine was the least toxic. QSAR evaluation resulted in a coefficient of determination (r2) of 0.86, indicating a good value of toxicity prediction based on the chemical structures/properties of tested triazines.
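
    The QSAR step amounts to regressing toxicity on computed molecular properties. A minimal sketch of that calculation (synthetic descriptor values, not the paper's quantum-mechanical ones) shows how the coefficient of determination r2 is obtained:

        import numpy as np

        # Rows: compounds; columns: hypothetical computed descriptors.
        X = np.array([[1.2, 0.4], [0.8, 0.9], [1.5, 0.2], [0.6, 1.1],
                      [1.1, 0.5], [0.9, 0.8], [1.4, 0.3], [0.7, 1.0]])
        log_ec50 = np.array([1.60, 2.44, 2.36, 1.57, 1.91, 1.92, 1.11, 1.07])

        A = np.column_stack([X, np.ones(len(X))])   # add intercept column
        beta, *_ = np.linalg.lstsq(A, log_ec50, rcond=None)

        pred = A @ beta
        ss_res = np.sum((log_ec50 - pred) ** 2)
        ss_tot = np.sum((log_ec50 - log_ec50.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot                    # coefficient of determination
        print(beta.round(3), round(r2, 3))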

  3. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

    Full Text Available Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers to the network and the removing of old computers from the network are considered. Meanwhile, the computers are equipped with antivirus software on the computer network. The computer virus model is established. Through the analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis to control the spread of computer virus.
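
    Compartmental models of this kind are straightforward to integrate numerically. The sketch below uses a generic SIR-style system with node turnover (assumed parameter values, not the paper's exact equations) to show how the equilibria discussed above can be explored by simulation:

        from scipy.integrate import solve_ivp

        # Generic SIR-type virus model: recruitment b, removal mu, infection beta,
        # cure/protection rate gamma (fractions of the computer population).
        b, mu, beta, gamma = 0.02, 0.02, 0.5, 0.1

        def rhs(t, y):
            S, I, R = y
            return [b - beta * S * I - mu * S,        # susceptible computers
                    beta * S * I - (gamma + mu) * I,  # infected computers
                    gamma * I - mu * R]               # protected/recovered computers

        sol = solve_ivp(rhs, (0, 400), [0.95, 0.05, 0.0])
        S, I, R = sol.y[:, -1]
        print(round(S, 3), round(I, 3), round(R, 3))  # settles near the endemic equilibrium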

  4. Partitioned Fluid-Structure Interaction for Full Rotor Computations Using CFD

    DEFF Research Database (Denmark)

    Heinz, Joachim Christian

    In the design of modern wind turbines with long and slender rotor blades it becomes increasingly important to model and understand the evolving aero-elastic effects in more detail. Standard state-of-the-art aero-elastic simulation tools for wind turbines usually employ a blade element momentum (BEM) based aerodynamic model which is computationally cheap but includes several limitations and corrections in order to account for three-dimensional and unsteady effects. The present work discusses the development of an aero-elastic simulation tool where high-fidelity computational fluid dynamics (CFD) is used to model the aerodynamics of the flexible wind turbine rotor. Respective CFD computations are computationally expensive but do not show the limitations of the BEM-based models. It is one of the first times that high-fidelity fluid-structure interaction (FSI) simulations are used to model the aero...

  5. Computational design of patterned interfaces using reduced order models

    Science.gov (United States)

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  6. Computer analysis of microporous structure by employing the LBET class models with various variants of the adsorption energy distribution in comparison to the classical equations.

    Science.gov (United States)

    Kwiatkowski, Mirosław

    2007-02-27

    The paper presents a properties study of the new LBET class models for heterogeneous multilayer adsorption and its applicability to analysis of microporous carbonaceous adsorbents in comparison to the selected classical equations. This paper shows that the LBET formulas give a good insight into the pore size distribution and dominant pore shape. Moreover, they provide more reliable evaluation of material surface area than the popular classical equations. This research constitutes a significant development and completion of the author's earlier works and provides a basis for the evaluation of reliability of parameters calculated for real systems.

  7. Computational analysis of RNA structures with chemical probing data.

    Science.gov (United States)

    Ge, Ping; Zhang, Shaojie

    2015-06-01

    RNAs play various roles, not only as the genetic codes to synthesize proteins, but also as the direct participants of biological functions determined by their underlying high-order structures. Although many computational methods have been proposed for analyzing RNA structures, their accuracy and efficiency are limited, especially when applied to the large RNAs and the genome-wide data sets. Recently, advances in parallel sequencing and high-throughput chemical probing technologies have prompted the development of numerous new algorithms, which can incorporate the auxiliary structural information obtained from those experiments. Their potential has been revealed by the secondary structure prediction of ribosomal RNAs and the genome-wide ncRNA function annotation. In this review, the existing probing-directed computational methods for RNA secondary and tertiary structure analysis are discussed.
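
    As a toy illustration of how probing data can steer secondary structure prediction, the sketch below runs a Nussinov-style dynamic program in which each candidate base pair is down-weighted when chemical-probing reactivities mark its bases as likely unpaired. The sequence, reactivity values, and scoring scheme are invented for the demo; production methods instead fold with experimentally calibrated pseudo-free-energy terms.

      import numpy as np

      PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

      def fold_score(seq, react, min_loop=3):
          n = len(seq)
          dp = np.zeros((n, n))
          for span in range(min_loop + 1, n):
              for i in range(n - span):
                  j = i + span
                  best = dp[i + 1][j]  # case: base i left unpaired
                  for k in range(i + min_loop + 1, j + 1):
                      if (seq[i], seq[k]) in PAIRS:
                          bonus = 1.0 - 0.5 * (react[i] + react[k])  # probing-derived weight
                          right = dp[k + 1][j] if k + 1 <= j else 0.0
                          best = max(best, bonus + dp[i + 1][k - 1] + right)
                  dp[i][j] = best
          return dp[0][n - 1]

      seq = "GGGAAAUCCC"
      react = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.2, 0.1, 0.1, 0.1]  # high = likely unpaired
      print("probing-weighted pairing score:", fold_score(seq, react))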

  8. Computational modeling of the obstructive lung diseases asthma and COPD.

    Science.gov (United States)

    Burrowes, Kelly Suzanne; Doel, Tom; Brightling, Chris

    2014-11-28

    Asthma and chronic obstructive pulmonary disease (COPD) are characterized by airway obstruction and airflow limitation and pose a huge burden to society. These obstructive lung diseases impact the lung physiology across multiple biological scales. Environmental stimuli are introduced via inhalation at the organ scale, and consequently impact upon the tissue, cellular and sub-cellular scale by triggering signaling pathways. These changes are propagated upwards to the organ level again and vice versa. In order to understand the pathophysiology behind these diseases, we need to integrate and understand changes occurring across these scales, and this is the driving force for multiscale computational modeling. There is an urgent need for improved diagnosis and assessment of obstructive lung diseases. Standard clinical measures are based on global function tests which ignore the highly heterogeneous regional changes that are characteristic of obstructive lung disease pathophysiology. Advances in scanning technology such as hyperpolarized gas MRI have led to new regional measurements of ventilation, perfusion and gas diffusion in the lungs, while new image processing techniques allow these measures to be combined with information from structural imaging such as Computed Tomography (CT). However, it is not yet known how to derive clinical measures for obstructive diseases from this wealth of new data. Computational modeling offers a powerful approach for investigating this relationship between imaging measurements and disease severity, and understanding the effects of different disease subtypes, which is key to developing improved diagnostic methods. Gaining an understanding of a system as complex as the respiratory system is difficult if not impossible via experimental methods alone. Computational models offer a complementary method to unravel the structure-function relationships occurring within a multiscale, multiphysics system such as this. Here we review the current state...

  9. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    Science.gov (United States)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  10. Numerical Modelling of Structures with Uncertainties

    Directory of Open Access Journals (Sweden)

    Kahsin Maciej

    2017-04-01

    Full Text Available The nature of environmental interactions, as well as large dimensions and complex structure of marine offshore objects, make designing, building and operation of these objects a great challenge. This is the reason why a vast majority of investment cases of this type include structural analysis, performed using scaled laboratory models and complemented by extended computer simulations. The present paper focuses on FEM modelling of the offshore wind turbine supporting structure. Then problem is studied using the modal analysis, sensitivity analysis, as well as the design of experiment (DOE and response surface model (RSM methods. The results of modal analysis based simulations were used for assessing the quality of the FEM model against the data measured during the experimental modal analysis of the scaled laboratory model for different support conditions. The sensitivity analysis, in turn, has provided opportunities for assessing the effect of individual FEM model parameters on the dynamic response of the examined supporting structure. The DOE and RSM methods allowed to determine the effect of model parameter changes on the supporting structure response.

  11. A computational model for cancer growth by using complex networks

    Science.gov (United States)

    Galvão, Viviane; Miranda, José G. V.

    2008-09-01

    In this work we propose a computational model to investigate the proliferation of cancerous cells by using complex networks. In our model the network represents the structure of available space in the cancer propagation. The computational scheme considers a cancerous cell randomly included in the complex network. As the system evolves, the cells can assume three states: proliferative, non-proliferative, and necrotic. Our results were compared with experimental data obtained from three human lung carcinoma cell lines. The computational simulations show that the cancerous cells have a Gompertzian growth. Also, our model simulates the formation of necrosis, increase of density, and resource diffusion to regions of lower nutrient concentration. We find that cancer growth is very similar in random and small-world networks. On the other hand, the topological structure of the small-world network is more affected. The scale-free network has the largest rates of cancer growth due to hub formation. Finally, our results indicate that for different average degrees the rate of cancer growth is related to the available space in the network.
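
    A minimal agent sketch of the scheme described: tumour cells occupy nodes of a complex network (a Watts-Strogatz small-world graph here) and proliferate into free neighbouring nodes, becoming non-proliferative when trapped. The rules and parameters are simplified stand-ins for the paper's model, but the occupied-node count traces the slow-start, rapid-growth, saturation shape of a Gompertz-like curve.

      import random
      import networkx as nx

      random.seed(1)
      G = nx.watts_strogatz_graph(n=2000, k=6, p=0.05)   # small-world "available space"
      state = {v: "empty" for v in G}
      state[random.choice(list(G))] = "proliferative"    # seed cancerous cell

      history = []
      for step in range(60):
          for v in [u for u in G if state[u] == "proliferative"]:
              free = [w for w in G[v] if state[w] == "empty"]
              if free:
                  state[random.choice(free)] = "proliferative"  # divide into free space
              else:
                  state[v] = "non-proliferative"                # trapped cells stop dividing
          history.append(sum(s != "empty" for s in state.values()))

      print(history[::10])   # slow start, rapid growth, then saturation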

  12. Computational models of intergroup competition and warfare.

    Energy Technology Data Exchange (ETDEWEB)

    Letendre, Kenneth (University of New Mexico); Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  13. Computer modeling of thermoelectric generator performance

    Science.gov (United States)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating, the cold junction can be adjusted for solar radiation, and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
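
    The heat terms listed in the abstract combine into a back-of-envelope single-couple model: Seebeck EMF drives a current, Joule and Peltier terms load the hot junction, and conduction leaks heat through the legs. The sketch below neglects the Thomson term and uses rough illustrative material values, not DEGRA 2 inputs.

      S = 400e-6      # Seebeck coefficient of the couple, V/K (illustrative)
      R_int = 0.01    # internal electrical resistance, ohm
      K_th = 0.05     # thermal conductance of the legs, W/K
      Th, Tc = 800.0, 450.0                       # hot/cold junction temperatures, K

      R_load = R_int                              # matched load maximises power
      I = S * (Th - Tc) / (R_int + R_load)        # current from Seebeck EMF
      P_out = I**2 * R_load                       # electrical output to the load
      # hot-side heat: Peltier pumping + conduction - half the Joule heat returned
      Q_hot = S * I * Th + K_th * (Th - Tc) - 0.5 * I**2 * R_int
      print(f"I = {I:.2f} A, P = {P_out:.2f} W, efficiency = {P_out / Q_hot:.3f}")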

  14. A computational model of motor neuron degeneration.

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L F

    2014-08-20

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations.

  15. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  16. Direct modeling for computational fluid dynamics

    Science.gov (United States)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  17. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  18. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  19. Development and applications of two computational procedures for determining the vibration modes of structural systems. [aircraft structures - aerospaceplanes

    Science.gov (United States)

    Kvaternik, R. G.

    1975-01-01

    Two computational procedures for analyzing complex structural systems for their natural modes and frequencies of vibration are presented. Both procedures are based on a substructures methodology and both employ the finite-element stiffness method to model the constituent substructures. The first procedure is a direct method based on solving the eigenvalue problem associated with a finite-element representation of the complete structure. The second procedure is a component-mode synthesis scheme in which the vibration modes of the complete structure are synthesized from modes of substructures into which the structure is divided. The analytical basis of the methods contains a combination of features which enhance the generality of the procedures. The computational procedures exhibit a utilitarian character with respect to versatility, computational convenience, and ease of computer implementation. The computational procedures were implemented in two special-purpose computer programs. The results of the application of these programs to several structural configurations are shown and comparisons are made with experiment.
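
    The direct method described here ultimately reduces to the generalized eigenvalue problem K x = omega^2 M x for the assembled stiffness and mass matrices. A minimal sketch for a 3-DOF spring-mass chain follows; the matrices are illustrative stand-ins, not from the report.

      import numpy as np
      from scipy.linalg import eigh

      k, m = 1000.0, 2.0
      K = k * np.array([[ 2, -1,  0],
                        [-1,  2, -1],
                        [ 0, -1,  1]])          # assembled stiffness matrix
      M = m * np.eye(3)                         # lumped mass matrix

      w2, modes = eigh(K, M)                    # generalized symmetric eigensolver
      freqs_hz = np.sqrt(w2) / (2 * np.pi)
      print("natural frequencies [Hz]:", freqs_hz)
      print("first mode shape:", modes[:, 0])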

  20. Models as coherent sign structures

    NARCIS (Netherlands)

    Gazendam, H.W.M.; Jorna, R.J.J.M.; Gazendam, H.W.M.; Cijsouw, R.S.

    2003-01-01

    This chapter explains how models function as the glue that keeps organizations together. In an analysis of models from a semiotic and cognitive point of view, assumptions about evolutionary dynamics and bounded rationality are used. It is concluded that a model is a coherent sign structure,

  1. PDBparam: Online Resource for Computing Structural Parameters of Proteins.

    Science.gov (United States)

    Nagarajan, R; Archana, A; Thangakani, A Mary; Jemimah, S; Velmurugan, D; Gromiha, M Michael

    2016-01-01

    Understanding the structure-function relationship in proteins is a longstanding goal in molecular and computational biology. The development of structure-based parameters has helped to relate the structure with the function of a protein. Although several structural features have been reported in the literature, no single server can calculate a wide-ranging set of structure-based features from protein three-dimensional structures. In this work, we have developed a web-based tool, PDBparam, for computing more than 50 structure-based features for any given protein structure. These features are classified into four major categories: (i) interresidue interactions, which include short-, medium-, and long-range interactions, contact order, long-range order, total contact distance, contact number, and multiple contact index, (ii) secondary structure propensities such as α-helical propensity, β-sheet propensity, and propensity of amino acids to exist at various positions of α-helix and amino acid compositions in high B-value regions, (iii) physicochemical properties containing ionic interactions, hydrogen bond interactions, hydrophobic interactions, disulfide interactions, aromatic interactions, surrounding hydrophobicity, and buriedness, and (iv) identification of binding site residues in protein-protein, protein-nucleic acid, and protein-ligand complexes. The server can be freely accessed at http://www.iitm.ac.in/bioinfo/pdbparam/. We suggest the use of PDBparam as an effective tool for analyzing protein structures.
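
    One of the interresidue features listed, contact order, is straightforward to compute from C-alpha coordinates alone: the average sequence separation of residue pairs in spatial contact, normalised by chain length. The sketch below uses common but here-assumed conventions (8 A cutoff, minimum sequence separation of 2); the random-walk coordinates merely stand in for a real CA trace.

      import numpy as np

      rng = np.random.default_rng(0)
      ca = np.cumsum(rng.normal(scale=2.0, size=(60, 3)), axis=0)  # stand-in CA trace

      def relative_contact_order(coords, cutoff=8.0, min_sep=2):
          n = len(coords)
          seps = []
          for i in range(n):
              for j in range(i + min_sep, n):
                  if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                      seps.append(j - i)   # sequence separation of a contact
          return sum(seps) / (n * len(seps)) if seps else 0.0

      print("relative contact order:", relative_contact_order(ca))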

  2. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  3. Novel lavendamycin analogues as antitumor agents: synthesis, in vitro cytotoxicity, structure-metabolism, and computational molecular modeling studies with NAD(P)H:quinone oxidoreductase 1.

    Science.gov (United States)

    Hassani, Mary; Cai, Wen; Holley, David C; Lineswala, Jayana P; Maharjan, Babu R; Ebrahimian, G Reza; Seradj, Hassan; Stocksdale, Mark G; Mohammadi, Farahnaz; Marvin, Christopher C; Gerdes, John M; Beall, Howard D; Behforouz, Mohammad

    2005-12-01

    Novel lavendamycin analogues with various substituents were synthesized and evaluated as potential NAD(P)H:quinone oxidoreductase (NQO1)-directed antitumor agents. Pictet-Spengler condensation of quinoline- or quinoline-5,8-dione aldehydes with tryptamine or tryptophans yielded the lavendamycins. Metabolism studies with recombinant human NQO1 revealed that addition of NH2 and CH2OH groups at the quinolinedione-7-position and indolopyridine-2'-position had the greatest positive impact on substrate specificity. The best and poorest substrates were 37 (2'-CH2OH-7-NH2 derivative) and 31 (2'-CONH2-7-NHCOC3H7-n derivative) with reduction rates of 263 +/- 30 and 0.1 +/- 0.1 micromol/min/mg NQO1, respectively. Cytotoxicity toward human colon adenocarcinoma cells was determined for the lavendamycins. The best substrates for NQO1 were also the most selectively toxic to the NQO1-rich BE-NQ cells compared to NQO1-deficient BE-WT cells, with 37 as the most selective. Molecular docking supported a model in which the best substrates were capable of efficient hydrogen-bonding interactions with key residues of the active site along with hydride ion reception.

  4. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  5. Structural modeling of sandwich structures with lightweight cellular cores

    Science.gov (United States)

    Liu, T.; Deng, Z. C.; Lu, T. J.

    2007-10-01

    An effective single layered finite element (FE) computational model is proposed to predict the structural behavior of lightweight sandwich panels having two dimensional (2D) prismatic or three dimensional (3D) truss cores. Three different types of cellular core topology are considered: pyramidal truss core (3D), Kagome truss core (3D) and corrugated core (2D), representing three kinds of material anisotropy: orthotropic, monoclinic and general anisotropic. A homogenization technique is developed to obtain the homogenized macroscopic stiffness properties of the cellular core. In comparison with the results obtained by using a detailed FE model, the single layered computational model gives acceptable predictions for both the static and dynamic behaviors of orthotropic truss core sandwich panels. However, for non-orthotropic 3D truss cores, the predictions are less accurate. For both static and dynamic behaviors of a 2D corrugated core sandwich panel, the predictions derived by the single layered computational model are generally acceptable when the size of the unit cell varies within a certain range, with the predictions for moderately strong or strong corrugated cores more accurate than those for weak cores.

  7. The Importance of Computational Modeling of Large Pumping Stations

    Directory of Open Access Journals (Sweden)

    S. M. Bozh'eva

    2015-01-01

    Full Text Available The article presents the main design and structural principles of pumping stations and specifies the basic requirements for favorable hydraulic operating conditions of the pumping units. It also describes design cases in which computational modeling is necessary to analyse the operation of a pumping station and ensure its reliability. The process of computational modeling is illustrated with a specific example of a large pumping station with submersible pumps. The object selected for simulation was an underground pumping station 26 m in diameter and 13 m deep, divided into two independent branches and equipped with 8 submersible pumps. The objective of this work was to evaluate the effectiveness of the design solution by CFD methods, to analyze the design of the inlet chamber, and to identify possible difficulties in the operation of the facility. The structure of the pumping station under consideration and the computational models of the physical processes are described in detail. The article gives a detailed formulation of the simulation task and the methods for solving it, presents the initial and boundary conditions, and describes the basic operating modes of the pumping station. The obtained results are presented as flow patterns for each operating mode, with detailed explanations. The data obtained from CFD prove the correctness of the general design solutions of the project. Submersible pump operation at the minimum water level was verified, the absence of vortex formation was confirmed, and measures to improve the operating conditions of the facility were proposed. Stagnant zones requiring a separate cleaning schedule were identified in the inlet chamber, and a measure against floating debris and foam was proposed. This justifies the use of computational modeling (CFD) for verifying and adjusting designs of large pumping stations as a much more precise tool that takes into account...

  8. Computational Models to Synthesize Human Walking

    Institute of Scientific and Technical Information of China (English)

    Lei Ren; David Howard; Laurence Kenney

    2006-01-01

    The synthesis of human walking is of great interest in biomechanics and biomimetic engineering due to its predictive capabilities and potential applications in clinical biomechanics, rehabilitation engineering and biomimetic robotics. In this paper, the various methods that have been used to synthesize human walking are reviewed from an engineering viewpoint. This involves a wide spectrum of approaches, from simple passive walking theories to large-scale computational models integrating the nervous, muscular and skeletal systems. These methods are roughly categorized under four headings: models inspired by the concept of a CPG (Central Pattern Generator), methods based on the principles of control engineering, predictive gait simulation using optimisation, and models inspired by passive walking theory. The shortcomings and advantages of these methods are examined, and future directions are discussed in the context of providing insights into the neural control objectives driving gait and improving the stability of the predicted gaits. Future advancements are likely to be motivated by improved understanding of neural control strategies and the subtle complexities of the musculoskeletal system during human locomotion. It is only a matter of time before predictive gait models become a practical and valuable tool in clinical diagnosis, rehabilitation engineering and robotics.

  9. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify of the formation technique of representation of modeling methodology at computer science lessons. The necessity of studying computer modeling is that the current trends of strengthening of general education and worldview functions of computer science define the necessity of additional research of the…

  10. Distributed Prognostics based on Structural Model Decomposition

    Science.gov (United States)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.

  11. Clathrate Structure Determination by Combining Crystal Structure Prediction with Computational and Experimental 129Xe NMR Spectroscopy.

    Science.gov (United States)

    Selent, Marcin; Nyman, Jonas; Roukala, Juho; Ilczyszyn, Marek; Oilunkaniemi, Raija; Bygrave, Peter J; Laitinen, Risto; Jokisaari, Jukka; Day, Graeme M; Lantto, Perttu

    2017-01-23

    An approach is presented for the structure determination of clathrates using NMR spectroscopy of enclathrated xenon to select from a set of predicted crystal structures. Crystal structure prediction methods have been used to generate an ensemble of putative structures of o- and m-fluorophenol, whose previously unknown clathrate structures have been studied by 129Xe NMR spectroscopy. The high sensitivity of the 129Xe chemical shift tensor to the chemical environment and shape of the crystalline cavity makes it ideal as a probe for porous materials. The experimental powder NMR spectra can be used to directly confirm or reject hypothetical crystal structures generated by computational prediction, whose chemical shift tensors have been simulated using density functional theory. For each fluorophenol isomer one predicted crystal structure was found, whose measured and computed chemical shift tensors agree within experimental and computational error margins, and these are thus proposed as the true fluorophenol xenon clathrate structures.

  12. Advances in computational dynamics of particles, materials and structures a unified approach

    CERN Document Server

    Har, Jason

    2012-01-01

    Computational methods for the modeling and simulation of the dynamic response and behavior of particles, materials and structural systems have had a profound influence on science, engineering and technology. Complex science and engineering applications dealing with complicated structural geometries and materials that would be very difficult to treat using analytical methods have been successfully simulated using computational tools. With the incorporation of quantum, molecular and biological mechanics into new models, these methods are poised to play an even bigger role in the future. Ad...

  13. Information and computer-aided system for structural materials

    Energy Technology Data Exchange (ETDEWEB)

    Nekrashevitch, Yu.G.; Nizametdinov, Sh.U.; Polkovnikov, A.V.; Rumjantzev, V.P.; Surina, O.N. (Engineering Physics Inst., Moscow (Russia)); Kalinin, G.M.; Sidorenkov, A.V.; Strebkov, Yu.S. (Research and Development Inst. of Power Engineering, Moscow (Russia))

    1992-09-01

    An information and computer-aided system for structural materials data has been developed to provide data for the fusion and fission reactor system design. It is designed for designers, industrial engineers, and material science specialists and provides a friendly interface in an interactive mode. The database for structural materials contains the master files: chemical composition, physical, mechanical, corrosion, and technological properties, and regulatory and technical documentation. The system is implemented on a PC/AT running the PS/2 operating system. (orig.).

  14. Computing a new family of shape descriptors for protein structures

    DEFF Research Database (Denmark)

    Røgen, Peter; Sinclair, Robert

    2003-01-01

    The large-scale 3D structure of a protein can be represented by the polygonal curve through the carbon α atoms of the protein backbone. We introduce an algorithm for computing the average number of times that a given configuration of crossings on such polygonal curves is seen, the average being taken over all directions in space. Hereby, we introduce a new family of global geometric measures of protein structures, which we compare with the so-called generalized Gauss integrals.
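
    The quantity described, an average over all viewing directions of the crossings seen in the projected backbone, can also be estimated by Monte Carlo: project the polygonal curve along random directions and count pairwise segment crossings in 2D. The sketch below illustrates that idea; it is not the authors' algorithm, which evaluates the corresponding Gauss integrals directly.

      import numpy as np

      rng = np.random.default_rng(42)
      curve = np.cumsum(rng.normal(size=(40, 3)), axis=0)   # stand-in for a CA trace

      def segments_cross(p1, p2, q1, q2):
          def orient(a, b, c):
              v1, v2 = b - a, c - a
              return np.sign(v1[0] * v2[1] - v1[1] * v2[0])  # 2D cross-product sign
          return (orient(p1, p2, q1) != orient(p1, p2, q2) and
                  orient(q1, q2, p1) != orient(q1, q2, p2))

      def average_crossings(pts, n_dirs=200):
          total = 0
          for _ in range(n_dirs):
              d = rng.normal(size=3); d /= np.linalg.norm(d)        # random direction
              basis = np.linalg.svd(np.outer(d, d) - np.eye(3))[0][:, :2]  # plane perp. to d
              proj = pts @ basis                                    # 2D projection
              segs = list(zip(proj[:-1], proj[1:]))
              total += sum(segments_cross(*segs[i], *segs[j])
                           for i in range(len(segs))
                           for j in range(i + 2, len(segs)))        # skip adjacent segments
          return total / n_dirs

      print("estimated average crossing number:", average_crossings(curve))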

  15. A New Perspective for the Calibration of Computational Predictor Models.

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).

  16. Computation material science of structural-phase transformation in casting aluminium alloys

    Science.gov (United States)

    Golod, V. M.; Dobosh, L. Yu

    2017-04-01

    Successive stages of computer simulation of the formation of the casting microstructure under non-equilibrium crystallization of multicomponent aluminum alloys are presented. On the basis of computational thermodynamics and of heat transfer during solidification of macroscale shaped castings, the boundary conditions of local heat exchange are specified for mesoscale modeling of the non-equilibrium formation of the solid phase and of the component redistribution between phases during coalescence of secondary dendrite branches. A computer analysis of structural-phase transitions is given, based on the principle of the additive physico-chemical effect of the alloy components in the process of diffusional-capillary morphological evolution of the dendrite structure and of the local dendrite heterogeneity, whose stochastic nature and extent are revealed by metallographic study and by Monte Carlo modeling. The integrated computational materials science tools are focused on, and implemented for, analysis of the multiple-factor system of casting processes and prediction of the casting microstructure.
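
    The non-equilibrium component redistribution mentioned here is classically approximated by the Scheil-Gulliver relation C_s = k C_0 (1 - f_s)^(k-1), which assumes no back-diffusion in the solid and complete mixing in the liquid. A brief sketch for a binary Al-Cu melt follows; k and C_0 are textbook-style values, not the paper's data.

      import numpy as np

      k, C0 = 0.17, 4.0                       # partition coefficient, initial wt% Cu in Al
      fs = np.linspace(0.0, 0.99, 5)          # solid fraction
      Cs = k * C0 * (1.0 - fs) ** (k - 1.0)   # solute content of the solid being formed
      for f, c in zip(fs, Cs):
          print(f"f_s = {f:.2f}  C_s = {c:.2f} wt%")   # solute enrichment as solidification proceeds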

  17. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    Science.gov (United States)

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour

  18. Computational methods to determine the structure of hydrogen storage materials

    Science.gov (United States)

    Mueller, Tim

    2009-03-01

    To understand the mechanisms and thermodynamics of material-based hydrogen storage, it is important to know the structure of the material and the positions of the hydrogen atoms within the material. Because hydrogen can be difficult to resolve experimentally, computational research has proven to be a valuable tool to address these problems. We discuss different computational methods for identifying the structure of hydrogen materials and the positions of hydrogen atoms, and we illustrate the methods with specific examples. Through the use of ab-initio molecular dynamics, we identify molecular hydrogen binding sites in the metal-organic framework commonly known as MOF-5 [1]. We present a method to identify the positions of atomic hydrogen in imide structures using a novel type of effective Hamiltonian. We apply this new method to lithium imide (Li2NH), a potentially important hydrogen storage material, and demonstrate that it predicts a new ground state structure [2]. We also present the results of a recent computational study of the room-temperature structure of lithium imide in which we suggest a new structure that reconciles the differences between previous experimental and theoretical studies. [1] T. Mueller and G. Ceder, Journal of Physical Chemistry B 109, 17974 (2005). [2] T. Mueller and G. Ceder, Physical Review B 74 (2006).

  19. Exploratory Topology Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Holden Deleuran, Anders; Pauly, Mark; Tamke, Martin;

    2016-01-01

    The development of novel form-active hybrid structures (FAHS) is impeded by a lack of modelling tools that allow for exploratory topology modelling of shaped assemblies. We present a flexible and real-time computational design modelling pipeline developed for the exploratory modelling of FAHS that...

  20. Final technical report for DOE Computational Nanoscience Project: Integrated Multiscale Modeling of Molecular Computing Devices

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, P. T.

    2010-02-08

    This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.

  1. Evaluation of Marine Corps Manpower Computer Simulation Model

    Science.gov (United States)

    2016-12-01

    Evaluation of Marine Corps Manpower Computer Simulation Model, Master's thesis by Eric S. Anderson, December 2016 (thesis advisor: Arnold Buss; second reader: Neil Rowe). ... overall end strength are maintained. To assist their mission, an agent-based computer simulation model was developed in the Java computer language...

  2. Computational Structures Technology for Airframes and Propulsion Systems

    Science.gov (United States)

    Noor, Ahmed K. (Compiler); Housner, Jerrold M. (Compiler); Starnes, James H., Jr. (Compiler); Hopkins, Dale A. (Compiler); Chamis, Christos C. (Compiler)

    1992-01-01

    This conference publication contains the presentations and discussions from the joint University of Virginia (UVA)/NASA Workshops. The presentations included NASA Headquarters perspectives on High Speed Civil Transport (HSCT), goals and objectives of the UVA Center for Computational Structures Technology (CST), NASA and Air Force CST activities, CST activities for airframes and propulsion systems in industry, and CST activities at Sandia National Laboratory.

  3. Computer program simplifies selection of structural steel columns

    Science.gov (United States)

    Vissing, G. S.

    1966-01-01

    Computer program rapidly selects appropriate size steel columns and base plates for construction of multistory structures. The program produces a printed record containing the size of a section required at a particular elevation, the stress produced by the loads, and the allowable stresses for that section.
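
    The selection logic such a program embodies reduces to scanning a section table, lightest first, for the first column whose computed stress stays within the allowable value, then printing the record the abstract describes. The sketch below is a deliberately simplified stand-in: the section properties are illustrative, and a real check would also account for slenderness and buckling.

      # (name, cross-sectional area in^2, allowable stress ksi) -- illustrative values
      sections = [
          ("W8x31",  9.12, 21.0),
          ("W10x49", 14.4, 21.0),
          ("W12x65", 19.1, 21.0),
          ("W14x90", 26.5, 21.0),
      ]

      def pick_column(load_kips):
          for name, area, f_allow in sections:   # table is ordered light -> heavy
              stress = load_kips / area
              if stress <= f_allow:
                  return name, stress, f_allow
          raise ValueError("no listed section is adequate")

      name, stress, f_allow = pick_column(load_kips=350.0)
      print(f"{name}: stress {stress:.1f} ksi <= allowable {f_allow:.1f} ksi")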

  4. Computer simulation study of water using a fluctuating charge model

    Indian Academy of Sciences (India)

    M Krishnan; A Verma; S Balasubramanian

    2001-10-01

    Hydrogen bonding in small water clusters is studied through computer simulation methods using a sophisticated, empirical model of interaction developed by Rick et al (S W Rick, S J Stuart and B J Berne 1994 J. Chem. Phys. 101 6141) and others. The model allows for the charges on the interacting sites to fluctuate as a function of time, depending on their local environment. The charge flow is driven by the difference in the electronegativity of the atoms within the water molecule, thus effectively mimicking the effects of polarization of the charge density. The potential model is thus transferable across all phases of water. Using this model, we have obtained the minimum energy structures of water clusters up to a size of ten. The cluster structures agree well with experimental data. In addition, we are able to distinctly identify the hydrogens that form hydrogen bonds based on their charges alone, a feature that is not possible in simulations using fixed charge models. We have also studied the structure of liquid water at ambient conditions using this fluctuating charge model.
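
    At each configuration, the fluctuating-charge machinery reduces to minimising an electrostatic energy E(q) = chi.q + 0.5 q.J.q subject to total-charge conservation, i.e., a small Lagrange-multiplier linear solve. The numbers below are invented for a single water-like molecule; they are not the Rick-Stuart-Berne parameters.

      import numpy as np

      chi = np.array([5.2, 3.1, 3.1])      # site electronegativities: O, H, H (eV, illustrative)
      J = np.array([[14.0, 4.0, 4.0],      # hardness/Coulomb matrix (eV/e^2, illustrative)
                    [ 4.0, 13.0, 3.5],
                    [ 4.0, 3.5, 13.0]])

      n = len(chi)
      A = np.zeros((n + 1, n + 1))
      A[:n, :n] = J
      A[:n, n] = A[n, :n] = 1.0            # charge-conservation constraint row/column
      b = np.concatenate([-chi, [0.0]])    # total molecular charge fixed at zero

      q = np.linalg.solve(A, b)[:n]        # equilibrated site charges
      print("charges (O, H, H):", q)       # O comes out negative, H positive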

  5. Temporal structures in shell models

    DEFF Research Database (Denmark)

    Okkels, F.

    2001-01-01

    The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...

  6. Structuring very large domain models

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    View/Viewpoint approaches like IEEE 1471-2000, or Kruchten's 4+1-view model, are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and on consistency between multiple views, practical questions such as the structuring a...

  7. Computational Granular Dynamics Models and Algorithms

    CERN Document Server

    Pöschel, Thorsten

    2005-01-01

    Computer simulations not only belong to the most important methods for the theoretical investigation of granular materials, but also provide the tools that have enabled much of the expanding research by physicists and engineers. The present book is intended to serve as an introduction to the application of numerical methods to systems of granular particles. Accordingly, emphasis is placed on a general understanding of the subject rather than on the presentation of the latest advances in numerical algorithms. Although a basic knowledge of C++ is needed for the understanding of the numerical methods and algorithms in the book, it avoids usage of elegant but complicated algorithms to remain accessible for those who prefer to use a different programming language. While the book focuses more on models than on the physics of granular material, many applications to real systems are presented.

  8. Modeling groundwater flow on massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.

    1994-12-31

    The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They will also demonstrate the code's scalability.
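
    Of the preconditioners listed, simple diagonal (Jacobi) scaling is the easiest to sketch: precondition the conjugate gradient iteration with the inverse of diag(A). A minimal self-contained example on a small symmetric positive definite test system follows (not the LLNL groundwater matrices).

      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=500):
          M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for it in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  return x, it
              z = M_inv * r                   # apply preconditioner to new residual
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, max_iter

      n = 100                                  # small SPD tridiagonal test system
      A = (np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1)
           + np.diag(np.full(n - 1, -1.0), -1))
      b = np.ones(n)
      x, iters = pcg(A, b)
      print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))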

  9. Patient-Specific Computational Modeling of Human Phonation

    Science.gov (United States)

    Xue, Qian; Zheng, Xudong; University of Maine Team

    2013-11-01

    Phonation is a common biological process resulting from the complex nonlinear coupling between glottal aerodynamics and vocal fold vibrations. In the past, simplified symmetric straight geometric models were commonly employed for experimental and computational studies. The shapes of the larynx lumen and vocal folds are in fact highly three-dimensional, and the complex realistic geometry produces profound impacts on both glottal flow and vocal fold vibrations. To elucidate the effect of geometric complexity on voice production and improve the fundamental understanding of human phonation, a full flow-structure interaction simulation is carried out on a patient-specific larynx model. To the best of our knowledge, this is the first patient-specific flow-structure interaction study of human phonation. The simulation results compare well with established human data. The effects of realistic geometry on glottal flow and vocal fold dynamics are investigated. It is found that both the glottal flow and the vocal fold dynamics differ substantially from those of the previous simplified models. This study also paves the way toward the development of computer models for voice disease diagnosis and surgical planning. The project described was supported by Grant Number R01DC007125 from the National Institute on Deafness and Other Communication Disorders (NIDCD).

  10. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  11. Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics

    Science.gov (United States)

    2013-12-01


  12. Implementing a class of structural change tests: An econometric computing approach

    OpenAIRE

    Zeileis, Achim

    2004-01-01

    The implementation of a recently suggested class of structural change tests, which test for parameter instability in general parametric models, in the R language for statistical computing is described. Focus is given to the question of how the conceptual tools can be translated into computational tools that reflect the properties and flexibility of the underlying econometric methodology while being numerically reliable and easy to use. More precisely, the class of generalized M-fluctuation tests (Z...

  13. Gravothermal Star Clusters - Theory and Computer Modelling

    Science.gov (United States)

    Spurzem, Rainer

    2010-11-01

    In the George Darwin lecture delivered to the Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote that the evolution of stellar systems can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk will show how this physical concept has inspired theoretical modelling of star clusters in the following decades, up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable state of equilibrium of a gravitating star cluster. The trend toward local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was discovered later. Here the state of the art in modelling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (the Fokker-Planck approximation) is reviewed. Recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also been used very successfully to study dense star clusters around massive black holes in galactic nuclei and even (in a few cases) relativistic supermassive dense objects in the centres of galaxies (here again briefly touching one of the many research fields of V.A. Ambartsumian). In the present era of high-speed supercomputing, in which direct N-body simulations of star clusters are being tackled, we will show that such direct modelling supports and confirms the concept of the statistical models based on Fokker-Planck theory, and that theoretical concepts and direct computer simulations are both necessary, supporting each other in making scientific progress in the study of star cluster evolution.
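
    As a toy illustration of the "direct modelling" mentioned at the end of the abstract, the sketch below integrates a small softened N-body system with a kick-drift-kick leapfrog and checks energy conservation. Units, softening, and initial conditions are arbitrary choices; production star-cluster codes add regularization of close encounters, individual time steps, and stellar evolution.

        import numpy as np

        # Minimal direct-summation N-body integrator (leapfrog, Plummer
        # softening) -- a toy version of the direct modelling contrasted with
        # Fokker-Planck models in the record above.
        G, eps = 1.0, 1e-2                # N-body units and softening length

        def accelerations(pos, mass):
            d = pos[None, :, :] - pos[:, None, :]     # d[i, j] = pos_j - pos_i
            r2 = (d ** 2).sum(-1) + eps ** 2
            np.fill_diagonal(r2, np.inf)              # remove self-interaction
            return G * (mass[None, :, None] * d / r2[..., None] ** 1.5).sum(axis=1)

        def total_energy(pos, vel, mass):
            ke = 0.5 * (mass[:, None] * vel ** 2).sum()
            d = pos[None, :, :] - pos[:, None, :]
            r = np.sqrt((d ** 2).sum(-1) + eps ** 2)
            iu = np.triu_indices(mass.size, 1)        # count each pair once
            return ke - G * (np.outer(mass, mass) / r)[iu].sum()

        rng = np.random.default_rng(0)
        n = 256
        mass = np.full(n, 1.0 / n)
        pos = 0.5 * rng.standard_normal((n, 3))
        vel = 0.3 * rng.standard_normal((n, 3))

        dt, e0 = 1e-3, total_energy(pos, vel, mass)
        acc = accelerations(pos, mass)
        for _ in range(1000):                         # kick-drift-kick steps
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc

        print("relative energy drift: %.2e"
              % abs((total_energy(pos, vel, mass) - e0) / e0))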

  14. Development of a Fast Fluid-Structure Coupling Technique for Wind Turbine Computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Shen, Wen Zhong

    2015-01-01

    Fluid-structure interaction simulations are routinely used in the wind energy industry to evaluate the aerodynamic and structural dynamic performance of wind turbines. Most modern aero-elastic codes implement a blade element momentum technique to model the rotor aerodynamics and a modal, multi-body, or finite-element approach to model the turbine structural dynamics. The present paper describes a novel fluid-structure coupling technique which combines a three-dimensional viscous-inviscid solver for horizontal-axis wind-turbine aerodynamics, called MIRAS, and the structural dynamics model used in the aero-elastic code FLEX5. The new code, MIRAS-FLEX, in general shows good agreement with the standard aero-elastic codes FLEX5 and FAST for various test cases. The structural model in MIRAS-FLEX acts to reduce the aerodynamic load computed by MIRAS, particularly near the tip and at high wind speeds.
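
    The coupling pattern described above can be sketched as a partitioned (loosely coupled) time loop: at each step the aerodynamic solver produces loads on the current blade shape, and the structural model advances the deformation under those loads. The sketch below uses a single-degree-of-freedom stand-in for each side; aero_loads and structural_step are hypothetical placeholders, not the MIRAS or FLEX5 interfaces.

        import numpy as np

        # Generic partitioned aero-elastic coupling loop: aerodynamic solve,
        # then structural update, each time step. aero_loads() and
        # structural_step() are hypothetical single-DOF stand-ins, not the
        # MIRAS-FLEX implementation.
        def aero_loads(deflection, wind_speed):
            # Placeholder: load decreases as the blade deflects away from the
            # flow, mimicking the load-reducing structural feedback.
            return 1.0e3 * wind_speed ** 2 * np.exp(-0.5 * deflection)

        def structural_step(x, v, load, dt, m=50.0, k=4.0e4, c=300.0):
            # Single-mode stand-in for the modal/multi-body structural model.
            a = (load - k * x - c * v) / m
            v_new = v + dt * a                  # semi-implicit Euler
            return x + dt * v_new, v_new

        dt, wind_speed = 1e-3, 12.0
        x, v = 0.0, 0.0
        for _ in range(5000):                   # 5 s of simulated time
            f = aero_loads(x, wind_speed)       # 1) aerodynamics on current shape
            x, v = structural_step(x, v, f, dt) # 2) structure under that load
        print("quasi-steady tip deflection: %.3f m" % x)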

  15. Probabilistic modeling of timber structures

    DEFF Research Database (Denmark)

    Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro

    2007-01-01

    The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publication] and comments from participants of the COST E24 action and the members of the JCSS. The paper contains a description of the basic reference properties for timber strength parameters and ultimate limit state equations for timber components. The recommended probabilistic model for these basic properties is presented, and possible refinements are given related to updating of the probabilistic model given new information, modeling of the spatial variation of strength properties, and the duration of load effects.
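
    As a small illustration of how such a probabilistic model is used, the sketch below draws a lognormal timber bending strength (lognormal distributions of this kind are typical for timber strength variables) against a Gumbel-distributed load effect and estimates the failure probability of the limit state g = R - S by Monte Carlo. All distribution parameters are assumed for illustration and are not the values recommended by the JCSS Probabilistic Model Code.

        import numpy as np

        # Monte Carlo evaluation of a simple ultimate limit state g = R - S
        # for a timber member. Distribution types follow common practice for
        # timber reliability; the parameter values are illustrative only.
        rng = np.random.default_rng(42)
        n = 1_000_000

        mean_R, cov_R = 30.0, 0.25        # bending strength (MPa) and its COV
        s = np.sqrt(np.log(1.0 + cov_R**2))          # lognormal params from mean/COV
        mu = np.log(mean_R) - 0.5 * s**2
        R = rng.lognormal(mu, s, n)                  # resistance (MPa)

        S = rng.gumbel(loc=12.0, scale=1.5, size=n)  # load effect (MPa), annual max

        pf = np.mean(R - S < 0.0)                    # estimated failure probability
        print("P_f ~ %.2e (n = %d samples)" % (pf, n))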

  16. Computational Model for Internal Relative Humidity Distributions in Concrete

    Directory of Open Access Journals (Sweden)

    Wondwosen Ali

    2014-01-01

    A computational model is developed for predicting the nonuniform internal relative humidity distribution in concrete. The internal relative humidity distribution is known to have a direct effect on nonuniform drying shrinkage strains. These nonuniform drying shrinkage strains result in the buildup of internal stresses, which may lead to cracking of the concrete. This may be particularly true at early ages, since the concrete is still relatively weak while the difference in internal relative humidity is likely to be high. The results obtained from this model can be used by structural and construction engineers to predict the critical drying shrinkage stresses induced by the differential internal humidity distribution. The model uses combined finite element-finite difference numerical methods: the finite element method is used for spatial discretization, while the finite difference method is used to obtain the transient solution of the model. The numerical formulations are programmed in MATLAB. The numerical results were compared with experimental results found in the literature and demonstrated very good agreement.
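
    A minimal version of the FE-in-space, FD-in-time strategy can be sketched for a linear 1D humidity diffusion equation dH/dt = D d2H/dx2. The sketch below assembles the standard linear-element mass and stiffness matrices and steps them with backward Euler; it assumes a constant diffusivity (the real moisture diffusivity of concrete is strongly humidity-dependent) and uses Python in place of the paper's MATLAB implementation.

        import numpy as np

        # 1D finite-element (space) / finite-difference (time) humidity
        # diffusion with Dirichlet drying surfaces. Constant diffusivity is
        # assumed for brevity; the paper's model is more general.
        D = 1e-10                 # moisture diffusivity (m^2/s), assumed constant
        L, n = 0.10, 50           # 100 mm slab, 50 linear elements
        h = L / n
        dt = 3600.0               # 1 h steps; backward Euler is unconditionally stable

        # Assembled tridiagonal mass (M) and stiffness (K) matrices.
        main_m = np.full(n + 1, 4.0); main_m[[0, -1]] = 2.0
        M = (h / 6.0) * (np.diag(main_m)
                         + np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1))
        main_k = np.full(n + 1, 2.0); main_k[[0, -1]] = 1.0
        K = (1.0 / h) * (np.diag(main_k)
                         - np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1))

        A = M + dt * D * K        # backward Euler: (M + dt*D*K) H_new = M H_old
        A[0, :] = 0.0; A[0, 0] = 1.0       # Dirichlet rows for the two
        A[-1, :] = 0.0; A[-1, -1] = 1.0    # drying surfaces

        H = np.full(n + 1, 1.00)  # saturated interior initially (RH = 100%)
        for _ in range(24 * 90):  # 90 days of hourly steps
            b = M @ H
            b[0] = b[-1] = 0.50   # ambient RH = 50% at both surfaces
            H = np.linalg.solve(A, b)
        print("mid-depth relative humidity after 90 days: %.3f" % H[n // 2])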

  17. Reinforcement Toolbox, a Parametric Reinforcement Modelling Tool for Curved Surface Structures

    NARCIS (Netherlands)

    Lauppe, J.; Rolvink, A.; Coenders, J.L.

    2013-01-01

    This paper presents a computational strategy and parametric modelling toolbox which aim at enhancing the design and production process of reinforcement in freeform curved surface structures. The computational strategy encompasses the necessary steps of raising an architectural curved surface model...

  18. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    Science.gov (United States)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, a factorial design of experiments with missing points). In such cases the size of the data set can be very large, so one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. We also introduce a regularization procedure that takes into account the anisotropy of the data set and avoids degeneracy of the regression model.
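
    The core trick can be illustrated on a full 2-D grid: there the kernel matrix is a Kronecker product K = K1 (x) K2, so (K + s2*I)^{-1} y can be applied through eigendecompositions of the small factor matrices at O(n1^3 + n2^3) cost instead of O((n1*n2)^3). The sketch below shows this for an RBF kernel; it omits the paper's handling of missing grid points and its anisotropy-aware regularization, and all hyperparameters are illustrative.

        import numpy as np

        # GP regression on a 2-D Cartesian product grid via Kronecker
        # structure: K = kron(K1, K2), so the solve needs only small-factor
        # eigendecompositions. Missing points and the paper's regularization
        # are omitted in this sketch.
        def rbf(x, ls):
            d = x[:, None] - x[None, :]
            return np.exp(-0.5 * (d / ls) ** 2)

        x1 = np.linspace(0.0, 1.0, 40)           # factor grids
        x2 = np.linspace(0.0, 1.0, 60)
        K1, K2 = rbf(x1, 0.15), rbf(x2, 0.10)    # illustrative length scales

        rng = np.random.default_rng(0)
        truth = np.sin(2 * np.pi * x1)[:, None] * np.cos(2 * np.pi * x2)[None, :]
        Y = truth + 0.05 * rng.standard_normal(truth.shape)
        s2 = 0.05 ** 2                           # noise variance

        w1, Q1 = np.linalg.eigh(K1)              # O(n1^3) and O(n2^3) only
        w2, Q2 = np.linalg.eigh(K2)

        # For row-major vec: kron(A, B) @ vec(Y) == vec(A @ Y @ B.T), so the
        # big (n1*n2)-sized solve reduces to small products and a divide.
        Z = Q1.T @ Y @ Q2
        Z /= np.multiply.outer(w1, w2) + s2      # eigenvalues of kron(K1,K2)+s2*I
        alpha = Q1 @ Z @ Q2.T                    # (K + s2*I)^{-1} y, reshaped
        mean = K1 @ alpha @ K2.T                 # posterior mean on the grid

        print("RMSE vs noiseless truth: %.4f"
              % np.sqrt(((mean - truth) ** 2).mean()))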

  19. Computational Modeling of Auxin: A Foundation for Plant Engineering

    Directory of Open Access Journals (Sweden)

    Alejandro Morales-Tapia

    2016-12-01

    Since the development of agriculture, humans have relied on the cultivation of plants to satisfy our increasing demand for food, natural products, and other raw materials. As we understand more about plant development, we can better manipulate plants to fulfill our particular needs. Auxins are a class of simple metabolites that coordinate many developmental activities, like growth and the appearance of functional structures, in plants. Computational modeling of auxin has proven to be an excellent tool in elucidating many mechanisms that underlie these developmental events. Due to the complexity of these mechanisms, current modeling efforts are concerned only with single phenomena focused on narrow spatial and developmental contexts, but a general model of plant development could be assembled by integrating the insights from all of them. In this perspective, we summarize the current collection of auxin-driven computational models, focusing on how they could come together into a single model for plant development. A model of this nature would allow researchers to test hypotheses in silico and yield accurate predictions about the behavior of a plant under a given set of physical and biochemical constraints. It would also provide a solid foundation towards the establishment of plant engineering, a proposed discipline intended to enable the design and production of plants that exhibit an arbitrarily defined set of features.

  20. Kinematics and computation of workspace for adaptive geometry structures

    Science.gov (United States)

    Pourki, Forouza; Sosa, Horacio

    1993-09-01

    A new feature in the design of smart structures is the capability of the structure to respond autonomously to undesirable phenomena and environments. This capability is often synonymous with the requirement that the structure assume a set of different geometric shapes, or adapt to a set of kinematic constraints, to accomplish a maneuver. Systems with these characteristics have been referred to as 'shape adaptive' or 'variable geometry' structures. The present paper introduces a basis for the kinematics and workspace studies of statically determinate truss structures which are shape adaptive. The difference between these structures and traditional truss structures, which are built merely to support loads and may be modeled by finite element methods, is the fact that variable geometry structures allow for large (and nonlinear) deformations. On the other hand, unlike structures composed of the well-investigated 'four-bar mechanisms', these structures are statically determinate.
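
    A toy instance of such kinematics: a planar 'variable geometry' node positioned by two length-actuated members anchored at fixed joints, so the node location is a circle-circle intersection and the workspace is mapped by sweeping the actuator ranges. The sketch below is an illustrative stand-in under these assumptions, not the paper's formulation.

        import numpy as np

        # Planar variable-geometry truss node: two members with actuated
        # lengths L1, L2 anchored at fixed joints A, B place the free node at
        # a circle-circle intersection. Sweeping (L1, L2) maps the workspace.
        A = np.array([0.0, 0.0])
        B = np.array([1.0, 0.0])

        def node_position(L1, L2):
            d = np.linalg.norm(B - A)
            if L1 + L2 < d or abs(L1 - L2) > d:
                return None                     # unreachable member lengths
            a = (L1**2 - L2**2 + d**2) / (2.0 * d)  # foot of node along AB
            h2 = L1**2 - a**2
            if h2 < 0.0:
                return None
            u = (B - A) / d
            perp = np.array([-u[1], u[0]])      # unit normal to AB
            return A + a * u + np.sqrt(h2) * perp   # upper intersection branch

        # Sample the actuator ranges to map the reachable workspace.
        pts = np.array([p for L1 in np.linspace(0.6, 1.4, 60)
                          for L2 in np.linspace(0.6, 1.4, 60)
                          if (p := node_position(L1, L2)) is not None])
        print("workspace: %d samples, x in [%.2f, %.2f], y in [%.2f, %.2f]"
              % (len(pts), pts[:, 0].min(), pts[:, 0].max(),
                 pts[:, 1].min(), pts[:, 1].max()))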