Towards distributed multiscale computing for the VPH
Hoekstra, A.G.; Coveney, P.
2010-01-01
Multiscale modeling is fundamental to the Virtual Physiological Human (VPH) initiative. Most detailed three-dimensional multiscale models lead to prohibitive computational demands. As a possible solution we present MAPPER, a computational science infrastructure for Distributed Multiscale Computing o
Development of a Three Dimensional Multiscale Computational Model of the Human Epidermis
Adra, Salem; Sun, Tao; MacNeil, Sheila; Holcombe, Mike; Smallwood, Rod
2010-01-01
Transforming Growth Factor (TGF-β1) is a member of the TGF-β superfamily ligand-receptor network and plays a crucial role in tissue regeneration. The extensive in vitro and in vivo experimental literature describing its actions nevertheless describes an apparent paradox in that during re-epithelialisation it acts as a proliferation inhibitor for keratinocytes. The majority of biological models focus on certain aspects of TGF-β1 behaviour, and no one model provides a comprehensive story of this regulatory factor's action. Accordingly, our aim was to develop a computational model to act as a complementary approach to improve our understanding of TGF-β1. In our previous study, an agent-based model of keratinocyte colony formation in 2D culture was developed. In this study this model was extensively developed into a three-dimensional multiscale model of the human epidermis which comprises three interacting and integrated layers: (1) an agent-based model which captures the biological rules governing the cells in the human epidermis at the cellular level and includes the rules for injury-induced emergent behaviours, (2) a COmplex PAthway SImulator (COPASI) model which simulates the expression and signalling of TGF-β1 at the sub-cellular level and (3) a mechanical layer embodied by a numerical physical solver responsible for resolving the forces exerted between cells at the multi-cellular level. The integrated model was initially validated by using it to grow a piece of virtual epidermis in 3D and comparing the in virtuo simulations of keratinocyte behaviour and of TGF-β1 signalling with the extensive research literature describing this key regulatory protein. This research reinforces the idea that computational modelling can be an effective additional tool to aid our understanding of complex systems. In the accompanying paper the model is used to explore hypotheses of the functions of TGF-β1 at the cellular and subcellular level on different keratinocyte
Xiao, Nan [Department of Bioengineering, Stanford University, Stanford, CA 94305 (United States); Department of Biomedical Engineering, King’s College London, London SE1 7EH (United Kingdom); Humphrey, Jay D. [Department of Biomedical Engineering, Yale University, New Haven, CT 06520 (United States); Figueroa, C. Alberto, E-mail: alberto.figueroa@kcl.ac.uk [Department of Biomedical Engineering, King’s College London, London SE1 7EH (United Kingdom)
2013-07-01
In this article, we present a computational multi-scale model of fully three-dimensional and unsteady hemodynamics within the primary large arteries in the human. Computed tomography image data from two different patients were used to reconstruct a nearly complete network of the major arteries from head to foot. A linearized coupled-momentum method for fluid-structure interaction was used to describe vessel wall deformability, and a multi-domain method for outflow boundary condition specification was used to account for the distal circulation. We demonstrated that physiologically realistic results can be obtained from the model by comparing simulated quantities such as regional blood flow, pressure and flow waveforms, and pulse wave velocities to known values in the literature. We also simulated the impact of age-related arterial stiffening on wave propagation phenomena by progressively increasing the stiffness of the central arteries and found that the predicted effects on pressure amplification and pulse wave velocity are in agreement with findings in the clinical literature. This work demonstrates the feasibility of three-dimensional techniques for simulating hemodynamics in a full-body compliant arterial network.
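The stiffening trend reported in this abstract can be illustrated with the classical Moens-Korteweg relation for pulse wave velocity, PWV = sqrt(E h / (rho D)). This is a generic textbook sketch, not the paper's method, and the parameter values below are illustrative aortic numbers, not taken from the study:

```python
import math

def moens_korteweg_pwv(E, h, rho, D):
    """Pulse wave velocity (m/s) from wall elastic modulus E (Pa), wall
    thickness h (m), blood density rho (kg/m^3), vessel diameter D (m)."""
    return math.sqrt(E * h / (rho * D))

# Illustrative aortic values (assumed for the sketch, not from the paper).
rho = 1060.0   # blood density, kg/m^3
h = 1.5e-3     # wall thickness, m
D = 25e-3      # lumen diameter, m

pwv_young = moens_korteweg_pwv(0.4e6, h, rho, D)
pwv_stiff = moens_korteweg_pwv(1.6e6, h, rho, D)  # 4x stiffer wall
# Quadrupling the wall stiffness doubles the PWV, consistent with the
# abstract's finding that central arterial stiffening raises PWV.
```

Because PWV scales with the square root of stiffness, progressively increasing E in a model produces the monotone PWV increase the authors compare against clinical data.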
Numerical Analysis of Multiscale Computations
Engquist, Björn; Tsai, Yen-Hsi R
2012-01-01
This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.
Zhang, M.; Zhang, Y.; Lichtner, P. C.
2013-12-01
...tailing behavior of the FHM can generally be captured by the HSMs. At all the variances tested, the 8-unit upscaled model is always the most accurate. When the variance is low to moderate, this model can provide accurate to adequate predictions of all the FHM plume moments. In addition, upscaled dispersivities computed with the stochastic versus deterministic techniques yield similar solute predictions, which suggests that in this analysis an ergodic transport regime has emerged. However, when the variance of ln(K) increases to 4.5, the upscaled dispersivities predicted by the stochastic methods result in significant upstream dispersion that is nonphysical. In this case, the HSMs cannot capture the FHM plume moments for the given ln(K) variance. In summary, simulation results suggest that the upscaled dispersivity can be used to accurately capture solute transport in low ln(K) variance systems but fails to describe the solute motion if system variance is high. Reference: Mingkan Zhang and Ye Zhang, Multiscale, Multi-variance Dispersivity Upscaling for A Three-Dimensional Hierarchical Aquifer: Developing and Testing a Parallel Random Walk Method with a Drift Term in the Dispersion Tensor, Water Resources Research, in preparation.
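The random-walk method named in the reference advances each particle by an advective step plus a random dispersive step (plus a drift correction when the dispersion tensor varies in space). A minimal 1D sketch with constant dispersion, where the drift term vanishes, and with illustrative parameters rather than the study's:

```python
import random
import statistics

def random_walk_plume(n_particles, v, D, dt, n_steps, seed=0):
    """1D random-walk particle tracking for advection-dispersion:
    dx = v*dt + sqrt(2*D*dt)*xi with xi ~ N(0,1). For spatially
    variable D, a drift term (dD/dx)*dt would be added; it is zero
    here because D is constant."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles           # all particles released at x = 0
    step = (2.0 * D * dt) ** 0.5
    for _ in range(n_steps):
        xs = [x + v * dt + step * rng.gauss(0.0, 1.0) for x in xs]
    return xs

# Illustrative run: v = 1, D = 0.5, total time t = 1.
xs = random_walk_plume(10000, v=1.0, D=0.5, dt=0.01, n_steps=100)
mean = statistics.fmean(xs)     # first plume moment, expected v*t = 1.0
var = statistics.pvariance(xs)  # second central moment, expected 2*D*t = 1.0
```

The plume moments recovered from the particle ensemble (mean ≈ v t, variance ≈ 2 D t) are exactly the quantities the abstract compares between the FHM and the upscaled HSMs.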
Distributed infrastructure for multiscale computing
Zasada, S.J.; Mamonski, M.; Groen, D.; Borgdorff, J.; Saverchenko, I.; Piontek, T.; Kurowski, K.; Coveney, P.V.; Boukerche, A.; Cahill, V.; El-Saddik, A.; Theodoropoulos, G.; Walshe, R.
2012-01-01
Today scientists and engineers are commonly faced with the challenge of modelling, predicting and controlling multiscale systems which cross scientific disciplines and where several processes acting at different scales coexist and interact. Such multidisciplinary multiscale models, when simulated in
Finite Dimensional Approximations for Continuum Multiscale Problems
Berlyand, Leonid [Pennsylvania State Univ., University Park, PA (United States)
2017-01-24
The completed research project concerns the development of novel computational techniques for modeling nonlinear multiscale physical and biological phenomena. Specifically, it addresses the theoretical development and applications of the homogenization theory (coarse graining) approach to calculation of the effective properties of highly heterogeneous biological and bio-inspired materials with many spatial scales and nonlinear behavior. This theory studies properties of strongly heterogeneous media in problems arising in materials science, geoscience, biology, etc. Modeling of such media raises fundamental mathematical questions, primarily in partial differential equations (PDEs) and calculus of variations, the subject of the PI's research. The focus of the completed research was on mathematical models of biological and bio-inspired materials with the common theme of multiscale analysis and coarse-grain computational techniques. Biological and bio-inspired materials offer the unique ability to create environmentally clean functional materials used for energy conversion and storage. These materials are intrinsically complex, with hierarchical organization occurring on many nested length and time scales. The potential to rationally design and tailor the properties of these materials for broad energy applications has been hampered by the lack of computational techniques able to bridge from the molecular to the macroscopic scale. The project addressed the challenge of computational treatment of such complex materials by developing a synergistic approach that combines innovative multiscale modeling/analysis techniques with high performance computing.
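The simplest instance of the homogenization (coarse-graining) approach this abstract describes is a 1D layered medium: the effective coefficient of equal-thickness layers in series is the harmonic mean of the layer coefficients, not the arithmetic mean. A minimal sketch with illustrative layer values:

```python
def harmonic_mean(ks):
    """Effective coefficient of a 1D medium of equal-thickness layers
    in series, the classical homogenization result: the harmonic mean
    of the layer coefficients."""
    return len(ks) / sum(1.0 / k for k in ks)

ks = [1.0, 4.0]                  # two alternating layer conductivities
k_eff = harmonic_mean(ks)        # 2 / (1 + 0.25) = 1.6
k_arith = sum(ks) / len(ks)      # 2.5: naive averaging overestimates
```

The gap between k_eff and k_arith is the whole point of the theory: replacing a heterogeneous medium by a naively averaged one gets the effective property wrong, and homogenization supplies the correct coarse-grained coefficient.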
Many-Task Computing Tools for Multiscale Modeling
Katz, Daniel S.; Ripeanu, Matei; Wilde, Michael
2011-01-01
This paper discusses the use of many-task computing tools for multiscale modeling. It defines multiscale modeling and places different examples of it on a coupling spectrum, discusses the Swift parallel scripting language, describes three multiscale modeling applications that could use Swift, and then talks about how the Swift model is being extended to cover more of the multiscale modeling coupling spectrum.
Multiscale computation from a chemical engineering perspective
Li Jinghai
2014-01-01
This paper mainly discusses multiscale computation from a chemical engineering perspective. From the application designer's perspective, we propose a new approach to investigate and develop both flexible and efficient computer architectures. Based on the requirements of applications within one category, we first induce and extract some inherent computing patterns or core computing kernels from the applications. Some computing models and innovative computing architectures will then be developed for these patterns or kernels, as well as the software mapping techniques. Finally, those applications which can share and utilize those computing patterns or kernels can be executed very efficiently on those novel computing architectures. We think that the proposed approach may not be achievable within the existing technology. However, we believe that it will be available in the near future. Hence, we will describe this approach from the following four aspects: multiscale environment in the world, mesoscale as a key scale, the energy minimization multiscale (EMMS) paradigm, and our perspective.
Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.
2013-05-01
In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms that develop within extracellular polymeric substances, generally located at fluid-solid interfaces such as pore interfaces in a water-saturated porous medium. Biofilms in porous media find several applications, for instance in bioremediation methods, where they allow the dissolution of organic pollutants. Many theoretical studies have addressed the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified theoretical media (stratified media, cubic networks of spheres, ...). Recent experimental advances, however, have provided tomography images of bio-colonized porous media, which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we compute the velocity field of fluids through pores on complex geometries described by a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. Cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on fluid transport properties in porous media [5]. Average permeabilities of the sample are obtained from velocities by using MPI-based high performance computing on up to 1000 processors. Steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks. Acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be
Foundations of distributed multiscale computing: formalization, specification, and analysis
Borgdorff, J.; Falcone, J.-L.; Lorenz, E.; Bona-Casas, C.; Chopard, B.; Hoekstra, A.G.
2013-01-01
Inherently complex problems from many scientific disciplines require a multiscale modeling approach. Yet its practical contents remain unclear and inconsistent. Moreover, multiscale models can be very computationally expensive, and may have potential to be executed on distributed infrastructure. In
Multiscale Computing with the Multiscale Modeling Library and Runtime Environment
Borgdorff, J.; Mamonski, M.; Bosak, B.; Groen, D.; Ben Belgacem, M.; Kurowski, K.; Hoekstra, A.G.
2013-01-01
We introduce a software tool to simulate multiscale models: the Multiscale Coupling Library and Environment 2 (MUSCLE 2). MUSCLE 2 is a component-based modeling tool inspired by the multiscale modeling and simulation framework, with an easy-to-use API which supports Java, C++, C, and Fortran. We pre
Multi-scale Adaptive Computational Ghost Imaging
Sun, Shuai; Liu, Wei-Tao; Lin, Hui-Zu; Zhang, Er-Feng; Liu, Ji-Ying; Li, Quan; Chen, Ping-Xing
2016-11-01
In some cases of imaging, a wide spatial range and high spatial resolution are both required, which demands high-performance detection devices and huge resource consumption for data processing. We propose and demonstrate a multi-scale adaptive imaging method based on the idea of computational ghost imaging, which first obtains a rough outline of the whole scene over a wide range, then identifies the parts of interest and achieves high-resolution details of those parts, by controlling the field of view and the transverse coherence width of the pseudo-thermal field illuminating the scene with a spatial light modulator. Compared to typical ghost imaging, the resource consumption can be dramatically reduced using our scheme.
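The reconstruction step underlying computational ghost imaging correlates the programmed illumination patterns with single-pixel "bucket" measurements: G(k) = <I_k B> − <I_k><B>. A minimal 1D sketch, with an assumed toy object and uniform-random patterns rather than the authors' SLM-controlled pseudo-thermal field:

```python
import random

def ghost_image(obj, n_patterns, seed=0):
    """Computational ghost imaging sketch: the scene is never resolved
    directly. Each random illumination pattern I yields one bucket
    value B = sum(I * obj); the image is recovered pixel by pixel from
    the covariance of the pattern with the bucket signal."""
    rng = random.Random(seed)
    n = len(obj)
    patterns, buckets = [], []
    for _ in range(n_patterns):
        I = [rng.random() for _ in range(n)]
        patterns.append(I)
        buckets.append(sum(i * o for i, o in zip(I, obj)))
    mean_B = sum(buckets) / n_patterns
    img = []
    for k in range(n):
        mean_Ik = sum(p[k] for p in patterns) / n_patterns
        corr = sum(p[k] * b for p, b in zip(patterns, buckets)) / n_patterns
        img.append(corr - mean_Ik * mean_B)   # covariance estimate
    return img

obj = [0, 0, 1, 1, 0, 1, 0, 0]   # hypothetical 1D scene: 1 = transmissive
img = ghost_image(obj, 20000)
# The covariance is high only at the transmissive pixels.
```

The multi-scale adaptive idea of the abstract then amounts to running this reconstruction first with coarse, wide-field patterns and again with fine patterns restricted to the regions of interest.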
Multiscale modeling and computation of optically manipulated nano devices
Bao, Gang; Liu, Di; Luo, Songting
2016-07-01
We present a multiscale modeling and computational scheme for optical-mechanical responses of nanostructures. The multi-physical nature of the problem is a result of the interaction between the electromagnetic (EM) field, the molecular motion, and the electronic excitation. To balance accuracy and complexity, we adopt the semi-classical approach that the EM field is described classically by the Maxwell equations, and the charged particles follow the Schrödinger equations quantum mechanically. To overcome the numerical challenge of solving the high dimensional multi-component many-body Schrödinger equations, we further simplify the model with the Ehrenfest molecular dynamics to determine the motion of the nuclei, and use the Time-Dependent Current Density Functional Theory (TD-CDFT) to calculate the excitation of the electrons. This leads to a system of coupled equations that computes the electromagnetic field, the nuclear positions, and the electronic current and charge densities simultaneously. In the regime of linear responses, the resonant frequencies initiating the out-of-equilibrium optical-mechanical responses can be formulated as an eigenvalue problem. A self-consistent multiscale method is designed to deal with the well separated space scales. The isomerization of azobenzene is presented as a numerical example.
Multiscale methods for computational RNA enzymology
Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.
2016-01-01
RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472
Computer-Aided Multiscale Modelling for Chemical Process Engineering
Morales Rodriguez, Ricardo; Gani, Rafiqul
2007-01-01
Chemical processes are generally modeled through monoscale approaches, which, while not adequate, satisfy a useful role in product-process design. In this case, use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework for model generation, analysis, solution and implementation is necessary for the development and application of the desired model-based approach for product-centric process design/analysis. This goal is achieved through the combination of a system for model development (ModDev) and a modelling tool (MoT) for model translation, analysis and solution. The integration of ModDev, MoT and ICAS or any other external software or process simulator (using COM-Objects) permits the generation of different models and/or process configurations for purposes of simulation, design and analysis. Consequently, it is possible ...
One dimensional and multiscale models for blood flow circulation
Lamponi, Daniele
2004-01-01
The aim of this work is to provide mathematically sound and computationally effective tools for the numerical simulation of the interaction between fluid and structures as occurring, for instance, in the simulation of the human cardiovascular system. This problem is global, in the sense that local changes can modify the solution far away. From the point of view of computing and modelling this calls for the use of multiscale methods, where simplified models are used to treat the global problem...
Multi-scale analysis of lung computed tomography images
Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C
2007-01-01
A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.
Computational technology of multiscale modeling the gas flows in microchannels
Podryga, V. O.
2016-11-01
The work is devoted to modeling gas mixture flows in engineering microchannels under conditions where the computational domain spans many scales. A computational technology using the multiscale approach, combining macro- and microscopic models, is presented. At the macrolevel the nature of the flow and the external influence on it are considered; as the model, the system of quasigasdynamic equations is selected. At the microlevel the gasdynamic parameters are corrected and the boundary conditions are determined; as the numerical model, Newton's equations and the molecular dynamics method are selected. Different algorithm types used for the implementation of multiscale modeling are considered. The results of model problems for separate stages are given.
Cummings, P. T.
2010-02-08
This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.
Computer Laboratory for Multi-scale Simulations of Novel Nanomaterials
2014-09-15
...on sulfonated polystyrene containing block-copolymers, we developed a hierarchical multiscale methodology for computational studies of the membrane... hydrated polyelectrolytes. Three types of hydrated polyelectrolytes were considered: Nafion, sulfonated polystyrene (sPS) that forms the hydrophilic
Multiscale modeling of complex materials phenomenological, theoretical and computational aspects
Trovalusci, Patrizia
2014-01-01
The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies for modern complex multiphase materials subjected to various types of mechanical and thermal loadings and environmental effects are formulated. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.
Waveform relaxation for the computational homogenization of multiscale magnetoquasistatic problems
Niyonzima, I.; Geuzaine, C.; Schöps, S.
2016-12-01
This paper proposes the application of the waveform relaxation method to the homogenization of multiscale magnetoquasistatic problems. In the monolithic heterogeneous multiscale method, the nonlinear macroscale problem is solved using the Newton-Raphson scheme. The resolution of many mesoscale problems per Gauss point makes it possible to compute the homogenized constitutive law and its derivative by finite differences. In the proposed approach, the macroscale problem and the mesoscale problems are weakly coupled and solved separately using the finite element method on time intervals for several waveform relaxation iterations. The exchange of information between both problems is still carried out using the heterogeneous multiscale method. However, the partial derivatives can now be evaluated exactly by solving only one mesoscale problem per Gauss point.
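The waveform relaxation idea, solving weakly coupled subproblems separately over a whole time interval and iterating the exchange of waveforms, can be illustrated on a toy pair of scalar ODEs standing in for the macro- and mesoscale problems (forward Euler replaces the finite element solves; everything here is an assumed toy, not the paper's formulation):

```python
def waveform_relaxation(T=1.0, n=200, iters=8):
    """Gauss-Jacobi waveform relaxation for the coupled system
    x' = -2x + y, y' = -2y + x with x(0) = 1, y(0) = 0: each equation
    is integrated over the whole interval [0, T] using the other
    unknown's waveform from the previous sweep, then the waveforms are
    exchanged and the sweep repeats."""
    dt = T / n
    x = [1.0] * (n + 1)   # initial waveform guesses (constant in time)
    y = [0.0] * (n + 1)
    for _ in range(iters):
        xn, yn = [1.0], [0.0]
        for k in range(n):
            # each subsystem sees only the OTHER variable's old waveform
            xn.append(xn[-1] + dt * (-2.0 * xn[-1] + y[k]))
            yn.append(yn[-1] + dt * (-2.0 * yn[-1] + x[k]))
        x, y = xn, yn
    return x, y

x, y = waveform_relaxation()
# Exact solution: x(t) = (e^-t + e^-3t)/2, y(t) = (e^-t - e^-3t)/2.
```

On a bounded interval the iteration error contracts superlinearly (like T^k / k!), so a handful of sweeps suffices, which is why exchanging whole waveforms between the separately solved problems is viable.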
Multiscale computer modeling in biomechanics and biomedical engineering
2013-01-01
This book reviews the state-of-the-art in multiscale computer modeling, in terms of both accomplishments and challenges. The information in the book is particularly useful for biomedical engineers, medical physicists and researchers in systems biology, mathematical biology, micro-biomechanics and biomaterials who are interested in how to bridge between traditional biomedical engineering work at the organ and tissue scales, and the newer arenas of cellular and molecular bioengineering.
A principled approach to distributed multiscale computing, from formalization to execution
Borgdorff, J.; Falcone, J.-L.; Lorenz, E.; Chopard, B.; Hoekstra, A.G.
2011-01-01
In several disciplines, a multiscale approach is being used to model complex natural processes, yet a principled background to multiscale modeling is not clear. Additionally, some multiscale models require distributed resources to be computed in an acceptable timeframe, while no standard framework
Three-Dimensional Multiscale MHD Model of Cometary Plasma Environments
Gombosi, Tamas I.; DeZeeuw, Darren L.; Haberli, Roman M.; Powell, Kenneth G.
1996-01-01
First results of a three-dimensional multiscale MHD model of the interaction of an expanding cometary atmosphere with the magnetized solar wind are presented. The model starts with a supersonic and super-Alfvenic solar wind far upstream of the comet (25 Gm upstream of the nucleus) with arbitrary interplanetary magnetic field orientation. The solar wind is continuously mass loaded with cometary ions originating from a 10-km size nucleus. The effects of photoionization, electron impact ionization, recombination, and ion-neutral frictional drag are taken into account in the model. The governing equations are solved on an adaptively refined unstructured Cartesian grid using our new multiscale upwind scalar conservation laws-type numerical technique (MUSCL). We have named this the multiscale adaptive upwind scheme for MHD (MAUS-MHD). The combination of the adaptive refinement with the MUSCL-scheme allows the entire cometary atmosphere to be modeled, while still resolving both the shock and the diamagnetic cavity of the comet. The main findings are the following: (1) Mass loading decelerates the solar wind flow upstream of the weak cometary shock wave (M ≈ 2, M_A ≈ 2), which forms at a subsolar standoff distance of about 0.35 Gm. (2) A cometary plasma cavity is formed at around 3 × 10^3 km from the nucleus. Inside this cavity the plasma expands outward due to the frictional interaction between ions and neutrals. On the nightside this plasma cavity considerably narrows and a relatively fast and dense cometary plasma beam is ejected into the tail. (3) Inside the plasma cavity a teardrop-shaped inner shock is formed, which is terminated by a Mach disk on the nightside. Only the region inside the inner shock is the 'true' diamagnetic cavity. (4) The model predicts four distinct current systems in the inner coma: the density peak current, the cavity boundary current, the inner shock current, and finally the cross-tail current
Multiscale imaging and computational modeling of blood flow in the tumor vasculature.
Kim, Eugene; Stamatelos, Spyros; Cebulla, Jana; Bhujwalla, Zaver M; Popel, Aleksander S; Pathak, Arvind P
2012-11-01
The evolution in our understanding of tumor angiogenesis has been the result of pioneering imaging and computational modeling studies spanning the endothelial cell, microvasculature and tissue levels. Many of these primary data on the tumor vasculature are in the form of images from pre-clinical tumor models that provide a wealth of qualitative and quantitative information in many dimensions and across different spatial scales. However, until recently, the visualization of changes in the tumor vasculature across spatial scales remained a challenge due to a lack of techniques for integrating micro- and macroscopic imaging data. Furthermore, the paucity of three-dimensional (3-D) tumor vascular data in conjunction with the challenges in obtaining such data from patients presents a serious hurdle for the development and validation of predictive, multiscale computational models of tumor angiogenesis. In this review, we discuss the development of multiscale models of tumor angiogenesis, new imaging techniques capable of reproducing the 3-D tumor vascular architecture with high fidelity, and the emergence of "image-based models" of tumor blood flow and molecular transport. Collectively, these developments are helping us gain a fundamental understanding of the cellular and molecular regulation of tumor angiogenesis that will benefit the development of new cancer therapies. Eventually, we expect this exciting integration of multiscale imaging and mathematical modeling to have widespread application beyond the tumor vasculature to other diseases involving a pathological vasculature, such as stroke and spinal cord injury.
Multiscale Computer Simulation of Failure in Aerogels
Good, Brian S.
2008-01-01
Aerogels have been of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While such gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. We have previously performed computer simulations of aerogel thermal conductivity and tensile and compressive failure, with results that are in qualitative, and sometimes quantitative, agreement with experiment. However, recent experiments in our laboratory suggest that gels having similar densities may exhibit substantially different properties. In this work, we extend our original diffusion limited cluster aggregation (DLCA) model for gel structure to incorporate additional variation in DLCA simulation parameters, with the aim of producing DLCA clusters of similar densities that nevertheless have different fractal dimension and secondary particle coordination. We perform particle statics simulations of gel strain on these clusters, and consider the effects of differing DLCA simulation conditions, and the resultant differences in fractal dimension and coordination, on gel strain properties.
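The DLCA structure model named in this abstract lets every cluster diffuse and merges clusters on contact. A minimal lattice sketch, with an assumed toy lattice size and particle count rather than the paper's simulation parameters:

```python
import random

def dlca(n_particles=40, size=30, seed=1):
    """Minimal diffusion-limited cluster aggregation (DLCA) sketch on a
    periodic 2D lattice: every cluster performs a random walk, and any
    clusters that come into nearest-neighbour contact merge, until a
    single aggregate remains."""
    rng = random.Random(seed)
    cells = rng.sample([(i, j) for i in range(size) for j in range(size)],
                       n_particles)
    clusters = [{c} for c in cells]     # each particle starts alone
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(clusters) > 1:
        ci = rng.randrange(len(clusters))
        dx, dy = rng.choice(moves)
        moved = {((x + dx) % size, (y + dy) % size) for x, y in clusters[ci]}
        others = clusters[:ci] + clusters[ci + 1:]
        if any(moved & o for o in others):
            continue   # reject moves that would overlap another cluster
        # merge with any cluster now in nearest-neighbour contact
        halo = {((x + mx) % size, (y + my) % size)
                for x, y in moved for mx, my in moves}
        touching = [o for o in others if o & halo]
        merged = moved.union(*touching) if touching else moved
        clusters = [o for o in others if o not in touching] + [merged]
    return clusters[0]

aggregate = dlca()   # one ramified aggregate of all 40 particles
```

Varying the sticking rule, concentration, or mobility law in such a model is what produces clusters of similar density but different fractal dimension and coordination, the structural variation the abstract then subjects to particle-statics strain simulations.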
Integrated Multiscale Modeling of Molecular Computing Devices
Jerzy Bernholc
2011-02-03
Nanoscience has been one of the major research focuses of the U.S. and much of the world for the past decade, in part because of its promise to revolutionize many fields, including materials, medicine, and electronics. At the heart of this promise is the fact that nanostructured materials can behave radically differently than their macroscopic counterparts (e.g., bulk gold is such an inert metal that it has found applications in such diverse fields as jewelry, biomedical implants and dentistry, whereas gold nanoparticles are highly reactive and are thus useful as nanocatalysts) and have properties that are tunable due to a strong dependence on the size and surface area of the nanostructure. Thus, nanoscience offers a remarkable opportunity to develop new functional systems built around nanostructured materials with unusual and tunable properties and functionality. The transition from nanoscience to nanotechnology becomes possible when nanostructured systems can be made reproducibly by processes that can be implemented on a large scale. The microelectronics industry is one example of an industry that has evolved into the realm of nanotechnology, since the exponential reduction in feature size in computer chips has resulted in feature sizes now under 50 nm (45 nm in production, 32 nm demonstrated; feature size has been going down by a factor of approximately 1/√2 every 18 months as chip density has doubled every 18 months according to Moore's law). Silicon-based microelectronics relies on etching features into a single-crystal silicon substrate by photolithography. As the feature size of silicon-based microelectronics continues to decrease, the continuation of Moore's law to below 20 nm feature sizes is being questioned, due to limitations in both the physics of the transistors (leading to unacceptable power dissipation) and doubts about the scalability of top-down photolithography-based manufacturing to such small sizes. There is no doubt that
Integrated Multiscale Modeling of Molecular Computing Devices
Gregory Beylkin
2012-03-23
Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.
Multiscale Computation. Needs and Opportunities for BER Science
Scheibe, Timothy D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Smith, Jeremy C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]
2015-01-01
The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: (1) identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and (2) identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.
Multiscale analysis and computation for flows in heterogeneous media
Efendiev, Yalchin [Texas A & M Univ., College Station, TX (United States); Hou, T. Y. [California Inst. of Technology (CalTech), Pasadena, CA (United States); Durlofsky, L. J. [Stanford Univ., CA (United States); Tchelepi, H. [Stanford Univ., CA (United States)
2016-08-04
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.
Computer aided polymer design using multi-scale modelling
K. C. Satyanarayana
2010-09-01
The ability to predict the key physical and chemical properties of polymeric materials from their repeat-unit structure and chain-length architecture prior to synthesis is of great value for the design of polymer-based chemical products with new functionalities and improved performance. Computer-aided molecular design (CAMD) methods can expedite the design process by establishing input-output relations between the type and number of functional groups in a polymer repeat unit and the desired macroscopic properties. A multi-scale model-based approach has been developed and tested for the design of polymers with desired properties; it combines a CAMD technique, based on group-contribution plus (GC+) models for predicting polymer repeat-unit properties, with atomistic simulations that provide first-principles arrangements of the repeat units and predictions of the physical properties of the chosen candidate polymer structures. A case study is used to highlight the main features of this multi-scale model-based approach for the design of a polymer-based product.
Application of computer-aided multi-scale modelling framework - Aerosol case study
Heitzig, Martina; Gregson, Christopher; Sin, Gürkan;
2011-01-01
A computer-aided modelling tool for efficient multi-scale modelling has been developed and is applied to solve a multi-scale modelling problem related to design and evaluation of fragrance aerosol products. The developed modelling scenario spans three length scales and describes how droplets...
Mazzi, Giacomo; Samaey, Giovanni
2012-01-01
In this paper, we present a study on how to develop an efficient multiscale simulation strategy for the dynamics of chemically active systems on low-dimensional supports. Such reactions are encountered in a wide variety of situations, ranging from heterogeneous catalysis to electrochemical or (membrane) biological processes, to cite a few. We analyzed in this context different techniques within the framework of an important multiscale approach known as the equation free method (EFM), which "bridges the multiscale gap" by building microscopic configurations using macroscopic-level information only. We hereby considered two simple reactive processes on a one-dimensional lattice, the simplicity of which allowed for an in-depth understanding of the parameters controlling the efficiency of this approach. We demonstrate in particular that it is not enough to base the EFM on the time evolution of the average concentrations of particles on the lattice, but that the time evolution of clusters of particles has to be in...
Romeny, Bart M Haar
2008-01-01
Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective
Higher-Dimensional Signal Processing via Multiscale Geometric Analysis
2010-02-10
1.1 Review of motivation: Over the past twenty years, multiscale methods like the discrete wavelet transform (DWT) have revolutionized signal processing ... sparsity and structure boost the performance of wavelet-domain statistical models and enable simple yet powerful algorithms for estimation/denoising ... for many state-of-the-art wavelet-domain processing algorithms for applications including compression [23,24], denoising [25,26], and segmentation [27
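The wavelet-domain denoising idea alluded to above, shrinking small detail coefficients toward zero and transforming back, can be sketched with a one-level Haar transform. This is a toy illustration of the general technique, not the report's algorithms:

```python
def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Wavelet-domain denoising: shrink detail coefficients toward zero."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def haar_idwt(approx, detail):
    """Invert one level of the Haar transform."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

# A nearly flat signal with one slightly noisy sample: thresholding kills
# the small detail coefficient and smooths the pair.
x = [1.0, 1.0, 1.0, 1.2]
a, d = haar_dwt(x)
y = haar_idwt(a, soft_threshold(d, 0.2))
print(y)  # the 1.0/1.2 pair is smoothed to 1.1/1.1
```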
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behaviour is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Modelling Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
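The Amdahl's-law evaluation mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of the law itself, not the authors' code; the 90%/16-worker figures below are hypothetical:

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Amdahl's law: ideal speed-up when a fraction p of the work
    parallelizes perfectly across n workers and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# If 90% of a multiscale run is fine-scale work spread over 16 workers,
# the remaining 10% of serial macroscale work caps the speed-up:
print(amdahl_speedup(0.9, 16))  # → 6.4
```

The cap is the point of the abstract's analysis: even with many fine-scale workers, the sequential macroscopic sub-model bounds the achievable speed-up.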
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
Zabaras, Nicolas J. [Cornell Univ., Ithaca, NY (United States)
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas, including physical and biological processes, from climate modeling to systems biology.
A computational library for multiscale modeling of material failure
Talebi, Hossein; Silani, Mohammad; Bordas, Stéphane P. A.; Kerfriden, Pierre; Rabczuk, Timon
2014-05-01
We present an open-source software framework called PERMIX for multiscale modeling and simulation of fracture in solids. The framework is an object-oriented open-source effort written primarily in the Fortran 2003 standard, with Fortran/C++ interfaces to a number of other libraries such as LAMMPS, ABAQUS, LS-DYNA and GMSH. Fracture on the continuum level is modeled by the extended finite element method (XFEM). Using several novel or state-of-the-art methods, the software handles both semi-concurrent and concurrent multiscale methods for fracture, coupling two continuum domains or atomistic domains to continuum domains, respectively. The efficiency of our open-source software is shown through several simulations, including 3D crack modeling in clay nanocomposites, a semi-concurrent FE-FE coupling, a 3D Arlequin multiscale example, and an MD-XFEM coupling for dynamic crack propagation.
Multiscale Pressure-Balanced Structures in Three-Dimensional Magnetohydrodynamic Turbulence
Yang, Liping; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Zhang, Lei; Marsch, Eckart; Wang, Linghua; Wang, Xin; Feng, Xueshang
2017-02-01
Observations of solar wind turbulence indicate the existence of multiscale pressure-balanced structures (PBSs) in the solar wind. In this work, we conduct a numerical simulation to investigate multiscale PBSs and in particular their formation in compressive magnetohydrodynamic turbulence. By the use of the higher-order Godunov code Athena, a driven compressible turbulence with an imposed uniform guide field is simulated. The simulation results show that both the magnetic pressure and the thermal pressure exhibit a turbulent spectrum with a Kolmogorov-like power law, and that in many regions of the simulation domain they are anticorrelated. The computed wavelet cross-coherence spectra of the magnetic pressure and the thermal pressure, as well as their space series, indicate the existence of multiscale PBSs, with the small PBSs being embedded in the large ones. These multiscale PBSs are likely to be related to the highly oblique-propagating slow-mode waves, as the traced multiscale PBS is found to be traveling in a certain direction at a speed consistent with that predicted theoretically for a slow-mode wave propagating in the same direction.
Discriminating image textures with the multiscale two-dimensional complexity-entropy causality plane
Zunino, Luciano
2016-01-01
The aim of this paper is to further explore the usefulness of the two-dimensional complexity-entropy causality plane as a texture image descriptor. A multiscale generalization is introduced in order to distinguish between different roughness features of images at small and large spatial scales. Numerically generated two-dimensional structures are initially considered for illustrating basic concepts in a controlled framework. Then, more realistic situations are studied. Obtained results allow us to confirm that intrinsic spatial correlations of images are successfully unveiled by implementing this multiscale symbolic information-theory approach. Consequently, we conclude that the proposed representation space is a versatile and practical tool for identifying, characterizing and discriminating image textures.
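The entropy axis of the complexity-entropy causality plane is built from ordinal (Bandt-Pompe style) patterns. A minimal two-dimensional sketch using 2x2 pixel blocks, with ties broken by position, might look like the following; this is our illustrative reading of the approach, not the paper's implementation:

```python
from math import log

def ordinal_pattern(block):
    """Rank pattern of a flattened 2x2 block (ties broken by position)."""
    return tuple(sorted(range(len(block)), key=lambda i: block[i]))

def permutation_entropy_2d(img):
    """Normalized Shannon entropy of 2x2 ordinal patterns: one axis of
    the two-dimensional complexity-entropy causality plane."""
    counts = {}
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            block = (img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1])
            pat = ordinal_pattern(block)
            counts[pat] = counts.get(pat, 0) + 1
    total = sum(counts.values())
    h = -sum(n / total * log(n / total) for n in counts.values())
    h = max(h, 0.0)  # guard against -0.0 when one pattern dominates
    return h / log(24)  # 4! = 24 possible patterns of 4 pixels

# A smooth gradient produces a single pattern (entropy 0); rough textures
# spread over many patterns (entropy toward 1).
gradient = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(permutation_entropy_2d(gradient))  # → 0.0
```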
2nd International Conference on Multiscale Computational Methods for Solids and Fluids
2016-01-01
This volume contains the best papers presented at the 2nd ECCOMAS International Conference on Multiscale Computations for Solids and Fluids, held June 10-12, 2015. Topics dealt with include multiscale strategy for efficient development of scientific software for large-scale computations, coupled probability-nonlinear-mechanics problems and solution methods, and modern mathematical and computational setting for multi-phase flows and fluid-structure interaction. The papers consist of contributions by six experts who taught short courses prior to the conference, along with several selected articles from other participants dealing with complementary issues, covering both solid mechanics and applied mathematics.
Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J
2014-01-01
The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work.
Bender, Jason D.
Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for
Adaptive multi-scale parameterization for one-dimensional flow in unsaturated porous media
Hayek, Mohamed; Lehmann, François; Ackerer, Philippe
2008-01-01
In the analysis of the unsaturated zone, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the porous media. Adaptive multi-scale parameterization consists in solving the problem through successive approximations by refining the parameters at the next finer scale over the whole domain and stopping the process when the refinement no longer induces a significant decrease of the objective function. In this context, the refinement indicators algorithm provides an adaptive parameterization technique that opens the degrees of freedom in an iterative way, driven at first order by the model, to locate the discontinuities of the sought parameters. We present a refinement indicators algorithm for adaptive multi-scale parameterization that is applicable to the estimation of multi-dimensional hydraulic parameters in unsaturated soil water flow. Numerical examples are presented which show the efficiency of the algorithm in cases of noisy and missing data.
Jagiella, Nick; Rickert, Dennis; Theis, Fabian J; Hasenauer, Jan
2017-02-22
Mechanistic understanding of multi-scale biological processes, such as cell proliferation in a changing biological tissue, is readily facilitated by computational models. While tools exist to construct and simulate multi-scale models, the statistical inference of the unknown model parameters remains an open problem. Here, we present and benchmark a parallel approximate Bayesian computation sequential Monte Carlo (pABC SMC) algorithm, tailored for high-performance computing clusters. pABC SMC is fully automated and returns reliable parameter estimates and confidence intervals. By running the pABC SMC algorithm for ∼10^6 hr, we parameterize multi-scale models that accurately describe quantitative growth curves and histological data obtained in vivo from individual tumor spheroid growth in media droplets. The models capture the hybrid deterministic-stochastic behaviors of 10^5-10^6 cells growing in a 3D dynamically changing nutrient environment. The pABC SMC algorithm reliably converges to a consistent set of parameters. Our study demonstrates a proof of principle for robust, data-driven modeling of multi-scale biological systems and the feasibility of multi-scale model parameterization through statistical inference.
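The rejection step at the core of approximate Bayesian computation can be sketched in a few lines. This is a toy serial version for intuition only: the paper's pABC SMC adds sequential tolerance schedules, importance weighting, and HPC parallelism, and every name and number below is illustrative:

```python
import random

def abc_rejection(observed, simulate, prior_sample, tol, n_samples):
    """Minimal ABC rejection sampler: keep a parameter draw whenever the
    simulated summary lands within `tol` of the observed one."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample()
        if abs(simulate(theta) - observed) <= tol:
            accepted.append(theta)
    return accepted

random.seed(0)
# Toy model: the observable is theta plus Gaussian noise; uniform prior.
observed = 4.2
post = abc_rejection(
    observed,
    simulate=lambda t: t + random.gauss(0, 0.5),
    prior_sample=lambda: random.uniform(0, 10),
    tol=0.3,
    n_samples=200,
)
print(sum(post) / len(post))  # posterior mean concentrates near 4.2
```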
Multiscale Gentlest Ascent Dynamics
Zhou, Xiang
2016-01-01
The gentlest ascent dynamics (E and Zhou, Nonlinearity 24:1831, 2011) locally converges to a nearby saddle point with a one-dimensional unstable manifold. Here we present a multiscale gentlest ascent dynamics for stochastic slow-fast systems in order to compute saddle points associated with the effective dynamics of the slow variables. Such saddle points, as candidates for transition states, are important in non-equilibrium transitions for the coarse-grained slow variables; they are also helpful for exploring the free energy surface. We derive the expressions of the gentlest ascent dynamics for the averaged system and propose multiscale numerical methods to solve it efficiently in the search for saddle points. Examples of stochastic ordinary and partial differential equations are presented to illustrate the performance of this multiscale gentlest ascent dynamics.
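For reference, the single-scale gentlest ascent dynamics of E and Zhou for a gradient system with potential $V$ takes the following form (the abstract's multiscale version applies this construction to the averaged slow dynamics):

```latex
\dot{x} = -\nabla V(x) + 2\,\frac{\langle \nabla V(x),\, v\rangle}{\langle v,\, v\rangle}\, v,
\qquad
\dot{v} = -\nabla^2 V(x)\, v + \frac{\langle v,\, \nabla^2 V(x)\, v\rangle}{\langle v,\, v\rangle}\, v,
```

where the auxiliary vector $v$ tracks the eigenvector of the Hessian with the smallest eigenvalue, so the flow ascends along one unstable direction while descending along all others, converging to an index-1 saddle point.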
Whey-Fone Tsai
2011-12-01
Taiwan frequently experiences natural disasters such as typhoons, floods, landslides, debris flows, and earthquakes. Therefore, the instant acquisition of high-definition images and topographic or spatial data of affected areas as disasters occur is crucial for disaster response teams and making emergency aid decisions. The National Applied Research Laboratories has implemented the project “development of near real-time, high-resolution, global earth observation 3D platform for applications to environmental monitoring and disaster mitigation.” This developmental project integrates earth observation techniques, data warehousing, high-performance visualization displays, grids, and disaster prevention technology to establish a near real-time, high-resolution, three-dimensional (3D) disaster prevention earth observation application platform for Taiwan. The main functions of this platform include (1) integration of observation information, such as Formosat-2 satellite remote sensing, aerial photography, and 3D photography of disaster sites, to provide multidimensional information on the conditions at the affected sites; (2) disaster prevention application technologies, such as a large-sized high-resolution 3D projection system, medium-sized active stereo projection systems, and small-sized personal computers with multiscale 3D display systems; (3) a 3D geographical information network platform that integrates data warehousing and cloud services, complies with the Open Geospatial Consortium (OGC) international standard for image data exchange and release processes, and includes image overlaying and added-value analysis of disasters; and (4) near real-time and automated simulation of image processing procedures, which accelerates orthophoto processing once raw data are received from satellites and provides appropriate images for disaster prevention decision-making within 3 to 6 h. This study uses the 88 Flood event of Typhoon Morakot in 2009, Typhoon Fanapi
Final Report for Integrated Multiscale Modeling of Molecular Computing Devices
Glotzer, Sharon C.
2013-08-28
In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton and Oakridge National Laboratory we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-01-01
Future multiscale and multiphysics models must use the power of high-performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes (e.g., the message passing interface (MPI)) with multithreading (e.g., OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI, in contrast to our results using complex physiological models. Thus, with regard to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase, as will HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster-than-real-time multiscale cardiac simulations on these systems using hybrid programming models.
Computational Dimensionalities of Global Supercomputing
Richard S. Segall
2013-12-01
This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics [77], [79], [80], and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by
A Dimensionality Reduction Framework for Detection of Multiscale Structure in Heterogeneous Networks
Hua-Wei Shen; Xue-Qi Cheng; Yuan-Zhuo Wang; Yi-xin Chen
2012-01-01
Graph clustering has been widely applied in exploring regularities emerging in relational data. Recently, the rapid development of network theory has correlated graph clustering with the detection of community structure, a common and important topological characteristic of networks. Most existing methods investigate the community structure at a single topological scale. However, as shown by empirical studies, the community structure of real-world networks often exhibits multiple topological descriptions, corresponding to clustering at different resolutions. Furthermore, the detection of multiscale community structure is heavily affected by the heterogeneous distribution of node degree. It is very challenging to detect multiscale community structure in heterogeneous networks. In this paper, we propose a novel, unified framework for detecting community structure from the perspective of dimensionality reduction. Based on the framework, we first prove that the well-known Laplacian matrix for network partition and the widely used modularity matrix for community detection are two kinds of covariance matrices used in dimensionality reduction. We then propose a novel method to detect communities at multiple topological scales within our framework. We further show that existing algorithms fail to deal with heterogeneous node degrees. We develop a novel method to handle the heterogeneity of networks by introducing a rescaling transformation into the covariance matrices in our framework. Extensive tests on real-world and artificial networks demonstrate that the proposed correlation matrices significantly outperform Laplacian and modularity matrices in terms of their ability to identify multiscale community structure in heterogeneous networks.
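The two matrices the paper reinterprets as covariance matrices can be built directly from an adjacency matrix. A minimal sketch using the standard definitions (not the paper's code; the example graph is ours):

```python
def laplacian_and_modularity(adj):
    """Build the graph Laplacian L = D - A and the modularity matrix
    B = A - k k^T / (2m) for an undirected adjacency matrix (list of lists),
    where k is the degree vector and m the number of edges."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    two_m = sum(deg)  # 2m: each undirected edge counted twice
    L = [[(deg[i] if i == j else 0) - adj[i][j] for j in range(n)] for i in range(n)]
    B = [[adj[i][j] - deg[i] * deg[j] / two_m for j in range(n)] for i in range(n)]
    return L, B

# A triangle plus one pendant node:
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
L, B = laplacian_and_modularity(A)
# Both matrices have zero row sums -- the shared structural property that
# lets them be treated as covariance-like matrices.
print(all(abs(sum(row)) < 1e-12 for row in L + B))  # → True
```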
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-10-01
Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.
Xi F. XU
2015-01-01
The Green-function-based multiscale stochastic finite element method (MSFEM) has been formulated based on the stochastic variational principle. In this study a fast computing procedure based on the MSFEM is developed to solve random field geotechnical problems with a typical coefficient of variation less than 1. A unique fast computing advantage of the procedure is that computation is performed only at the locations of interest, thereby saving considerable computation. The numerical example on soil settlement shows that the procedure achieves significant computing efficiency compared with the Monte Carlo method.
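For context, the Monte Carlo baseline that the MSFEM is benchmarked against can be sketched on a toy one-dimensional settlement problem; all numbers (load, layer thickness, modulus statistics) below are hypothetical, chosen only to keep the coefficient of variation below 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D settlement s = q*H/E with a lognormal soil modulus E.
q, H = 100.0, 10.0             # applied stress (kPa) and layer thickness (m)
mean_E, cov_E = 2.0e4, 0.3     # mean modulus (kPa), coefficient of variation < 1

# Lognormal parameters reproducing the target mean and COV.
sigma2 = np.log(1.0 + cov_E**2)
mu = np.log(mean_E) - 0.5 * sigma2

E = rng.lognormal(mu, np.sqrt(sigma2), size=200_000)
s = q * H / E                   # Monte Carlo samples of settlement

mc_mean = s.mean()
exact_mean = q * H * np.exp(-mu + 0.5 * sigma2)   # closed-form E[1/E]
```

The sampling cost (here 200,000 model evaluations for a trivial model) is what makes Monte Carlo prohibitive when each sample is a finite element solve, which is the efficiency gap the MSFEM procedure targets.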
Cummings, Peter [Vanderbilt University
2009-11-15
The document is the final report of the DOE Computational Nanoscience Project DE-FG02-03ER46096: Integrated Multiscale Modeling of Molecular Computing Devices. It included references to 62 publications that were supported by the grant.
Mertens, J C E; Williams, J J; Chawla, Nikhilesh
2014-01-01
The design and construction of a high resolution modular x-ray computed tomography (XCT) system is described. The approach for meeting a specified set of performance goals tailored toward experimental versatility is highlighted. The instrument is unique in its detector and x-ray source configuration, both of which enable elevated optimization of spatial and temporal resolution. The process for component selection is provided. The selected components are specified, the custom component design discussed, and the integration of both into a fully functional XCT instrument is outlined. The novelty of this design is a new lab-scale detector and imaging optimization through x-ray source and detector modularity.
Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.
Dao, Tien Tuan
2017-06-01
Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying the biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of this present work was to develop a new fully integrated computational workflow for simulating bone metabolic processes at multi-scale levels. Organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. Tissue-level model uses finite element method to estimate the tissue deformation and mechanical loading under body loading conditions. Finally, cell-level model includes bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process located on the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using the literature-based data at each anatomical level. Simulation outcomes fall within the literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow leading to a better understanding of the musculoskeletal system function from multiple length scales as well as to provide new informative data for clinical decision support and industrial applications.
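The fully integrated workflow can be pictured as three chained sub-models, each consuming the previous scale's output. The sketch below is schematic; the stage functions and constants are placeholders, not the paper's actual organ, tissue, and cell models:

```python
# Schematic organ -> tissue -> cell chain; all constants are illustrative.
def organ_model(state):
    # Multi-body dynamics stand-in: kinematics -> boundary/loading conditions.
    return {"muscle_force": 100.0 * state["activation"]}

def tissue_model(state):
    # Finite-element stand-in: loading -> tissue strain.
    return {"strain": state["muscle_force"] / 5.0e4}

def cell_model(state):
    # Agent-based stand-in: strain above a threshold triggers remodeling.
    return {"remodeling_signal": 1.0 if state["strain"] > 1.0e-3 else 0.0}

state = {"activation": 0.8}
for stage in (organ_model, tissue_model, cell_model):
    state = stage(state)
```

The point of the chaining is what the abstract calls full integration: no sub-model runs in isolation, so cell-level remodeling is always driven by tissue loading that is itself derived from body kinematics.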
Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale
Kevrekidis, Ioannis [Princeton Univ., NJ (United States)
2017-03-22
The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media – both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).
FENG Yongping; CUI Junzhi
2004-01-01
In this paper, the multi-scale computational method for a structure of composite materials with a small periodic configuration under the coupled thermoelasticity condition is presented. The two-scale asymptotic (TSA) expression of the displacement and the increment of temperature for composite materials with a small periodic configuration under the condition of thermoelasticity are briefly shown at first, then the multi-scale finite element algorithms based on TSA are discussed. Finally the numerical results evaluated by the multi-scale computational method are shown. It demonstrates that the basic configuration and the increment of temperature strongly influence the local strains and local stresses inside a basic cell.
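In one dimension the two-scale asymptotic machinery reduces to a closed form: the homogenized coefficient of a periodic cell is the harmonic mean of the local coefficient. The sketch below illustrates that classical result (it is not the paper's coupled thermoelastic formulation):

```python
import numpy as np

def effective_coefficient_1d(a_vals):
    """Homogenized 1-D coefficient for a periodic cell of equal-width layers:
    the harmonic mean of the local coefficient (classical TSA result)."""
    a = np.asarray(a_vals, dtype=float)
    return len(a) / np.sum(1.0 / a)

# Two-phase layered cell: one stiff layer (a=10) and one compliant layer (a=1).
a_star = effective_coefficient_1d([10.0, 1.0])
```

The harmonic mean (about 1.82 here) sits well below the arithmetic mean (5.5): the compliant phase dominates the effective response, which is precisely the kind of sensitivity to the basic cell configuration the abstract reports for local strains and stresses.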
Glacial landscape evolution by subglacial quarrying: A multiscale computational approach
Ugelvig, Sofie V.; Egholm, David L.; Iverson, Neal R.
2016-11-01
Quarrying of bedrock is a primary agent of subglacial erosion. Although the mechanical theory behind the process has been studied for decades, it has proven difficult to formulate the governing principles so that large-scale landscape evolution models can be used to integrate erosion over time. The existing mechanical theory thus stands largely untested in its ability to explain postglacial topography. In this study we relate the physics of quarrying to long-term landscape evolution with a multiscale approach that connects meter-scale cavities to kilometer-scale glacial landscapes. By averaging the quarrying rate across many small-scale bedrock steps, we quantify how regional trends in basal sliding speed, effective pressure, and bed slope affect the rate of erosion. A sensitivity test indicates that a power law formulated in terms of these three variables provides an acceptable basis for quantifying regional-scale rates of quarrying. Our results highlight the strong influence of effective pressure, which intensifies quarrying by increasing the volume of the bed that is stressed by the ice and thereby the probability of rock failure. The resulting pressure dependency points to subglacial hydrology as a primary factor for influencing rates of quarrying and hence for shaping the bedrock topography under warm-based glaciers. When applied in a landscape evolution model, the erosion law for quarrying produces recognizable large-scale glacial landforms: U-shaped valleys, hanging valleys, and overdeepenings. The landforms produced are very similar to those predicted by more standard sliding-based erosion laws, but overall quarrying is more focused in valleys, and less effective at higher elevations.
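A minimal form of the regional-scale erosion law can be written as a power law in sliding speed, effective pressure, and bed slope. The prefactor and exponents below are placeholders chosen for illustration (the calibrated values come from the authors' sensitivity test, not from this sketch):

```python
import numpy as np

def quarrying_rate(u_s, N, slope, K=1e-10, a=1.0, b=1.0, c=1.0):
    """Schematic power-law quarrying rate E = K * u_s**a * N**b * S**c.
    K and the exponents are placeholders, not the calibrated values."""
    return K * u_s**a * N**b * np.maximum(slope, 0.0)**c

# Toy transect: sliding speed peaks at the valley axis; uniform N and slope.
x = np.linspace(-1.0, 1.0, 201)
u_s = 50.0 * np.exp(-(x / 0.4)**2)   # m/yr, fastest mid-valley
E = quarrying_rate(u_s, N=1.0e6, slope=0.05)
```

With erosion increasing in sliding speed, the law focuses incision at the valley axis, the qualitative behaviour the abstract reports for quarrying relative to higher elevations.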
Knap, J; Spear, C E; Borodin, O; Leiter, K W
2015-10-30
We describe the development of a large-scale high-throughput application for discovery in materials science. Our point of departure is a computational framework for distributed multi-scale computation. We augment the original framework with a specialized module whose role is to route evaluation requests needed by the high-throughput application to a collection of available computational resources. We evaluate the feasibility and performance of the resulting high-throughput computational framework by carrying out a high-throughput study of battery solvents. Our results indicate that distributed multi-scale computing, by virtue of its adaptive nature, is particularly well-suited for building high-throughput applications.
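The routing module's job, stripped to its essentials, is a work queue feeding whichever computational resource is free. The following is a minimal, hypothetical stand-in using threads as "resources" and a squaring function as the evaluation request; it is not the framework's actual module:

```python
import queue
import threading

# Toy request router: queued evaluation requests are pulled by whichever
# worker ("computational resource") is free; the square is a stand-in task.
tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker(wid):
    while True:
        try:
            job = tasks.get_nowait()
        except queue.Empty:
            return
        value = job * job                 # stand-in for a multi-scale evaluation
        with lock:
            results.append((wid, job, value))

for j in range(20):
    tasks.put(j)
workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

Pull-based dispatch of this kind is what makes the scheme adaptive: fast resources naturally absorb more requests, which is the property the abstract credits for high-throughput suitability.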
Advances in Three-Dimensional Multiscale Geometrical Analysis
宋传鸣; 赵长伟; 刘丹; 王相海
2015-01-01
Three-dimensional (3D) multiscale geometrical analysis is the technological foundation for the processing of digital visual media, such as images, videos, and geometrical models. Its objective is to efficiently represent the point, curve, and surface singularities present in those visual media. This study first reviews the research advances in two-dimensional (2D) multiscale geometrical analysis. It then elaborates on the development of 3D multiscale geometrical analysis for video, following the evolution of various transforms in their capability to capture singularities and their nonlinear approximation efficiency. State-of-the-art 3D multiscale geometrical analysis is classified into three categories: multiscale geometrical analysis extended directly from 2D basis functions, multiscale geometrical analysis based on 3D basis functions, and multiscale geometrical analysis based on spatiotemporal non-local correlation. The basic ideas of typical transforms are thoroughly discussed, along with their nonlinear approximation efficiency, computational complexity, advantages, and disadvantages. This study also presents a general review of the development of 3D multiscale geometrical analysis for geometrical models. On this basis, the near-term development trend of 3D multiscale geometrical analysis is forecast.
Computed tomography for dimensional metrology
Kruth, J.P.; Bartscher, M.; Carmignato, S.;
2011-01-01
…metrology, putting emphasis on issues such as accuracy, traceability to the unit of length (the meter), and measurement uncertainty. It provides a state of the art (anno 2011) and application examples, showing the aptitude of CT metrology to: (i) check internal dimensions that cannot be measured using traditional coordinate measuring machines and (ii) combine dimensional quality control with material quality control in one single quality inspection run.
Multi-scale coupling strategy for fully two-dimensional and depth-averaged models for granular flows
Pudasaini, Shiva P.; Domnik, Birte; Miller, Stephen A.
2013-04-01
We developed a full two-dimensional Coulomb-viscoplastic model and applied it for inclined channel flows of granular materials from initiation to their deposition. The model includes the basic features and observed phenomena in dense granular flows like the exhibition of a yield strength and a non-zero slip velocity. A pressure-dependent yield strength is proposed to account for the frictional nature of granular materials. The yield strength can be related to the internal friction angle of the material and plays an important role, for example, in deposition processes. The interaction of the flow with the solid boundary is modelled by a pressure and rate-dependent Coulomb-viscoplastic sliding law. We developed an innovative multi-scale strategy to couple the full two-dimensional, non depth-averaged model (N-DAM) with a one-dimensional, depth-averaged model (DAM). The coupled model reduces computational complexity dramatically by using DAM only in regions with smooth changes of flow variables. The numerics uses N-DAM in regions where depth-averaging becomes inaccurate, for instance, in the initiation and deposition regions, and (particularly) when the flow hits an obstacle or a defense structure. In these regions, momentum transfer must be, and is, considered in all directions. We observe very high coupling performance, and show that the numerical results deviate only slightly from results of the much more cumbersome full two-dimensional model. This shows that the coupled model, which retains all the basic physics of the flow, is an attractive alternative to an expensive, full two-dimensional simulations. We compare simulation results with different experimental data for shock waves appearing in rapid granular flows down inclined channels and impacting a wall. The model predicts the evolution of the strong shock wave and the impact force on a rigid wall for different inclination angles and sliding surfaces. It is demonstrated that the internal friction angle plays an
Patel, Deepak K; Waas, Anthony M
2016-07-13
This paper is concerned with predicting the progressive damage and failure of multi-layered hybrid textile composites subjected to uniaxial tensile loading, using a novel two-scale computational mechanics framework. These composites include three-dimensional woven textile composites (3DWTCs) with glass, carbon and Kevlar fibre tows. Progressive damage and failure of 3DWTCs at different length scales are captured in the present model by using a macroscale finite-element (FE) analysis at the representative unit cell (RUC) level, while a closed-form micromechanics analysis is implemented simultaneously at the subscale level using material properties of the constituents (fibre and matrix) as input. The N-layers concentric cylinder (NCYL) model (Zhang and Waas 2014 Acta Mech. 225, 1391-1417; Patel et al. submitted Acta Mech.) to compute local stress, strain and displacement fields in the fibre and matrix is used at the subscale. The 2-CYL fibre-matrix concentric cylinder model is extended to fibre and (N-1) matrix layers, keeping the volume fraction constant, and hence is called the NCYL model, where the matrix damage can be captured locally within each discrete layer of the matrix volume. The influence of matrix microdamage at the subscale causes progressive degradation of fibre tow stiffness and matrix stiffness at the macroscale. The global RUC stiffness matrix remains positive definite until the strain softening responses resulting from different failure modes (such as fibre tow breakage, tow splitting in the transverse direction due to matrix cracking inside the tow, and tensile failure of the surrounding matrix outside the fibre tows) are initiated. At this stage, the macroscopic post-peak softening response is modelled using the mesh-objective smeared crack approach (Rots et al. 1985 HERON 30, 1-48; Heinrich and Waas 2012 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, 23-26 April 2012 AIAA 2012-1537). Manufacturing
Ofiţeru, I.D.; Bellucci, M.; Picioreanu, C.; Lavric, V.; Curtis, T.P.
2013-01-01
A simple “first generation” multi-scale computational model of the formation of activated sludge flocs at micro-scale and reactor performance at macro-scale is proposed. The model couples mass balances for substrates and biomass at reactor scale with an individual-based approach for the floc morphology.
Multiscale analysis of nonlinear systems using computational homology
Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner
2010-05-24
This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields from complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure
Enveloped viruses understood via multiscale simulation: computer-aided vaccine design
Shreif, Z.; Adhangale, P.; Cheluvaraja, S.; Perera, R.; Kuhn, R.; Ortoleva, P.
Enveloped viruses are viewed as an opportunity to understand how highly organized and functional biosystems can emerge from a collection of millions of chaotically moving atoms. They are an intermediate level of complexity between macromolecules and bacteria. They are a natural system for testing theories of self-assembly and structural transitions, and for demonstrating the derivation of principles of microbiology from laws of molecular physics. As some constitute threats to human health, a computer-aided vaccine and drug design strategy that would follow from a quantitative model would be an important contribution. However, current molecular dynamics simulation approaches are not practical for modeling such systems. Our multiscale approach simultaneously accounts for the outer protein net and inner protein/genomic core, and their less structured membranous material and host fluid. It follows from a rigorous multiscale deductive analysis of laws of molecular physics. Two types of order parameters are introduced: (1) those for structures wherein constituent molecules retain long-lived connectivity (they specify the nanoscale structure as a deformation from a reference configuration) and (2) those for which there is no connectivity but organization is maintained on the average (they are field variables such as mass density or measures of preferred orientation). Rigorous multiscale techniques are used to derive equations for the order parameters dynamics. The equations account for thermal-average forces, diffusion coefficients, and effects of random forces. Statistical properties of the atomic-scale fluctuations and the order parameters are co-evolved. By combining rigorous multiscale techniques and modern supercomputing, systems of extreme complexity can be modeled.
Computational multiscale modeling of fluids and solids theory and applications
Steinhauser, Martin Oliver
2017-01-01
The idea of the book is to provide a comprehensive overview of computational physics methods and techniques, that are used for materials modeling on different length and time scales. Each chapter first provides an overview of the basic physical principles which are the basis for the numerical and mathematical modeling on the respective length-scale. The book includes the micro-scale, the meso-scale and the macro-scale, and the chapters follow this classification. The book explains in detail many tricks of the trade of some of the most important methods and techniques that are used to simulate materials on the perspective levels of spatial and temporal resolution. Case studies are included to further illustrate some methods or theoretical considerations. Example applications for all techniques are provided, some of which are from the author’s own contributions to some of the research areas. The second edition has been expanded by new sections in computational models on meso/macroscopic scales for ocean and a...
A hybrid multi-scale computational scheme for advection-diffusion-reaction equation
Karimi, S.; Nakshatrala, K. B.
2016-12-01
Simulation of transport and reaction processes in porous media and subsurface science has become more vital than ever. Over the past few decades, a variety of mathematical models and numerical methodologies for porous media simulations have been developed. As the demand for higher accuracy and validity of the models grows, the issue of disparate temporal and spatial scales becomes more problematic. The variety of reaction processes and the complexity of pore geometry pose a huge computational burden in a real-world or reservoir-scale simulation. Meanwhile, methods based on averaging or upscaling techniques do not provide reliable estimates of pore-scale processes. To overcome this problem, the development of hybrid and multi-scale computational techniques is considered a promising approach. In these methods, pore-scale and continuum-scale models are combined; hence, a more reliable estimate of pore-scale processes is obtained without having to deal with the tremendous computational overhead of pore-scale methods. In this presentation, we propose a computational framework that allows coupling of the lattice Boltzmann method (for pore-scale simulation) and the finite element method (for continuum-scale simulation) for advection-diffusion-reaction equations. To capture events that are disparate in time and length, non-matching grids and time steps are allowed. Apart from the application of this method to benchmark problems, multi-scale simulation of chemical reactions in porous media is also showcased.
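A continuum-scale building block of such a framework is the advection-diffusion-reaction equation itself. The sketch below solves u_t + v u_x = D u_xx - k u on a periodic 1-D grid by operator splitting (upwind advection, central diffusion, exact reaction decay); it is a minimal illustration of the target equation, not the proposed lattice Boltzmann/finite element coupling:

```python
import numpy as np

# Minimal explicit ADR solver on a periodic 1-D grid, via operator splitting.
n, L = 200, 1.0
dx = L / n
v, D, k = 1.0, 1e-3, 0.5
dt = 0.4 * min(dx / v, dx**2 / (2.0 * D))   # respect both stability limits

x = np.linspace(0.0, L, n, endpoint=False)
u = np.exp(-((x - 0.3) / 0.05)**2)          # initial concentration pulse
mass0 = u.sum() * dx

t = 0.0
while t < 0.5:
    u = u - v * dt / dx * (u - np.roll(u, 1))                          # upwind advection (v > 0)
    u = u + D * dt / dx**2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))  # central diffusion
    u = u * np.exp(-k * dt)                                            # reaction, integrated exactly
    t += dt
mass = u.sum() * dx
```

Because the advection and diffusion updates are discretely conservative on a periodic grid, total mass decays exactly as exp(-k t), which gives a simple correctness check for the splitting.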
Multiscale mechanobiology: computational models for integrating molecules to multicellular systems.
Mak, Michael; Kim, Taeyoon; Zaman, Muhammad H; Kamm, Roger D
2015-10-01
Mechanical signals exist throughout the biological landscape. Across all scales, these signals, in the form of force, stiffness, and deformations, are generated and processed, resulting in an active mechanobiological circuit that controls many fundamental aspects of life, from protein unfolding and cytoskeletal remodeling to collective cell motions. The multiple scales and complex feedback involved present a challenge for fully understanding the nature of this circuit, particularly in development and disease in which it has been implicated. Computational models that accurately predict and are based on experimental data enable a means to integrate basic principles and explore fine details of mechanosensing and mechanotransduction in and across all levels of biological systems. Here we review recent advances in these models along with supporting and emerging experimental findings.
Multiscale and multimodality computed tomography for cortical bone analysis
Ostertag, A.; Peyrin, F.; Gouttenoire, P. J.; Laredo, J. D.; DeVernejoul, M. C.; Cohen Solal, M.; Chappard, C.
2016-12-01
In clinical studies, high resolution peripheral quantitative computed tomography (HR-pQCT) is used to separately evaluate cortical bone and trabecular bone with an isotropic voxel size of 82 µm, and typical cortical parameters are cortical density (D.comp), thickness (Ct.Th), and porosity (Ct.Po). In vitro, micro-computed tomography (micro-CT) is used to explore the internal cortical bone micro-structure with isotropic voxels; high resolution synchrotron radiation (SR) micro-CT is considered the ‘gold standard’. In 16 tibias and 8 femurs, HR-pQCT measurements were compared to conventional micro-CT measurements. To test modality effects, conventional micro-CT measurements were compared to SR micro-CT measurements at 7.5 µm. SR micro-CT measurements were also tested at different voxel sizes for the femurs, specifically 7.5 µm versus 2.8 µm. D.comp (r = -0.88) results were consistent with those obtained using conventional micro-CT at the distal tibia. D.comp was highly correlated to Po.V/TV because it considers both the micro-porosity (Haversian systems) and macro-porosity (resorption lacunae) of cortical bone. The complexity of canal organization (including shape, connectivity, and surface) is not fully captured by conventional micro-CT owing to beam hardening and cone beam reconstruction artifacts. With the exception of Po.V/TV measurements, morphological and topological measurements depend on the characteristics of the x-ray beam and, to a lesser extent, on image resolution.
Ilday, Serim; Ilday, F Ömer; Hübner, René; Prosa, Ty J; Martin, Isabelle; Nogay, Gizem; Kabacelik, Ismail; Mics, Zoltan; Bonn, Mischa; Turchinovich, Dmitry; Toffoli, Hande; Toffoli, Daniele; Friedrich, David; Schmidt, Bernd; Heinig, Karl-Heinz; Turan, Rasit
2016-03-09
Multiscale self-assembly is ubiquitous in nature but its deliberate use to synthesize multifunctional three-dimensional materials remains rare, partly due to the notoriously difficult problem of controlling topology from atomic to macroscopic scales to obtain intended material properties. Here, we propose a simple, modular, noncolloidal methodology that is based on exploiting universality in stochastic growth dynamics and driving the growth process under far-from-equilibrium conditions toward a preplanned structure. As proof of principle, we demonstrate a confined-but-connected solid structure, comprising an anisotropic random network of silicon quantum-dots that hierarchically self-assembles from the atomic to the microscopic scales. First, quantum-dots form to subsequently interconnect without inflating their diameters to form a random network, and this network then grows in a preferential direction to form undulated and branching nanowire-like structures. This specific topology simultaneously achieves two scale-dependent features, which were previously thought to be mutually exclusive: good electrical conduction on the microscale and a bandgap tunable over a range of energies on the nanoscale.
Barkaoui, Abdelwahed; Tarek, Merzouki; Hambli, Ridha; Ali, Mkaddem
2014-01-01
The complexity and heterogeneity of bone tissue require a multiscale modelling to understand its mechanical behaviour and its remodelling mechanisms. In this paper, a novel multiscale hierarchical approach including microfibril scale based on hybrid neural network computation and homogenisation equations was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained neural network simulation. Finite element (FE) calculation is performed at nanoscopic levels to provide a database to train an in-house neural network program; (iii) in steps 2 to 10 from fibril to continuum cortical bone tissue, homogenisation equations are used to perform the computation at the higher s...
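The role of the trained network in step (ii) is that of a cheap surrogate mapping microstructure parameters to elastic properties. As a hedged stand-in, the same idea can be shown with a polynomial least-squares fit to a synthetic FE "database" (the functional form and all numbers are invented for illustration, and least squares here plays the role of the neural network):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "FE database": stiffness as a quadratic in mineral volume fraction.
vf = rng.uniform(0.3, 0.6, 200)              # sampled microstructure parameter
E_true = 5.0 + 40.0 * vf + 30.0 * vf**2      # invented ground-truth response (GPa)

# Surrogate fit, standing in for the trained neural network of step (ii).
X = np.column_stack([np.ones_like(vf), vf, vf**2])
coef, *_ = np.linalg.lstsq(X, E_true, rcond=None)
E_pred = X @ coef
```

Once fitted, the surrogate replaces repeated nanoscale FE solves inside the homogenisation loop, which is where the computational saving of the hybrid approach comes from.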
COMPUTATIONAL FLUID DYNAMICS FOR DENSE GAS-SOLID FLUIDIZED BEDS: A MULTI-SCALE MODELING STRATEGY
M. A. van der Hoef; M. van Sint Annaland; J. A. M. Kuipers
2005-01-01
Dense gas-particle flows are encountered in a variety of industrially important processes for large scale production of fuels, fertilizers and base chemicals. The scale-up of these processes is often problematic and is related to the intrinsic complexities of these flows, which are unfortunately not yet fully understood despite significant efforts made in both academic and industrial research laboratories. In dense gas-particle flows both (effective) fluid-particle and (dissipative) particle-particle interactions need to be accounted for because these phenomena to a large extent govern the prevailing flow phenomena, i.e. the formation and evolution of heterogeneous structures. These structures have significant impact on the quality of the gas-solid contact and as a direct consequence thereof strongly affect the performance of the process. Due to the inherent complexity of dense gas-particle flows, we have adopted a multi-scale modeling approach in which both fluid-particle and particle-particle interactions can be properly accounted for. The idea is essentially that fundamental models, taking into account the relevant details of fluid-particle (lattice Boltzmann model) and particle-particle (discrete particle model) interactions, are used to develop closure laws to feed continuum models which can be used to compute the flow structures on a much larger (industrial) scale. Our multi-scale approach (see Fig. 1) involves the lattice Boltzmann model, the discrete particle model, the continuum model based on the kinetic theory of granular flow, and the discrete bubble model. In this paper we give an overview of the multi-scale modeling strategy, accompanied by illustrative computational results for bubble formation. In addition, areas which need substantial further attention will be highlighted.
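A concrete example of the closure laws such continuum models consume is the classical Ergun correlation for the fluid-particle pressure gradient in a dense bed (a textbook closure, shown here for illustration; it is not the closure the authors derive from lattice Boltzmann simulations):

```python
def ergun_pressure_gradient(U, eps, dp, mu, rho):
    """Ergun correlation for the pressure gradient (Pa/m) in a packed bed.
    U: superficial gas velocity (m/s), eps: void fraction, dp: particle
    diameter (m), mu: gas viscosity (Pa*s), rho: gas density (kg/m^3)."""
    viscous = 150.0 * mu * (1.0 - eps)**2 * U / (eps**3 * dp**2)
    inertial = 1.75 * (1.0 - eps) * rho * U**2 / (eps**3 * dp)
    return viscous + inertial

# Illustrative conditions: air through a bed of 0.5 mm particles at 0.5 m/s.
dpdx = ergun_pressure_gradient(U=0.5, eps=0.4, dp=5.0e-4, mu=1.8e-5, rho=1.2)
```

In the multi-scale strategy, correlations of exactly this shape (a viscous plus an inertial term in the voidage and slip velocity) are what the fine-scale lattice Boltzmann simulations are used to refine before being fed to the continuum model.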
Multiscale Analysis for Field-Effect Penetration through Two-Dimensional Materials.
Tian, Tian; Rice, Peter; Santos, Elton J G; Shih, Chih-Jen
2016-08-10
Gate-tunable two-dimensional (2D) materials-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations and the Poisson-Boltzmann equation methods to model penetration of the field effect through graphene in a metal-oxide-graphene-semiconductor (MOGS) QC, including quantifying the degree of "transparency" for graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of graphene quantum capacitance and the semiconductor capacitance, which allows us to predict the ranking for a variety of monolayer 2D materials according to their transparency to an electric displacement field as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carrier is electron. Our findings reveal a general picture of operation modes and design rules for the 2D-materials-based QCs.
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednarcyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining plastic behavior in two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing a crystal plasticity constitutive model that can capture the microstructural scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, GMC is validated stand-alone by analyzing a repeating unit cell (RUC) as a two-phased sample with 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by FEA models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. Large computational savings, at the expense of some accuracy in the components of local tensor field quantities, were obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural scale details of the field quantities.
Hambli, Ridha
2011-01-01
The aim of this paper is to develop a multiscale hierarchical hybrid model based on finite element analysis and neural network computation to link the mesoscopic scale (trabecular network level) and the macroscopic scale (whole bone level) in simulating the bone remodelling process. Because whole bone simulation considering the 3D trabecular level is time consuming, the finite element calculation is performed at the macroscopic level and a trained neural network is employed as a numerical device substituting for the finite element code needed for the mesoscale prediction. The bone mechanical properties are updated at the macroscopic scale depending on the morphological organization at the mesoscopic scale computed by the trained neural network. The digital image-based modeling technique using µ-CT and a voxel finite element mesh is used to capture 2 mm³ representative volume elements at the mesoscale in a femur head. The input data for the artificial neural network are a set of bone material parameters, boundary conditions and the applied str...
Equation-Free Multiscale Computation enabling microscopic simulators to perform system-level tasks
Kevrekidis, Yu G; Hyman, J M; Kevrekidis, P G; Runborg, O; Theodoropoulos, C; Kevrekidis, Ioannis G.; Hyman, James M.; Kevrekidis, Panagiotis G.; Runborg, Olof; Theodoropoulos, Constantinos
2002-01-01
We present and discuss a framework for computer-aided multiscale analysis, which enables models at a "fine" (microscopic/stochastic) level of description to perform modeling tasks at a "coarse" (macroscopic, systems) level. These macroscopic modeling tasks, yielding information over long time and large space scales, are accomplished through appropriately initialized calls to the microscopic simulator for only short times and small spatial domains. Our equation-free (EF) approach, when successful, can bypass the derivation of the macroscopic evolution equations when these equations conceptually exist but are not available in closed form. We discuss how the mathematics-assisted development of a computational superstructure may enable alternative descriptions of the problem physics (e.g. Lattice Boltzmann (LB), kinetic Monte Carlo (KMC) or Molecular Dynamics (MD) microscopic simulators, executed over relatively short time and space scales) to perform systems level tasks (integration over relatively large time an...
Loizos, Kyle; RamRakhyani, Anil Kumar; Anderson, James; Marc, Robert; Lazzi, Gianluca
2016-06-01
This study proposes a methodology for computationally estimating resistive properties of tissue in multi-scale computational models, used for studying the interaction of electromagnetic fields with neural tissue, with applications to both dosimetry and neuroprosthetics. Traditionally, models at bulk tissue- and cellular-level scales are solved independently, linking resulting voltage from existing resistive tissue-scale models as extracellular sources to cellular models. This allows for solving the effects that external electric fields have on cellular activity. There are two major limitations to this approach: first, the resistive properties of the tissue need to be chosen, of which there are contradicting measurements in literature; second, the measurements of resistivity themselves may be inaccurate, leading to the mentioned contradicting results found across different studies. Our proposed methodology allows for constructing computed resistivity profiles using knowledge of only the neural morphology within the multi-scale model, resulting in a practical implementation of the effective medium theory; this bypasses concerns regarding the choice of resistive properties and accuracy of measurement setups. A multi-scale model of retina is constructed with an external electrode to serve as a test bench for analyzing existing and resulting resistivity profiles, and validation is presented through the reconstruction of a published resistivity profile of retina tissue. Results include a computed resistivity profile of retina tissue for use with a retina multi-scale model used to analyze effects of external electric fields on neural activity.
A survey on stochastic multi-scale modeling in biomechanics: computational challenges
Favino, Marco; Pivkin, Igor
2016-01-01
During the last decade, multi-scale models in mechanics, bio-mechanics and life sciences have gained increasing attention. Using multi-scale approaches, effects on different time and length scales, such as the cellular and organ scales, can be coupled and their interaction can be studied. Clearly, this requires the development of new mathematical models and numerical methods for multi-scale problems, in order to provide reliable and efficient tools for the investigation of multi-scale effects. Here, we give an overview on existing numerical approaches for multi-scale simulations in bio-mechanics with particular emphasis on stochastic effects.
Lardeau, Sylvain; Ferrari, Simone; Rossi, Lionel
2008-12-01
Three-dimensional (3D) direct numerical simulations of a flow driven by multiscale electromagnetic forcing are performed in order to reproduce with maximum accuracy the quasi-two-dimensional (2D) flow generated by the same multiscale forcing in the laboratory. The method presented is based on a 3D description of the flow and the electromagnetic forcing. Very good agreements between our simulations and the experiments are found both on velocity and acceleration field, this last comparison being, to our knowledge, done for the first time. Such agreement requires that both experiments and simulations are carefully performed and, more importantly, that the underlying simplifications to model the experiments and the multiscale electromagnetic forcing do not introduce significant errors. The results presented in this paper differ significantly from previous 2D direct numerical simulations in which a classical linear Rayleigh friction modeling term was used to mimic the effect of the wall-normal friction. Indeed, purely 2D simulations are found to underestimate the Reynolds number and, due to the dominance of nonhomogeneous bottom friction, lead to the wrong physical mechanism. For the range of conditions presented in this paper, the Reynolds number, defined by the ratio between acceleration and viscous terms, remains of the order of unity, and the Hartmann number, defined by the ratio between electromagnetic force terms and viscous terms, is about 2. The main conclusion is that 3D simulations are required to model the (3D) electromagnetic forces and the wall-normal shear. Indeed, even if the flow is quasi-2D in terms of energy, a full 3D approach is required to simulate these shallow layer flows driven by multiscale electromagnetic forcing. In the range of forcing intensity investigated in this paper, these multiscale flows remain quasi-2D, with negligible energy in the wall-normal velocity component. It is also shown that the driving terms are the electromagnetic forcing and
Examples of computational approaches for elliptic, possibly multiscale PDEs with random inputs
Le Bris, Claude; Legoll, Frédéric
2017-01-01
We overview a series of recent works addressing numerical simulations of partial differential equations in the presence of some elements of randomness. The specific equations manipulated are linear elliptic, and arise in the context of multiscale problems, but the purpose is more general. On a set of prototypical situations, we investigate two critical issues present in many settings: variance reduction techniques to obtain sufficiently accurate results at a limited computational cost when solving PDEs with random coefficients, and finite element techniques that are sufficiently flexible to carry over to geometries with random fluctuations. Some elements of theoretical analysis and numerical analysis are briefly mentioned. Numerical experiments, although simple, provide convincing evidence of the efficiency of the approaches.
Liang, Fuyou; Oshima, Marie; Huang, Huaxiong; Liu, Hao; Takagi, Shu
2015-10-01
Free outflow boundary conditions have been widely adopted in hemodynamic model studies; however, they intrinsically lack the ability to account for the regulatory mechanisms of systemic hemodynamics and hence carry a risk of producing incorrect results when applied to vascular segments with multiple outlets. In the present study, we developed a multiscale model capable of incorporating global cardiovascular properties into the simulation of blood flows in local vascular segments. The multiscale model was constructed by coupling a three-dimensional (3D) model of local arterial segments with a zero-one-dimensional (0-1-D) model of the cardiovascular system. Numerical validation based on an idealized model demonstrated the ability of the multiscale model to preserve reasonable pressure/flow wave transmission among different models. The multiscale model was further calibrated with clinical data to simulate cerebroarterial hemodynamics in a patient undergoing carotid artery operation. The results showed pronounced hemodynamic changes in the cerebral circulation following the operation. Additional numerical experiments revealed that a stand-alone 3D model with free outflow conditions failed to reproduce the results obtained by the multiscale model. These results demonstrated the potential advantage of multiscale modeling over single-scale modeling in patient-specific hemodynamic studies. Due to the fact that the present study was limited to a single patient, studies on more patients would be required to further confirm the findings.
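A common way to replace a free outflow condition with a lumped downstream model, in the spirit of the 3D/0D coupling described here, is a two-element Windkessel outlet. The sketch below is a generic textbook model, not the authors' 0-1-D system; parameter values in the usage are arbitrary illustrative numbers:

```python
def windkessel_step(p, q_in, r_distal, c_compl, dt):
    """Advance a two-element Windkessel outlet one time step (forward Euler):
        C dP/dt = Q_in - P / R
    p: outlet pressure, q_in: flow handed over by the 3D model,
    r_distal: distal resistance, c_compl: downstream compliance."""
    dpdt = (q_in - p / r_distal) / c_compl
    return p + dt * dpdt
```

In a coupled simulation the 3D solver passes the outlet flow Q to this 0D model each time step and receives back the pressure P to impose as the outlet boundary condition; under constant inflow the pressure relaxes to Q·R with time constant RC, which is exactly the regulatory feedback a free outflow condition cannot provide.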
Zeng, X.; Scovazzi, G.
2016-06-01
We present a monolithic arbitrary Lagrangian-Eulerian (ALE) finite element method for computing highly transient flows with strong shocks. We use a variational multiscale (VMS) approach to stabilize a piecewise-linear Galerkin formulation of the equations of compressible flows, and an entropy artificial viscosity to capture strong solution discontinuities. Our work demonstrates the feasibility of VMS methods for highly transient shock flows, an area of research for which the VMS literature is extremely scarce. In addition, the proposed monolithic ALE method is an alternative to the more commonly used Lagrangian+remap methods, in which, at each time step, a Lagrangian computation is followed by mesh smoothing and remap (conservative solution interpolation). Lagrangian+remap methods are the methods of choice in shock hydrodynamics computations because they provide nearly optimal mesh resolution in proximity of shock fronts. However, Lagrangian+remap methods are not well suited for imposing inflow and outflow boundary conditions. These issues offer an additional motivation for the proposed approach, in which we first perform the mesh motion, and then the flow computations using the monolithic ALE framework. The proposed method is second-order accurate and stable, as demonstrated by extensive numerical examples in two and three space dimensions.
Multiscale modeling and distributed computing to predict cosmesis outcome after a lumpectomy
Garbey, M.; Salmon, R.; Thanoon, D.; Bass, B. L.
2013-07-01
Surgery for early stage breast carcinoma is either total mastectomy (complete breast removal) or surgical lumpectomy (only tumor removal). The lumpectomy or partial mastectomy is intended to preserve a breast that satisfies the woman's cosmetic, emotional and physical needs. But in a fairly large number of cases the cosmetic outcome is not satisfactory. Today, predicting the outcome of that surgery is essentially based on heuristics. Modeling such a complex process must encompass multiple scales, in space from cells to tissue, as well as in time, from minutes for the tissue mechanics to months for healing. The goal of this paper is to present a first step in multiscale modeling of the long-time-scale prediction of breast shape after tumor resection. This task requires coupling very different mechanical and biological models with very different computing needs. We provide a simple illustration of the application of heterogeneous distributed computing and modular software design to speed up the model development. Our computational framework serves currently to test hypotheses on breast tissue healing in a pilot study with women who have been elected to undergo BCT and are being treated at the Methodist Hospital in Houston, TX.
Caiazzo, A.; Evans, D.; Falcone, J.-L.; Hegewald, J.; Lorenz, E.; Stahl, B.; Wang, D.; Bernsdorf, J.; Chopard, B.; Gunn, J.; Hose, R.; Krafczyk, M.; Lawford, P.; Smallwood, R.; Walker, D.; Hoekstra, A.
2011-01-01
In-stent restenosis, the maladaptive response of a blood vessel to injury caused by the deployment of a stent, is a multiscale system involving a large number of biological and physical processes. We describe a Complex Automata model for in-stent restenosis, coupling bulk flow, drug diffusion, and s
Multi-scale computational method for elastic bodies with global and local heterogeneity
Takano, Naoki; Zako, Masaru; Ishizono, Manabu
2000-05-01
A multi-scale computational method using the homogenization theory and the finite element mesh superposition technique is presented for the stress analysis of composite materials and structures from both micro- and macroscopic standpoints. The proposed method is based on the continuum mechanics, and the micro-macro coupling effects are considered for a variety of composites with very complex microstructures. To bridge the gap of the length scale between the microscale and the macroscale, the homogenized material model is basically used. The classical homogenized model can be applied to the case that the microstructures are periodically arrayed in the structure and that the macroscopic strain field is uniform within the microscopic unit cell domain. When these two conditions are satisfied, the homogenization theory provides the most reliable homogenized properties rigorously to the continuum mechanics. This theory can also calculate the microscopic stresses as well as the macroscopic stresses, which is the most attractive advantage of this theory over other homogenizing techniques such as the rule of mixture. The most notable feature of this paper is to utilize the finite element mesh superposition technique along with the homogenization theory in order to analyze cases where non-periodic local heterogeneity exists and the macroscopic field is non-uniform. The accuracy of the analysis using the finite element mesh superposition technique is verified through a simple example. Then, two numerical examples of knitted fabric composite materials and particulate reinforced composite material are shown. In the latter example, a shell-solid connection is also adopted for the cost-effective multi-scale modeling and analysis.
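For contrast with the homogenization theory advocated in this abstract, the simpler rule-of-mixtures estimates it is compared against can be written down directly. The sketch below gives the classical Voigt (uniform-strain) and Reuss (uniform-stress) averages for a multi-phase composite; it is a generic textbook illustration, not the paper's method, and the example moduli in the usage are arbitrary:

```python
def voigt_average(volume_fractions, moduli):
    """Voigt (uniform-strain) rule of mixtures: an upper bound
    on the homogenized modulus, E_V = sum_i f_i E_i."""
    return sum(f * e for f, e in zip(volume_fractions, moduli))

def reuss_average(volume_fractions, moduli):
    """Reuss (uniform-stress) rule of mixtures: a lower bound
    on the homogenized modulus, 1/E_R = sum_i f_i / E_i."""
    return 1.0 / sum(f / e for f, e in zip(volume_fractions, moduli))
```

For, say, 30% matrix at 3 GPa and 70% fiber at 70 GPa the two bounds are far apart, which is precisely why the abstract argues that homogenization theory, which resolves the microscopic stress field rather than averaging it away, is the more reliable bridge between scales.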
Index Structure for the Multi-scale Representation of Multi-dimensional Spatial Data in WebGIS
Zhihan Lv
2010-05-01
To solve the problem that existing data structures cannot support the multi-scale representation of multi-dimensional spatial data in Web Geographic Information Systems (WebGIS), a modified data structure has been put forward: (1) the main tree is derived from a deformation of the region-quadtree index structure, partitioned according to the rules of the pyramid structure; (2) a sub-tree structure supports the overlay of multi-dimensional spatial data; (3) the depth of the tree reflects changes in spatial resolution; (4) every node of the tree is a container of spatial objects. The necessity of generating the index is analyzed and described. The algorithm for generating the index structure, the support for multi-dimensional data, and the query process are discussed. For the same data source, comparative experiments using this structure and a layer-based one show that this index method can represent and search massive multi-dimensional spatial data effectively in WebGIS. The structure has been used in the Shanghai multi-dimensional WebGIS system.
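The two structural ideas in points (3) and (4) above, depth encoding spatial resolution and every node acting as an object container, can be sketched with a minimal region quadtree. This is an illustrative reconstruction, not the paper's data structure; class and method names are hypothetical:

```python
class QuadNode:
    """Region-quadtree node arranged as a pyramid: node depth encodes the
    spatial resolution level, and every node is a container of objects."""
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.objects = []        # objects represented at this scale
        self.children = None     # four quadrants, created on demand

    def _subdivide(self):
        half, d = self.size / 2.0, self.depth + 1
        self.children = [QuadNode(self.x, self.y, half, d),
                         QuadNode(self.x + half, self.y, half, d),
                         QuadNode(self.x, self.y + half, half, d),
                         QuadNode(self.x + half, self.y + half, half, d)]

    def insert(self, px, py, obj, target_depth):
        """Store obj at the resolution level given by target_depth."""
        if self.depth == target_depth:
            self.objects.append(obj)
            return
        if self.children is None:
            self._subdivide()
        for child in self.children:
            if (child.x <= px < child.x + child.size and
                    child.y <= py < child.y + child.size):
                child.insert(px, py, obj, target_depth)
                return

    def query(self, qx, qy, qsize, max_depth):
        """Collect objects intersecting the query window, descending only
        to max_depth (coarser levels serve coarser map scales)."""
        if (qx >= self.x + self.size or qx + qsize <= self.x or
                qy >= self.y + self.size or qy + qsize <= self.y):
            return []
        found = list(self.objects)
        if self.children is not None and self.depth < max_depth:
            for child in self.children:
                found.extend(child.query(qx, qy, qsize, max_depth))
        return found
```

A coarse-scale query stops high in the tree and returns only coarse-level objects, while a fine-scale query descends further, which is how one index serves several map scales at once.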
Multiscale coupling of molecular dynamics and peridynamics
Tong, Qi; Li, Shaofan
2016-10-01
We propose a multiscale computational model to couple molecular dynamics and peridynamics. The multiscale coupling model is based on a previously developed multiscale micromorphic molecular dynamics (MMMD) theory, which has three dynamics equations at three different scales, namely, microscale, mesoscale, and macroscale. In the proposed multiscale coupling approach, we divide the simulation domain into an atomistic region and a macroscale region. Molecular dynamics is used to simulate atom motions in the atomistic region, and peridynamics is used to simulate macroscale material point motions in the macroscale region; both methods are nonlocal particle methods. A transition zone is introduced as a messenger to pass information between the two regions or scales. We employ the "supercell" developed in the MMMD theory as the transition element, which we name the adaptive multiscale element because it can pass information between scales, realizing both top-down and bottom-up communication. We introduce Cauchy-Born rule based stress evaluation into the state-based peridynamics formulation to formulate atomistic-enriched constitutive relations. To mitigate the issue of wave reflection at the interface, a filter is constructed by switching on and off the MMMD dynamic equations at different scales. Benchmark tests of one-dimensional (1-D) and two-dimensional (2-D) wave propagation from the atomistic region to the macro region are presented. The mechanical wave can transit through the interface smoothly without spurious reflections, and the filtering process is shown to be efficient.
Dai, Gaoming; Mishnaevsky, Leon, Jr.
2014-01-01
3D numerical simulations of fatigue damage of multiscale fiber reinforced polymer composites with secondary nanoclay reinforcement are carried out. Macro–micro FE models of the multiscale composites are generated automatically using Python based software. The effect of the nanoclay reinforcement...
An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-06-01
The lens of the eye is a radiosensitive tissue, with cataract formation being the major concern. Recently reduced recommended dose limits for the lens of the eye have made understanding the dose to this tissue increasingly important. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too coarse to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of the radiosensitive tissues, and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.
Johan Debayle
2011-05-01
An image analysis method has been developed in order to compute the velocity field of a granular medium (sand grains, mean diameter 600 μm) submitted to different kinds of mechanical stresses. The differential method based on optical flow conservation consists in describing a dense motion field with vectors associated to each pixel. A multiscale, coarse-to-fine, analytical approach through tailor-sized windows yields the best compromise between accuracy and robustness of the results, while enabling an acceptable computation time. The corresponding algorithm is presented and its validation discussed through different tests. The results of the validation tests of the proposed approach show that the method is satisfactory when attributing specific values to parameters in association with the size of the image analysis window. An application in the case of vibrated sand has been studied. An instrumented laboratory device provides sinusoidal vibrations and enables external optical observations of sand motion in 3D transparent boxes. At 50 Hz, by increasing the relative acceleration G, the onset and development of two convective rolls can be observed. An ultra fast camera records the grain avalanches, and several pairs of images are analysed by the proposed method. The vertical velocity profiles are deduced and allow to precisely quantify the dimensions of the fluidized region as a function of G.
Mechanics of low-dimensional carbon nanostructures: Atomistic, continuum, and multi-scale approaches
Mahdavi, Arash
A new multiscale modeling technique called the Consistent Atomic-scale Finite Element (CAFE) method is introduced. Unlike traditional approaches for linking the atomic structure to its equivalent continuum, this method directly connects the atomic degrees of freedom to a reduced set of finite element degrees of freedom without passing through an intermediate homogenized continuum. As a result, there is no need to introduce stress and strain measures at the atomic level. The Tersoff-Brenner interatomic potential is used to calculate the consistent tangent stiffness matrix of the structure. In this finite element formulation, all local and non-local interactions between carbon atoms are taken into account using overlapping finite elements. In addition, a consistent hierarchical finite element modeling technique is developed for adaptively coarsening and refining the mesh over different parts of the model. This process is consistent with the underlying atomic structure and, by refining the mesh to the scale of atomic spacing, molecular dynamics results can be recovered. This method is valid across the scales and can be used to concurrently model atomistic and continuum phenomena so, in contrast with most other multi-scale methods, there is no need to introduce artificial boundaries for coupling atomistic and continuum regions. The effect of the length scale of the nanostructure is also included in the model by building the hierarchy of elements from the bottom up using a finite size atom cluster as the building block. To be consistent with the Bravais multi-lattice structure of sp2-bonded carbon, two independent displacement fields are used for reducing the order of the model. The sparse structure of the stiffness matrix of these nanostructures is exploited to reduce the memory requirement and to speed up the formation of the system matrices and solution of the equilibrium equations. Applicability of the method is shown with several examples of the nonlinear mechanics of carbon
CODE BLUE: Three dimensional massively-parallel simulation of multi-scale configurations
Juric, Damir; Kahouadji, Lyes; Chergui, Jalel; Shin, Seungwon; Craster, Richard; Matar, Omar
2016-11-01
We present recent progress on BLUE, a solver for massively parallel simulations of fully three-dimensional multiphase flows which runs on a variety of computer architectures from laptops to supercomputers and on 131072 threads or more (limited only by the availability to us of more threads). The code is wholly written in Fortran 2003 and uses a domain decomposition strategy for parallelization with MPI. The fluid interface solver is based on a parallel implementation of a hybrid Front Tracking/Level Set method designed to handle highly deforming interfaces with complex topology changes. We developed parallel GMRES and multigrid iterative solvers suited to the linear systems arising from the implicit solution for the fluid velocities and pressure in the presence of strong density and viscosity discontinuities across fluid phases. Particular attention is drawn to the details and performance of the parallel Multigrid solver. EPSRC UK Programme Grant MEMPHIS (EP/K003976/1).
Runge, Keith; Muralidharan, Krishna
2016-01-01
This book presents cutting-edge concepts, paradigms, and research highlights in the field of computational materials science and engineering, and provides a fresh, up-to-date perspective on solving present and future materials challenges. The chapters are written by not only pioneers in the fields of computational materials chemistry and materials science, but also experts in multi-scale modeling and simulation as applied to materials engineering. Pedagogical introductions to the different topics and continuity between the chapters are provided to ensure the appeal to a broad audience and to address the applicability of integrated computational materials science and engineering for solving real-world problems.
Computations in finite-dimensional Lie algebras
A. M. Cohen
1997-12-01
This paper describes progress made in context with the construction of a general library of Lie algebra algorithms, called ELIAS (Eindhoven Lie Algebra System), within the computer algebra package GAP. A first sketch of the package can be found in Cohen and de Graaf [1]. Since then, in a collaborative effort with G. Ivanyos, the authors have continued to develop algorithms which were implemented in ELIAS by the second author. These activities are part of a bigger project, called ACELA and financed by STW, the Dutch Technology Foundation, which aims at an interactive book on Lie algebras (cf. Cohen and Meertens [2]). This paper gives a global description of the main ways in which to present Lie algebras on a computer. We focus on the transition from a Lie algebra abstractly given by an array of structure constants to a Lie algebra presented as a subalgebra of the Lie algebra of n×n matrices. We describe an algorithm typical of the structure analysis of a finite-dimensional Lie algebra: finding a Levi subalgebra of a Lie algebra.
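The transition from structure constants to a matrix Lie algebra has a standard first step: the adjoint representation, which sends each basis element e_i to the matrix of ad(e_i) = [e_i, -] and is faithful for semisimple algebras. The sketch below (an illustrative reconstruction in plain Python, not ELIAS/GAP code) builds these matrices and checks the homomorphism property on sl2:

```python
def matmul(a, b):
    """Product of two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    n = len(a)
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(n)] for i in range(n)]

def adjoint_matrices(c):
    """Given structure constants c[i][j][k], i.e.
    [e_i, e_j] = sum_k c[i][j][k] e_k, return the adjoint matrices
    ad(e_i), with entries (ad e_i)[k][j] = c[i][j][k]."""
    n = len(c)
    return [[[c[i][j][k] for j in range(n)] for k in range(n)]
            for i in range(n)]
```

By the Jacobi identity, ad is a Lie algebra homomorphism: commutator(ad(x), ad(y)) equals ad([x, y]), so the abstract bracket encoded in the structure constants is reproduced by honest matrix commutators.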
Crops in silico: A community wide multi-scale computational modeling framework of plant canopies
Srinivasan, V.; Christensen, A.; Borkiewic, K.; Yiwen, X.; Ellis, A.; Panneerselvam, B.; Kannan, K.; Shrivastava, S.; Cox, D.; Hart, J.; Marshall-Colon, A.; Long, S.
2016-12-01
Current crop models predict a looming gap between supply and demand for primary foodstuffs over the next 100 years. While significant yield increases were achieved in major food crops during the early years of the green revolution, the current rates of yield increases are insufficient to meet future projected food demand. Furthermore, with projected reduction in arable land, decrease in water availability, and increasing impacts of climate change on future food production, innovative technologies are required to sustainably improve crop yield. To meet these challenges, we are developing Crops in silico (Cis), a biologically informed, multi-scale, computational modeling framework that can facilitate whole plant simulations of crop systems. The Cis framework is capable of linking models of gene networks, protein synthesis, metabolic pathways, physiology, growth, and development in order to investigate crop response to different climate scenarios and resource constraints. This modeling framework will provide the mechanistic details to generate testable hypotheses toward accelerating directed breeding and engineering efforts to increase future food security. A primary objective for building such a framework is to create synergy among an inter-connected community of biologists and modelers to create a realistic virtual plant. This framework advantageously casts the detailed mechanistic understanding of individual plant processes across various scales in a common scalable framework that makes use of current advances in high performance and parallel computing. We are currently designing a user friendly interface that will make this tool equally accessible to biologists and computer scientists. Critically, this framework will provide the community with much needed tools for guiding future crop breeding and engineering, understanding the emergent implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem
Review on multiscale modeling and computation
张廼龙; 郭小明
2011-01-01
Multiscale problems and multiscale methods are briefly reviewed in this paper. Some multiscale theories and their recent achievements in the modeling and computation of two typical categories of multiscale problems are introduced. The advantages and disadvantages of the different methods are analyzed to evaluate their applicability, and the limitations and open problems of existing research are discussed, pointing out the need for further study of multiscale modeling and computation. The methods covered include the non-local quasi-continuum method, the Macro-Atomistic Ab initio Dynamics (MAAD) method, the Coarse-Grained Molecular Dynamics (CGMD) method, the Coarse-Grained Monte Carlo (CGMC) method, and coupled continuum-MD models for problems with isolated defects, as well as the local quasi-continuum method, the artificial compressibility method, gas-kinetic schemes, and the Heterogeneous Multiscale Method (HMM) for constitutive modeling problems. Finally, prospects for multiscale modeling and computation are outlined and some problems requiring urgent solution are identified.
Dwaipayan Mukherjee
A computational, multiscale toxicodynamic model has been developed to quantify and predict pulmonary effects due to uptake of engineered nanomaterials (ENMs) in mice. The model consists of a collection of coupled toxicodynamic modules that were independently developed and tested using information obtained from the literature. The modules were developed to describe the dynamics of tissue with explicit focus on the cells and the surfactant chemicals that regulate the process of breathing, as well as the response of the pulmonary system to xenobiotics. Alveolar type I and type II cells and alveolar macrophages were included in the model, along with surfactant phospholipids and surfactant proteins, to account for processes occurring at multiple biological scales, coupling cellular and surfactant dynamics affected by nanoparticle exposure, and linking the effects to tissue-level lung function changes. Nanoparticle properties such as size, surface chemistry, and zeta potential were explicitly considered in modeling the interactions of these particles with biological media. The model predictions were compared with in vivo lung function response measurements in mice and analysis of mouse lung lavage fluid following exposures to silver and carbon nanoparticles. The predictions were found to follow the trends of observed changes in mouse surfactant composition over 7 days post-dosing, and are in good agreement with the observed changes in mouse lung function over the same period of time.
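The coupled-module idea can be illustrated with a deliberately minimal toy system. The state variables, rate constants, and forward-Euler integration below are illustrative assumptions only, not the published model:

```python
# Toy sketch of two coupled "toxicodynamic modules": a normalized surfactant
# pool S and a macrophage count M responding to a nanoparticle burden N.
# All rate constants are hypothetical; a real model would have many more
# states and calibrated parameters.
def simulate(n0=1.0, days=7.0, dt=0.01):
    S, M, N = 1.0, 1.0, n0            # normalized initial states
    k_dep, k_rec = 0.5, 0.3           # illustrative rate constants
    k_clear, k_relax = 0.2, 0.4
    for _ in range(int(days / dt)):
        dS = k_relax * (1.0 - S) - k_dep * N * S   # depletion vs. recovery
        dM = k_rec * N - k_clear * (M - 1.0)       # recruitment vs. return
        dN = -k_clear * M * N                      # clearance by macrophages
        S += dt * dS
        M += dt * dM
        N += dt * dN
    return S, M, N

S, M, N = simulate()
```

The point of the module structure is that each equation can be developed and tested against its own literature before the states are coupled through the shared variables.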
A Multiscale Computational Model of the Response of Swine Epidermis After Acute Irradiation
Hu, Shaowen; Cucinotta, Francis A.
2012-01-01
Radiation exposure from Solar Particle Events can lead to very high skin dose for astronauts on exploration missions outside the protection of the Earth s magnetic field [1]. Assessing the detrimental effects to human skin under such adverse conditions could be predicted by conducting territorial experiments on animal models. In this study we apply a computational approach to simulate the experimental data of the radiation response of swine epidermis, which is closely similar to human epidermis [2]. Incorporating experimentally measured histological and cell kinetic parameters into a multiscale tissue modeling framework, we obtain results of population kinetics and proliferation index comparable to unirradiated and acutely irradiated swine experiments [3]. It is noted the basal cell doubling time is 10 to 16 days in the intact population, but drops to 13.6 hr in the regenerating populations surviving irradiation. This complex 30-fold variation is proposed to be attributed to the shortening of the G1 phase duration. We investigate this radiation induced effect by considering at the sub-cellular level the expression and signaling of TGF-beta, as it is recognized as a key regulatory factor of tissue formation and wound healing [4]. This integrated model will allow us to test the validity of various basic biological rules at the cellular level and sub-cellular mechanisms by qualitatively comparing simulation results with published research, and should lead to a fuller understanding of the pathophysiological effects of ionizing radiation on the skin.
A Multi-Scale Computational Study on the Mechanism of Streptococcus pneumoniae Nicotinamidase (SpNic)
Bogdan F. Ion
2014-09-01
Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The overall mechanism occurs in two stages: (i) formation of a thioester enzyme intermediate (IC2) and (ii) hydrolysis of the thioester bond to give the products. The polar protein environment has a significant effect in stabilizing reaction intermediates and in particular transition states. As a result, both stages effectively occur in one step, with Stage 1, the formation of IC2, being rate limiting with a barrier of 53.5 kJ·mol−1 with respect to the reactant complex, RC. The effects of dispersion interactions on the overall mechanism were also considered but were generally calculated to be less significant, the overall mechanism being unchanged. In addition, the active site lysyl (Lys103) is concluded to likely play a role in stabilizing the thiolate of Cys136 during the reaction.
Liu, Yong; Gao, Yuan; Lu, Qinghua; Zhou, Yongfeng; Yan, Deyue
2011-12-01
Inspired by nature's strategy for preparing collagen, herein we report a hierarchical solution self-assembly method to prepare multi-dimensional and multi-scale supra-structures from building blocks of pristine titanate nanotubes (TNTs) around 10 nm in size. With the help of amylose, the nanotubes were continuously self-assembled into helically wrapped TNTs, highly aligned fibres, large bundles, 2D crystal facets and 3D core-shell hybrid crystals. The amylose works as a glue molecule to drive and direct the hierarchical self-assembly process extending from the microscopic to the macroscopic scale. The whole self-assembly process as well as the resulting structures were carefully characterized by a combination of 1H NMR, CD, HR-SEM, AFM, HR-TEM, SAED pattern and EDX measurements. A hierarchical self-assembly mechanism is also proposed. Electronic supplementary information (ESI) available: characterization of the A/TNTs and TNT crystals. See DOI: 10.1039/c1nr11151e
Computer-Generated, Three-Dimensional Character Animation.
Van Baerle, Susan Lynn
This master's thesis begins by discussing the differences between 3-D computer animation of solid three-dimensional, or monolithic, objects, and the animation of characters, i.e., collections of movable parts with soft pliable surfaces. Principles from two-dimensional character animation that can be transferred to three-dimensional character…
Microstructural analysis of TRISO particles using multi-scale X-ray computed tomography
Lowe, T.; Bradley, R. S.; Yue, S.; Barii, K.; Gelb, J.; Rohbeck, N.; Turner, J.; Withers, P. J.
2015-06-01
TRISO particles, a composite nuclear fuel built up from ceramic and graphitic layers, have outstanding high temperature resistance. TRISO fuel is the key technology for High Temperature Reactors (HTRs) and the Generation IV Very High Temperature Reactor (VHTR) variant. TRISO offers unparalleled containment of fission products and is extremely robust during accident conditions. An understanding of the thermal performance and mechanical properties of TRISO fuel requires a detailed knowledge of pore sizes, their distribution and interconnectivity. Here nano-scale X-ray computed tomography (CT) at 50 nm resolution and micro-CT at 1 μm resolution have been used to non-destructively quantify the porosity of a surrogate TRISO particle at the 0.3-10 μm and 3-100 μm scales, respectively. This indicates that pore distributions can reliably be measured down to a size approximately 3 times the pixel size, which is consistent with the segmentation process. Direct comparison with Scanning Electron Microscopy (SEM) sections indicates that destructive sectioning can introduce significant levels of coarse damage, especially in the pyrolytic carbon layers. Further comparative work is required to identify means of minimizing such damage for SEM studies. Finally, since it is non-destructive, multi-scale time-lapse X-ray CT opens the possibility of intermittently tracking the degradation of TRISO structure under thermal cycles or radiation conditions in order to validate models of degradation such as kernel movement. X-ray CT in-situ experimentation of TRISO particles under load and temperature could also be used to understand the internal changes that occur in the particles under accident conditions.
Carlton, Holly D; Elmer, John W; Li, Yan; Pacheco, Mario; Goyal, Deepak; Parkinson, Dilworth Y; MacDowell, Alastair A
2016-04-13
Synchrotron radiation micro-tomography (SRµT) is a non-destructive three-dimensional (3D) imaging technique that offers high flux for fast data acquisition times with high spatial resolution. In the electronics industry there is serious interest in performing failure analysis on 3D microelectronic packages, many of which contain multiple levels of high-density interconnections. Often in tomography there is a trade-off between image resolution and the volume of a sample that can be imaged. This inverse relationship limits the usefulness of conventional computed tomography (CT) systems, since a microelectronic package is often large in cross-sectional area (100-3,600 mm²) but has important features on the micron scale. The micro-tomography beamline at the Advanced Light Source (ALS), in Berkeley, CA, USA, has a setup which is adaptable and can be tailored to a sample's properties, i.e., density, thickness, etc., with a maximum allowable cross-section of 36 × 36 mm. This setup also has the option of being either monochromatic in the energy range ~7-43 keV or operating with maximum flux in white-light mode using a polychromatic beam. Presented here are details of the experimental steps taken to image an entire 16 × 16 mm system within a package, in order to obtain 3D images of the system with a spatial resolution of 8.7 µm, all within a scan time of less than 3 min. Also shown are results from packages scanned in different orientations and a sectioned package for higher resolution imaging. In contrast, a conventional CT system would take hours to record data with potentially poorer resolution. Indeed, the ratio of field-of-view to throughput time is much higher when using the synchrotron radiation tomography setup. The description below of the experimental setup can be implemented and adapted for use with many other multi-material samples.
Vermesi, Izabella; Rein, Guillermo; Colella, Francesco
2017-01-01
directly. The feasibility analysis showed a difference of only 2% in temperature results from the published reference work that was performed with Ansys Fluent (Colella et al., 2010). The reduction in simulation time was significantly larger when using multiscale modelling than when performing multiple...
Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics
Wijesinghe, S; Hornung, R; Garcia, A; Hadjiconstantinou, N
2004-04-15
We present an adaptive mesh and algorithmic refinement (AMAR) scheme for modeling multi-scale hydrodynamics. The AMAR approach extends standard conservative adaptive mesh refinement (AMR) algorithms by providing a robust flux-based method for coupling an atomistic fluid representation to a continuum model. The atomistic model is applied locally in regions where the continuum description is invalid or inaccurate, such as near strong flow gradients and at fluid interfaces, or when the continuum grid is refined to the molecular scale. The need for such "hybrid" methods arises from the fact that hydrodynamics modeled by continuum representations are often under-resolved or inaccurate, while solutions generated using molecular resolution globally are not feasible. In the implementation described herein, Direct Simulation Monte Carlo (DSMC) provides an atomistic description of the flow and the compressible two-fluid Euler equations serve as our continuum-scale model. The AMR methodology provides local grid refinement while the algorithm refinement feature allows the transition to DSMC where needed. The continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. Our AMAR application code is implemented in C++ and is built upon the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) framework developed at Lawrence Livermore National Laboratory. SAMRAI provides the parallel adaptive gridding algorithm and enables the coupling between the continuum and atomistic methods.
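The essential property of flux-based coupling is that both descriptions agree on a single value for each interface flux, so conservation holds regardless of which model supplied that value. A minimal 1D finite-volume sketch of this property (the grid, the upwind flux, and the perturbed "atomistic" flux estimate are all hypothetical, not the AMAR implementation):

```python
import random

# Toy 1D periodic advection in conservative (flux) form. The flux at
# interface 3 is supplied by a hypothetical noisy "atomistic" estimate
# instead of the continuum upwind formula; because the two neighboring
# cells see the same interface flux, total mass is conserved exactly.
def step(rho, u=1.0, dx=1.0, dt=0.5):
    n = len(rho)
    flux = [0.0] * (n + 1)
    for i in range(1, n):
        flux[i] = u * rho[i - 1]          # continuum upwind flux
    flux[0] = flux[n] = u * rho[-1]       # periodic boundary
    # hypothetical atomistic flux estimate at one interface
    flux[3] = u * rho[2] * (1.0 + 0.1 * random.random())
    return [rho[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

random.seed(0)
rho = [1.0, 2.0, 3.0, 2.0, 1.0, 0.5]
mass0 = sum(rho)
for _ in range(10):
    rho = step(rho)
assert abs(sum(rho) - mass0) < 1e-12     # flux form telescopes: mass conserved
```

The interface-flux sum telescopes cell by cell, which is why AMAR can swap model descriptions locally without losing conservation.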
Ince, Nuri Firat; Tadipatri, Vijay Aditya; Göksu, Fikri; Tewfik, Ahmed H
2009-01-01
Multichannel neural activities such as EEG or ECoG in a brain computer interface can be classified with subset selection algorithms running on large feature dictionaries describing subject-specific features in the spectral, temporal and spatial domains. While providing high classification accuracies, subset selection techniques are associated with long training times due to the large feature set constructed from multichannel neural recordings. In this paper we study a novel denoising technique for reducing the dimensionality of the feature space, which decreases the computational complexity of the subset selection step radically without causing any degradation in the final classification accuracy. The denoising procedure is based on comparing the energy in a particular time segment at a given scale/level to the energy of the raw data. By setting the denoising threshold a priori, the algorithm removes those nodes which fail to capture the energy of the raw data at a given scale. We provide experimental studies towards the classification of motor-imagery-related multichannel ECoG recordings for a brain computer interface. The denoising procedure reached the same classification accuracy as without denoising, with a computational complexity around 5 times smaller. We also note that in some cases the denoising procedure led to better classification.
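The pruning rule can be sketched as follows; the Haar-like two-band split and the 0.1 threshold are illustrative assumptions, not the authors' exact decomposition:

```python
# Sketch: discard decomposition nodes whose energy captures less than a
# preset fraction of the raw-signal energy in the same time segment.
def energy(x):
    return sum(v * v for v in x)

def haar_split(x):
    # one level of an (unnormalized) Haar-like analysis: approx/detail bands
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return a, d

def prune_nodes(segment, threshold=0.1):
    e_raw = energy(segment)
    kept = []
    for name, band in zip(("approx", "detail"), haar_split(segment)):
        # keep only nodes whose energy ratio exceeds the a-priori threshold
        if energy(band) / e_raw >= threshold:
            kept.append(name)
    return kept

# a slowly varying segment: energy concentrates in the approximation band
print(prune_nodes([1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7]))  # → ['approx']
```

Only the surviving nodes would then feed the (expensive) subset selection step, which is where the ~5× complexity reduction comes from.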
Three-Dimensional Computational Fluid Dynamics
Haworth, D.C.; O' Rourke, P.J.; Ranganathan, R.
1998-09-01
Computational fluid dynamics (CFD) is one discipline falling under the broad heading of computer-aided engineering (CAE). CAE, together with computer-aided design (CAD) and computer-aided manufacturing (CAM), comprise a mathematical-based approach to engineering product and process design, analysis and fabrication. In this overview of CFD for the design engineer, our purposes are three-fold: (1) to define the scope of CFD and motivate its utility for engineering, (2) to provide a basic technical foundation for CFD, and (3) to convey how CFD is incorporated into engineering product and process design.
Computation of Three-Dimensional Combustor Performance
Srivatsa, S.
1985-01-01
An existing steady-state 3-D computer program for calculating gas-turbine flow fields was modified to include computation of soot and nitrogen oxide emissions. In addition, the radiation calculation was corrected for soot particles. These advanced tools offer the potential of reducing the design and development time required for gas-turbine combustors.
Singh, H.; Gokhale, A. M.; Mao, Y.; Tewari, A.; Sachdev, A. K.
2009-12-01
The serial sectioning technique is well known for the reconstruction of three-dimensional (3D) microstructures of opaque materials. In recent years, techniques have also been developed for the reconstruction of high-fidelity, large-volume segments of 3D microstructures that use montage serial sections and robot-assisted automated acquisition of montage serial sections. This article reports the reconstruction of the multiphase, multiscale 3D microstructure of a permanent mold cast unmodified Al-12 wt pct Si-1 wt pct Ni base alloy that contains eutectic Si platelets, coarse primary polyhedral Si particles, Fe-rich script intermetallic particles, and pores. These constituents are segmented, reconstructed, rendered, and characterized in three dimensions. The estimated 3D microstructural attributes include the distribution of eutectic platelet thickness; the mean volume, mean surface area, and mean thickness of the eutectic Si platelets; the mean volume and the mean surface area of the polyhedral primary Si particles; and the mean number of faces, edges, and corners on the polyhedral primary Si particles.
Three dimensional multi-scale visual words for texture-based cerebellum segmentation
Foncubierta-Rodríguez, Antonio; Depeursinge, Adrien; Gui, Laura; Müller, Henning
2012-02-01
Segmentation of the various parts of the brain is a challenging area in medical imaging and a prerequisite for many image analysis tasks useful for clinical research. Advances have been made in generating brain image templates that can be registered to automatically segment regions of interest in the human brain. However, these methods may fail with some subjects if there is a significant shape distortion or difference from the proposed models. This is also the case for newborns, where the developing brain strongly differs from adult magnetic resonance imaging (MRI) templates. In this article, a texture-based cerebellum segmentation method is described. The algorithm presented does not use any prior spatial knowledge to segment the MRI images. Instead, the system learns the texture features by means of multi-scale filtering and visual-words feature aggregation. Visual words are a commonly used technique in image retrieval: instead of using visual features directly, the features of specific regions are modeled (clustered) into groups of discriminative features. This means that the final feature space can be reduced in size and also that the visual words in local regions are really discriminative for the given data set. The system is currently trained and tested with a dataset of 18 adult brain MRIs. An extension to newborn brain images is foreseen, as this could highlight the advantages of the proposed technique. Results show that the use of texture features can be valuable for the task described and can lead to good results. The use of visual words can potentially improve the robustness of existing shape-based techniques for cases with significant shape distortion or other differences from the models. As visual-words-based techniques do not assume any prior knowledge, they could be used for other types of segmentation as well, using a large variety of basic visual features.
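The visual-words pipeline (cluster local filter responses into a small vocabulary, then describe a region by the histogram of word assignments) can be sketched with hypothetical one-dimensional features and a minimal k-means; real systems cluster high-dimensional multi-scale filter responses:

```python
import random

# Build a tiny "visual words" vocabulary by k-means on scalar features,
# then represent a region as a bag-of-words histogram over that vocabulary.
def kmeans_1d(xs, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers)

def bag_of_words(features, centers):
    hist = [0] * len(centers)
    for x in features:
        hist[min(range(len(centers)), key=lambda j: abs(x - centers[j]))] += 1
    return hist

# two well-separated response populations -> two discriminative words
feats = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
words = kmeans_1d(feats)
print(bag_of_words(feats, words))  # → [3, 3]
```

The histogram, not the raw features, becomes the classifier input, which is how the feature space is "reduced in size" as the abstract describes.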
Research on Three Dimensional Computer Assistance Assembly Process Design System
HOU Wenjun; YAN Yaoqi; DUAN Wenjia; SUN Hanxu
2006-01-01
Computer-aided process planning will certainly play a significant role in the success of enterprise informationization, and 3-dimensional design will promote three-dimensional process planning. This article analyzes the current situation and problems of assembly process planning, presents a 3-dimensional computer-aided process planning system (3D-VAPP), and investigates product information extraction, assembly sequence and path planning in visual interactive assembly process design, dynamic simulation of assembly and process verification, assembly animation output and automatic exploded-view generation, interactive craft filling and craft knowledge management, etc. It also gives a multi-layer collision detection and multi-perspective automatic camera switching algorithm. Experiments were done to validate the feasibility of the technology and algorithms, which establishes the foundation of three-dimensional computer-aided process planning.
Computations in finite-dimensional Lie algebras
Cohen, A.M.; Graaf, W.A. de; Rónyai, L.
2001-01-01
This paper describes progress made in the context of the construction of a general library of Lie algebra algorithms, called ELIAS (Eindhoven Lie Algebra System), within the computer algebra package GAP. A first sketch of the package can be found in Cohen and de Graaf [1]. Since then, in a collaborative
Multiscale modeling and computation of nano-electronic transistors and transmembrane proton channels
Chen, Duan
The miniaturization of nano-scale electronic transistors, such as metal oxide semiconductor field effect transistors (MOSFETs), has given rise to a pressing demand for new theoretical understanding of, and practical tactics for dealing with, quantum mechanical effects in integrated circuits. In biology, proton dynamics and transport across membrane proteins are of paramount importance to the normal function of living cells. Similar physical characteristics are behind the two subjects, and model simulations share common mathematical interests and challenges. In this thesis work, multiscale and multiphysical models are proposed to study the mechanisms of nano-transistors and of proton transport across transmembrane proteins at the atomic level. For nano-electronic transistors, we introduce a unified two-scale energy functional to describe the electrons and the continuum electrostatic potential. This framework enables us to put microscopic and macroscopic descriptions on an equal footing at the nano-scale. Additionally, this model includes layered structures and the random doping effect of nano-transistors. For transmembrane proton channels, we describe proton dynamics quantum mechanically via a density functional approach while implicitly treating the numerous solvent molecules as a dielectric continuum. The densities of all other ions in the solvent are assumed to obey the Boltzmann distribution. The impact of the protein molecular structure and its charge polarization on proton transport is considered in atomic detail. We formulate a total free energy functional that includes the kinetic and potential energies of protons, as well as the electrostatic energy of all other ions, on an equal footing. For both nano-transistor and proton channel systems, the variational principle is employed to derive nonlinear governing equations. The Poisson-Kohn-Sham equations are derived for nano-transistors while the generalized Poisson-Boltzmann equation and Kohn-Sham equation are obtained for proton channels. Related numerical
Pan, Wenxiao; Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em
2011-01-01
We compare the predictive capability of two mathematical models for red blood cells (RBCs) focusing on blood flow in capillaries and arterioles. Both RBC models as well as their corresponding blood flows are based on the dissipative particle dynamics (DPD) method, a coarse-grained molecular dynamics approach. The first model employs a multiscale description of the RBC (MS-RBC), with its membrane represented by hundreds or even thousands of DPD-particles connected by springs into a triangular network in combination with out-of-plane elastic bending resistance. Extra dissipation within the network accounts for membrane viscosity, while the characteristic biconcave RBC shape is achieved by imposition of constraints for constant membrane area and constant cell volume. The second model is based on a low-dimensional description (LD-RBC) constructed as a closed torus-like ring of only 10 large DPD colloidal particles. They are connected into a ring by worm-like chain (WLC) springs combined with bending resistance. The LD-RBC model can be fitted to represent the entire range of nonlinear elastic deformations as measured by optical-tweezers for healthy and for infected RBCs in malaria. MS-RBCs suspensions model the dynamics and rheology of blood flow accurately for any vessel size but this approach is computationally expensive for vessel diameters above 100 microns. Surprisingly, the much more economical suspensions of LD-RBCs also capture the blood flow dynamics and rheology accurately except for small-size vessels comparable to RBC diameter. In particular, the LD-RBC suspensions are shown to properly capture the experimental data for the apparent viscosity of blood and its cell-free layer (CFL) in tube flow. Taken together, these findings suggest a hierarchical approach in modeling blood flow in the arterial tree, whereby the MS-RBC model should be employed for capillaries and arterioles below 100 microns, the LD-RBC model for arterioles, and the continuum description for
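The worm-like chain (WLC) springs mentioned above are commonly realized with the Marko-Siggia interpolation formula; the parameter values below are illustrative, not those of the cited MS-RBC or LD-RBC models:

```python
# Marko-Siggia worm-like chain (WLC) interpolation, as commonly used for the
# springs of coarse-grained RBC membrane networks. kT, persistence length p,
# and maximum (contour) length L are illustrative values only.
def wlc_force(x, L=1.0, p=0.05, kT=1.0):
    """Restoring force magnitude at spring extension x, 0 <= x < L."""
    r = x / L
    return (kT / p) * (1.0 / (4.0 * (1.0 - r) ** 2) - 0.25 + r)

# vanishes at zero extension and stiffens sharply as x approaches L
print(wlc_force(0.0), wlc_force(0.5), wlc_force(0.9))
```

In the MS-RBC model, thousands of such springs on a triangulated surface, combined with bending resistance and area/volume constraints, reproduce the biconcave shape; the sketch above shows only the single-spring force law.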
Gavrishchaka, Valeriy; Ganguli, Supriya
2001-10-01
Predictive capabilities of data-driven models of systems with complex multi-scale dynamics depend on the quality and amount of the available data and on the algorithms used to extract generalized mappings. The availability of real-time high-resolution data constantly increases in many fields of practical interest. However, the majority of advanced nonlinear algorithms, including neural networks (NNs), can encounter a set of problems called the "curse of dimensionality" when applied to high-dimensional data. Nonstationarity of the system can also impose significant limitations on the size of the training set, which leads to poor generalization ability of the model. A very promising algorithm that combines the power of the best nonlinear techniques with tolerance to high-dimensional and incomplete data is the support vector machine (SVM). We have summarized and demonstrated the advantages of the SVM by applying it to two important and challenging problems: substorm forecasting from solar wind data and volatility forecasting from multi-scale stock and exchange market data. We have shown that the performance of the SVM model for substorm prediction can be comparable or superior to that of the best existing models, including NNs. The advantages of SVM-based techniques are expected to be much more pronounced in future space-weather forecasting models, which will incorporate many types of high-dimensional, multi-scale input data once real-time availability of this information becomes technologically feasible. We have also demonstrated encouraging performance of the SVM in application to volatility prediction using S&P 500 stock index and USD-DM exchange rate data. Future applications of the SVM in the emerging field of high-frequency finance and its relation to existing models are also discussed.
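One way multi-scale market inputs might be assembled before being fed to any classifier such as an SVM is one volatility estimate per time scale; the window lengths and the realized-volatility proxy below are illustrative assumptions, not the authors' feature set:

```python
import math

# Build one trailing realized-volatility feature per time scale from a
# price series (hypothetical windows; a real pipeline would add many more
# features and feed them to an SVM or other classifier).
def returns(prices):
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def realized_vol(rets):
    m = sum(rets) / len(rets)
    return math.sqrt(sum((r - m) ** 2 for r in rets) / len(rets))

def multiscale_features(prices, windows=(2, 4, 8)):
    rets = returns(prices)
    # one volatility estimate per scale, computed on the trailing window
    return [realized_vol(rets[-w:]) for w in windows]

prices = [100, 101, 100.5, 102, 101, 103, 102.5, 104, 103]
print(multiscale_features(prices))
```

Stacking estimates from several horizons is what makes the input "multi-scale": fast and slow volatility regimes enter the model as separate coordinates.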
This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...
Pârvu, Ovidiu; Gilbert, David
2016-01-01
Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour
Computational methods for three-dimensional microscopy reconstruction
Frank, Joachim
2014-01-01
Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology. Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.
Residual-driven online generalized multiscale finite element methods
Chung, Eric T.
2015-09-08
The construction of local reduced-order models via multiscale basis functions has been an area of active research. In this paper, we propose online multiscale basis functions which are constructed using the offline space and the current residual. Online multiscale basis functions are constructed adaptively in selected regions based on our error indicators. We derive an error estimator which shows that the offline space needs to have certain properties to guarantee that additional online multiscale basis functions will decrease the error. This error decrease is independent of physical parameters, such as the contrast and multiple scales in the problem. The offline spaces are constructed using Generalized Multiscale Finite Element Methods (GMsFEM). We show that if one chooses a sufficient number of offline basis functions, one can guarantee that additional online multiscale basis functions will reduce the error independently of the contrast. We note that the construction of online basis functions is motivated by the fact that the offline space construction does not take distant effects into account. Using the residual information, we can incorporate the distant information provided the offline approximation satisfies certain properties. In the paper, theoretical and numerical results are presented. Our numerical results show that if the offline space is sufficiently large (in terms of its dimension) such that the coarse space contains all multiscale spectral basis functions that correspond to small eigenvalues, then the error reduction from adding online multiscale basis functions is independent of the contrast. We discuss various ways of computing online multiscale basis functions, including the use of small-dimensional offline spaces.
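The online-enrichment idea can be illustrated on a toy linear-algebra analogue (not the actual GMsFEM construction, which solves local problems on coarse regions): project onto a small "offline" basis, then repeatedly append the current residual direction and observe the error decrease.

```python
import numpy as np

def reduced_solve(A, b, V):
    """Galerkin projection of A x = b onto the column space of V."""
    Ar = V.T @ A @ V
    xr = np.linalg.solve(Ar, V.T @ b)
    return V @ xr

def enrich_with_residual(A, b, V):
    """Online step (toy analogue of residual-driven enrichment):
    append the normalized current residual to the basis."""
    r = b - A @ reduced_solve(A, b, V)
    r /= np.linalg.norm(r)
    return np.column_stack([V, r])

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # SPD "fine-scale" operator
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)            # reference fine-scale solution

V = np.linalg.qr(rng.standard_normal((n, 5)))[0]   # small "offline" space
errs = []
for _ in range(5):
    errs.append(np.linalg.norm(x - reduced_solve(A, b, V)))
    V = enrich_with_residual(A, b, V)
```

Because the Galerkin residual is orthogonal to the current space, each appended direction genuinely enlarges the subspace, mirroring the paper's observation that residual information captures effects the offline space misses.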
Aristovich, K Y; Khan, S H, E-mail: kirill.aristovich.1@city.ac.u [School of Engineering and Mathematical Sciences, City University London, Northampton Square, London EC1V 0HB (United Kingdom)
2010-07-01
Complex multi-scale finite element (FE) analyses always involve a large number of elements and therefore require very long computation times. This is because effects at the smaller scales strongly influence the whole model and the larger scales, so mesh density must be as high as the smallest scale factor requires. A new submodelling routine has been developed to substantially decrease computation time without loss of accuracy in the overall solution. The presented approach allows the use of different mesh sizes on different scales, and therefore the full optimization of mesh density on each scale, and transfers results automatically between the meshes corresponding to the respective scales of the whole model. Unlike the classical submodelling routine, the new technique transfers not only boundary conditions but also volume results and forces (current density load in the case of electromagnetism), which allows the solution of the full Maxwell's equations in FE space. The approach was successfully implemented for the electromagnetic solution of the forward problem of Magnetic Field Tomography (MFT) based on Magnetoencephalography (MEG), where the scale of a single neuron was taken as the smallest and the scale of the whole-brain model as the largest. Computation time was reduced by a factor of about 100, compared with a direct computation without the submodelling routine, which would have required 10 million elements.
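The boundary-condition transfer at the heart of classical submodelling can be sketched in one dimension (a deliberately minimal illustration, not the authors' electromagnetic implementation): solve a coarse global problem, then drive a refined local model with boundary values interpolated from the coarse solution.

```python
import numpy as np

def solve_laplace_1d(x, u_left, u_right):
    """Finite-difference Dirichlet solve of u'' = 0 on the grid x."""
    n = len(x)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0], rhs[-1] = u_left, u_right
    for i in range(1, n - 1):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1], A[i, i], A[i, i + 1] = 1 / hl, -(1 / hl + 1 / hr), 1 / hr
    return np.linalg.solve(A, rhs)

# Global coarse solve on [0, 1].
xc = np.linspace(0.0, 1.0, 5)
uc = solve_laplace_1d(xc, 0.0, 1.0)

# Submodel: refine [0.25, 0.5] and drive it with boundary values
# interpolated from the coarse solution (classical submodelling).
xf = np.linspace(0.25, 0.5, 41)
uf = solve_laplace_1d(xf, np.interp(0.25, xc, uc), np.interp(0.5, xc, uc))
```

The paper's contribution goes further by also transferring volume results and source terms between scales, which this one-dimensional Dirichlet sketch does not attempt.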
Harada, Ryuhei; Kitao, Akio
2012-01-10
The fast-folding mechanism of a 35-residue mini-protein, villin headpiece subdomain (HP35), was investigated using folding free energy landscape analysis with the multiscale free energy landscape calculation method (MSFEL). A major and a minor folding pathway were deduced from the folding free energy landscape. In the major folding pathway, the formation of helices II and III was the rate-limiting step in the transition to an intermediate state, triggered by the folding of the PLWK motif. HP35 then folds into the native structure through the formation of the hydrophobic core located at the center of the three-helix bundle. Mutations in the motif and hydrophobic core that suppressed folding into the native state drastically changed the folding free energy landscape compared to the wild type protein. In the minor folding pathway, nucleation of the hydrophobic core preceded formation of the motif.
Quantum computation with two-dimensional graphene quantum dots
Li Jie-Sen; Li Zhi-Bing; Yao Dao-Xin
2012-01-01
We study an array of graphene nanosheets that form a two-dimensional S = 1/2 Kagome spin lattice used for quantum computation. The edge states of the graphene nanosheets are used to form quantum dots that confine electrons and perform the computation. We propose two schemes of bang-bang control to combat decoherence and realize gate operations on this array of quantum dots. It is shown that both schemes contain a great amount of information for quantum computation. The corresponding gate operations are also proposed.
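The effect of bang-bang control can be demonstrated with its simplest instance, a Hahn spin echo on a single qubit (an illustrative sketch, not the Kagome-lattice scheme of the paper): an instantaneous pi-pulse about x halfway through free evolution refocuses random sigma-z dephasing exactly.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)  # pi-pulse about x

def U_free(omega, t):
    """Free evolution under H = (omega / 2) * sigma_z."""
    return np.array([[np.exp(-1j * omega * t / 2), 0],
                     [0, np.exp(1j * omega * t / 2)]])

def fidelity(psi, phi):
    return abs(np.vdot(psi, phi)) ** 2

rng = np.random.default_rng(1)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
t = 1.0
f_free, f_echo = [], []
for omega in rng.normal(0.0, 5.0, 200):   # random dephasing rates
    # Uncontrolled evolution: the phase scrambles the state.
    f_free.append(fidelity(plus, U_free(omega, t) @ plus))
    # Bang-bang control: pi-pulse about x at t/2 (Hahn echo).
    psi = U_free(omega, t / 2) @ (sx @ (U_free(omega, t / 2) @ plus))
    f_echo.append(fidelity(plus, psi))
```

The echo fidelity is exactly 1 for every dephasing rate, while the uncontrolled fidelity averages to about one half, which is the basic mechanism the paper's control schemes exploit.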
Deisboeck, Thomas S; Wang, Zhihui; Macklin, Paul; Cristini, Vittorio
2011-08-15
Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insights in the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community.
Urologic applications of multiplanar and three-dimensional computed tomography.
Olson, M C; Posniak, H V
1995-01-01
The introduction of helical computed tomography (CT) has resulted in improved quality of multiplanar reformations and three-dimensional reconstructions in the chest and abdomen and has made CT angiography a clinical reality. These imaging techniques are useful for evaluating the urinary tract, adding a new dimension to its display and resulting in improved diagnosis of renal and perirenal disease. This article reviews the indications and techniques for multiplanar and three-dimensional CT in urology. The advantages and limitations are discussed, and normal and pathologic findings in the urinary tract are illustrated.
Brodu, Nicolas
2011-01-01
3D point clouds of natural environments relevant to geomorphology problems (rivers, cliffs, ...) often require classifying the data into elementary relevant classes. A typical example is the separation of riparian vegetation from soil in fluvial environments, the distinction between fresh surfaces and rockfall in cliff environments, or more generally the classification of surfaces according to their morphology (ripples, grain size, ...). Natural surfaces are very heterogeneous and their distinctive properties are seldom defined at a unique scale. We have thus defined a multi-scale measure of the point cloud dimensionality around each point. The dimensionality characterizes the local 3D organization of the point cloud and, at each scale, varies from 1D (points set along a line) to fully 3D (points filling a volume). We present the technique and illustrate its efficiency in separating riparian vegetation from ground and in classifying a mountain stream into vegetation, rock, gravel and water surface. The superiority of th...
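The eigenvalue-based dimensionality measure can be sketched as follows (a simplified single-point variant with illustrative feature definitions; the published classifier combines such features across many scales):

```python
import numpy as np

def local_dimensionality(cloud, center, radius):
    """Eigenvalue features of the local covariance within `radius`:
    returns (linearity, planarity, scatter), one common choice of
    dimensionality descriptors (a simplified variant, not the
    paper's exact definition)."""
    d = np.linalg.norm(cloud - center, axis=1)
    lam = np.sort(np.linalg.eigvalsh(np.cov(cloud[d <= radius].T)))[::-1]
    l1, l2, l3 = lam            # l1 >= l2 >= l3
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

rng = np.random.default_rng(2)
t = rng.uniform(-1, 1, 500)
# A noisy line and a noisy plane embedded in 3D.
line = np.column_stack([t, 0 * t, 0 * t]) + 1e-3 * rng.standard_normal((500, 3))
plane = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                         1e-3 * rng.standard_normal(500)])

# Multi-scale descriptor: the same features evaluated at several radii.
scales = [0.2, 0.5, 1.0]
feats_line = [local_dimensionality(line, np.zeros(3), r) for r in scales]
feats_plane = [local_dimensionality(plane, np.zeros(3), r) for r in scales]
```

Concatenating the per-scale features yields the multi-scale signature on which a classifier can separate, for example, scattered vegetation returns from planar ground.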
Ji-wook Jeong
2016-01-01
We propose a computer-aided detection (CADe) algorithm for microcalcification (MC) clusters in reconstructed digital breast tomosynthesis (DBT) images. The algorithm consists of prescreening, MC detection, clustering, and false-positive (FP) reduction steps. The DBT images containing MC-like objects were enhanced by a multiscale Hessian-based three-dimensional (3D) objectness response function, and a connected-component segmentation method was applied to extract the cluster seed objects as potential clustering centers of MCs. Second, a signal-to-noise ratio (SNR) enhanced image was also generated to detect the individual MC candidates and prescreen the MC-like objects. Each cluster seed candidate was prescreened by counting the individual MC candidates near the cluster seed object according to several microcalcification clustering criteria. As a second step, we introduced bounding boxes for the accepted seed candidates, clustered all overlapping cubes, and examined them. After the FP reduction step, the average number of FPs per case was estimated to be 2.47 per DBT volume, with a sensitivity of 83.3%.
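The cluster-seed prescreening step, counting individual MC candidates near each seed, might be sketched like this (coordinates, the radius, and the minimum-member threshold are illustrative, not the paper's tuned values):

```python
import numpy as np

def prescreen_seeds(seeds, candidates, radius=5.0, min_members=3):
    """Keep a cluster seed only if at least `min_members` individual
    MC candidates lie within `radius` of it: a simplified clustering
    criterion in the spirit of the CADe prescreening step."""
    kept = []
    for s in seeds:
        n = np.sum(np.linalg.norm(candidates - s, axis=1) <= radius)
        if n >= min_members:
            kept.append(s)
    return np.array(kept)

# Toy 3D positions: one seed surrounded by candidates, one isolated.
seeds = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
cands = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0], [49.0, 0.0, 0.0]])
kept = prescreen_seeds(seeds, cands)
```

In the full pipeline this counting step only gates which seeds proceed to dynamic clustering and FP reduction; it does not by itself decide the final detections.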
Three-dimensional analysis of craniofacial bones using three-dimensional computer tomography
Ono, Ichiro; Ohura, Takehiko; Kimura, Chu (Hokkaido Univ., Sapporo (Japan). School of Medicine) (and others)
1989-08-01
Three-dimensional computer tomography (3DCT) was performed in patients with various diseases to visualize stereoscopically the deformity of the craniofacial bones. The data obtained were analyzed by the 3DCT analyzing system. A new coordinate system was established using the median sagittal plane of the face (a plane passing through sella, nasion and basion) on the three-dimensional image. Three-dimensional profilograms were prepared for detailed analysis of the deformation of the craniofacial bones in cleft lip and palate, mandibular prognathia and hemifacial microsomia. In these patients, asymmetry in the frontal view and twisted, complicated deformities were observed, as well as deformity of the profiles in the anteroposterior and up-and-down directions. The newly developed technique allows three-dimensional visualization of changes in craniofacial deformity. It should aid in determining surgical strategy, including craniofacial and maxillofacial surgery, and in evaluating surgical outcome.
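Constructing a coordinate frame from three anatomical landmarks reduces to elementary vector algebra (the axis conventions below are illustrative, not necessarily those of the paper):

```python
import numpy as np

def craniofacial_frame(sella, nasion, basion):
    """Orthonormal frame whose x-y plane is the median sagittal plane
    through the three landmarks; the axis naming is a hypothetical
    convention for illustration."""
    ex = nasion - sella
    ex = ex / np.linalg.norm(ex)                  # in-plane axis
    normal = np.cross(nasion - sella, basion - sella)
    ez = normal / np.linalg.norm(normal)          # out of the sagittal plane
    ey = np.cross(ez, ex)                         # completes right-handed frame
    return np.vstack([ex, ey, ez])                # rows are unit axes

R = craniofacial_frame(np.array([0.0, 0.0, 0.0]),
                       np.array([10.0, 0.0, 0.0]),
                       np.array([3.0, -8.0, 0.0]))
```

Expressing landmark coordinates in such a patient-specific frame is what makes left-right asymmetry and profile deviations directly measurable.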
Pan, Wenxiao; Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em
2011-09-01
We compare the predictive capability of two mathematical models for red blood cells (RBCs), focusing on blood flow in capillaries and arterioles. Both RBC models, as well as their corresponding blood flows, are based on the dissipative particle dynamics (DPD) method, a coarse-grained molecular dynamics approach. The first model employs a multiscale description of the RBC (MS-RBC), with its membrane represented by hundreds or even thousands of DPD particles connected by springs into a triangular network, in combination with out-of-plane elastic bending resistance. Extra dissipation within the network accounts for membrane viscosity, while the characteristic biconcave RBC shape is achieved by imposing constraints of constant membrane area and constant cell volume. The second model is based on a low-dimensional description (LD-RBC) constructed as a closed torus-like ring of only 10 large DPD colloidal particles. They are connected into a ring by worm-like chain (WLC) springs combined with bending resistance. The LD-RBC model can be fitted to represent the entire range of nonlinear elastic deformations as measured by optical tweezers, for healthy RBCs and for RBCs infected in malaria. Suspensions of MS-RBCs model the dynamics and rheology of blood flow accurately for any vessel size, but this approach is computationally expensive for vessel diameters above 100 μm. Surprisingly, the much more economical suspensions of LD-RBCs also capture the blood flow dynamics and rheology accurately, except for small vessels of size comparable to the RBC diameter. In particular, the LD-RBC suspensions are shown to properly capture the experimental data for the apparent viscosity of blood and its cell-free layer (CFL) in tube flow. Taken together, these findings suggest a hierarchical approach to modeling blood flow in the arterial tree, whereby the MS-RBC model should be employed for capillaries and arterioles below 100 μm, the LD-RBC model for larger arterioles, and the continuum description for arteries.
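The worm-like chain (WLC) springs used in the LD-RBC model follow the standard Marko-Siggia interpolation formula; a minimal sketch with illustrative units:

```python
import numpy as np

def wlc_force(x, L_max, persistence=1.0, kBT=1.0):
    """Marko-Siggia worm-like-chain spring force for extension x,
    contour length L_max, and persistence length `persistence`
    (parameter values here are illustrative, not the paper's fit)."""
    r = x / L_max                         # fractional extension, r < 1
    return (kBT / persistence) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

ext = np.linspace(0.0, 0.9, 10)
forces = wlc_force(ext, L_max=1.0)
```

The force vanishes at zero extension and diverges as the extension approaches the contour length, which is what lets a 10-particle ring reproduce the strongly nonlinear optical-tweezers stretching response.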
Three-dimensional protein structure prediction: Methods and computational strategies.
Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C
2014-10-12
A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only its sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal in this work is to review the methods and computational strategies that are currently used in 3-D protein structure prediction.
Fedosov, Dmitry
2011-03-01
Computational biophysics is a large and rapidly growing area of computational physics. In this talk, we will focus on a number of biophysical problems related to blood cells and blood flow in health and disease. Blood flow plays a fundamental role in a wide range of physiological processes and pathologies in the organism. To understand and, if necessary, manipulate the course of these processes it is essential to investigate blood flow under realistic conditions including deformability of blood cells, their interactions, and behavior in the complex microvascular network. Using a multiscale cell model we are able to accurately capture red blood cell mechanics, rheology, and dynamics in agreement with a number of single cell experiments. Further, this validated model yields accurate predictions of the blood rheological properties, cell migration, cell-free layer, and hemodynamic resistance in microvessels. In addition, we investigate blood related changes in malaria, which include a considerable stiffening of red blood cells and their cytoadherence to endothelium. For these biophysical problems computational modeling is able to provide new physical insights and capabilities for quantitative predictions of blood flow in health and disease.
Standard Model in multi-scale theories and observational constraints
Calcagni, Gianluca; Rodríguez-Fernández, David
2015-01-01
We construct and analyze the Standard Model of electroweak and strong interactions in multi-scale spacetimes with (i) weighted derivatives and (ii) $q$-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multi-scale measures with only one characteristic time, length and energy scale $t_*$, $\\ell_*$ and $E_*$, we compute the Lamb shift in the hydrogen atom and constrain the multi-scale correction to the ordi...
Samala, Ravi K., E-mail: rsamala@umich.edu; Chan, Heang-Ping; Lu, Yao; Hadjiiski, Lubomir; Wei, Jun; Helvie, Mark A. [Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109-5842 (United States); Sahiner, Berkman [Center for Devices and Radiological Health, U.S. Food and Drug Administration, Maryland 20993 (United States)
2014-02-15
Purpose: To develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volumes enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with the MSBF-regularized simultaneous algebraic reconstruction technique (SART), which was designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined a multiscale Hessian response to enhance MCs by shape with bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated, and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on cluster size, CNR values, the number of MCs in the cluster, cluster shape, and cluster-based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare it with that of a previous study. Results: An unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs. For view-based detection, a
Three-dimensional computer visualization of forensic pathology data.
March, Jack; Schofield, Damian; Evison, Martin; Woodford, Noel
2004-03-01
Despite a decade of use in US courtrooms, it is only recently that forensic computer animations have become an increasingly important form of communication in legal spheres within the United Kingdom. Aims Research at the University of Nottingham has been influential in the critical investigation of forensic computer graphics reconstruction methodologies and techniques and in raising the profile of this novel form of data visualization within the United Kingdom. The case study presented demonstrates research undertaken by Aims Research and the Department of Forensic Pathology at the University of Sheffield, which aims to apply, evaluate, and develop novel 3-dimensional computer graphics (CG) visualization and virtual reality (VR) techniques in the presentation and investigation of forensic information concerning the human body. The inclusion of such visualizations within other CG or VR environments may ultimately provide the potential for alternative exploratory directions, processes, and results within forensic pathology investigations.
Computationally Driven Two-Dimensional Materials Design: What Is Next?
Pan, Jie [Materials Science; Lany, Stephan [Materials Science; Qi, Yue [Department of Chemical Engineering and Materials Science, Michigan State University, East Lansing, Michigan 48824, United States
2017-07-17
Two-dimensional (2D) materials offer many key advantages to innovative applications, such as spintronics and quantum information processing. Theoretical computations have accelerated 2D materials design. In this issue of ACS Nano, Kumar et al. report that ferromagnetism can be achieved in functionalized nitride MXene based on first-principles calculations. Their computational results shed light on a potentially vast group of materials for the realization of 2D magnets. In this Perspective, we briefly summarize the promising properties of 2D materials and the role theory has played in predicting these properties. In addition, we discuss challenges and opportunities to boost the power of computation for the prediction of the 'structure-property-process (synthesizability)' relationship of 2D materials.
Hyeon Seo
2017-05-01
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons' complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
Bhattacharjee, Satyaki; Matouš, Karel
2016-05-01
A new manifold-based reduced order model for nonlinear problems in multiscale modeling of heterogeneous hyperelastic materials is presented. The model relies on a global geometric framework for nonlinear dimensionality reduction (Isomap), and the macroscopic loading parameters are linked to the reduced space using a Neural Network. The proposed model provides both homogenization and localization of the multiscale solution in the context of computational homogenization. To construct the manifold, we perform a number of large three-dimensional simulations of a statistically representative unit cell using a parallel finite strain finite element solver. The manifold-based reduced order model is verified using common principles from the machine-learning community. Both homogenization and localization of the multiscale solution are demonstrated on a large three-dimensional example and the local microscopic fields as well as the homogenized macroscopic potential are obtained with acceptable engineering accuracy.
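A minimal Isomap pipeline, the nonlinear dimensionality-reduction step underlying the reduced-order model, can be sketched as follows (a generic textbook implementation, not the authors' code; the Neural Network mapping from loading parameters to the reduced space is omitted):

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal Isomap: k-NN graph -> geodesic distances
    (Floyd-Warshall) -> classical MDS embedding."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):                          # symmetric k-NN graph
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    for k in range(n):                          # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    H = np.eye(n) - np.ones((n, n)) / n         # classical MDS on geodesics
    B = -0.5 * H @ (G ** 2) @ H
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.abs(w[idx]))

# Toy "manifold": a 1D arc embedded in 3D; the leading Isomap
# coordinate should recover the arc-length ordering.
t = np.linspace(0, 3 * np.pi / 2, 40)
X = np.column_stack([np.cos(t), np.sin(t), 0 * t])
Y = isomap(X, n_neighbors=4, n_components=1)
```

In the paper's setting the inputs would be high-dimensional microstructural solution snapshots rather than points on an arc, but the embedding machinery is the same.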
Saeed Madani, Seyed; Swierczynski, Maciej Jozef; Kær, Søren Knudsen
2017-01-01
This paper gives insight into the discharge behaviour of lithium-ion batteries, building on investigations reported by other researchers [1–19]. The battery's discharge behaviour at various discharge rates is studied, and the surface-monitor plot of the discharge curve and the volume-monitor plots of the maximum temperature in the domain and the maximum temperature in the area are illustrated. Additionally, external and internal short-circuit treatments for three cases have been studied. The Dual-Potential Multi-Scale Multi-Dimensional (MSMD) Battery Model (BM) in the ANSYS FLUENT software was used … to analyse the discharge behaviour of lithium-ion batteries. The results show that the surface-monitor plot of the discharge curve at 1 C has a decreasing trend, and the volume-monitor plot of the maximum temperature in the domain shows a slightly increasing pattern over the simulation time. …
Dimensional measurement of micro-moulded parts by computed tomography
Ontiveros, S.; Yagüe-Fabra, J.A.; Jiménez, R.
2012-01-01
Computed tomography (CT) is progressively assuming an important role in metrology applications, and great efforts are being made to turn it into a reliable and standardized measuring technology. CT is typically used for non-destructive testing, but it is currently becoming very popular for dimensional metrology applications due to its strategic advantages, such as the capability of performing measurements on both the component's surface and volume, allowing the inspection of otherwise non-accessible internal features. This paper focuses on the dimensional verification of two micro-injection moulded components, selected from actual industrial productions, using CT metrological tools. For this purpose, several parts have been measured with two different CT machines, and the results have been compared with the measurements obtained by other measuring systems. The experimental work carried out …
Application of computer-aided multi-scale modelling framework – Aerosol case study
Heitzig, Martina; Sin, Gürkan; Glarborg, Peter
Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy and water. This trend is set to continue due to the substantial benefits computer-aided methods provide. The key prerequisite of computer-aided product-process engineering is, however, the availability of models of different types, forms and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task involving … The model development part supports the modeller in model documentation, construction and analysis. Different models for properties, phenomena, unit operations and processes can be developed and analysed here, or retrieved from model libraries. The model equations are introduced in a simple text format and are translated …
Multi-scale Computer Simulations to Study the Reaction Zone of Solid Explosives
Reaugh, J E
2006-06-23
We have performed computer simulations at several different characteristic length scales to study the coupled mechanical, thermal, and chemical behavior of explosives under shock and other pressure loadings. Our objective is to describe the underlying physics and chemistry of the hot-spot theory for solid explosives, with enough detail to make quantitative predictions of the expected result from a given pressure loading.
CATIA Core Tools Computer Aided Three-Dimensional Interactive Application
Michaud, Michel
2012-01-01
CATIA Core Tools: Computer-Aided Three-Dimensional Interactive Application explains how to use the essential features of this cutting-edge solution for product design and innovation. The book begins with the basics, such as launching the software, configuring the settings, and managing files. Next, you'll learn about sketching, modeling, drafting, and visualization tools and techniques. Easy-to-follow instructions along with detailed illustrations and screenshots help you get started using several CATIA workbenches right away. Reverse engineering--a valuable product development skill--is also covered in this practical resource.
A three-dimensional magnetostatics computer code for insertion devices.
Chubar, O; Elleaume, P; Chavanne, J
1998-05-01
RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.
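The boundary-integral flavour of such codes can be hinted at with the simplest building block, the field of a point dipole, which is the far-field limit of any magnetized volume (RADIA itself evaluates analytical formulae for polyhedral volumes; this is only an illustrative reduction):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Magnetic flux density B of a point dipole with moment m (A*m^2)
    at observation position r (m), in SI units."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn ** 3

m = np.array([0.0, 0.0, 1.0])
B_axis = dipole_field(m, np.array([0.0, 0.0, 1.0]))   # on the dipole axis
B_side = dipole_field(m, np.array([1.0, 0.0, 0.0]))   # in the equatorial plane
```

The familiar factor-of-two relation between the axial and equatorial fields drops out directly, and summing such analytical contributions over source elements is the essence of the boundary-integral approach that avoids meshing the surrounding air.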
Application of computer-aided multi-scale modelling framework – Aerosol case study
Heitzig, Martina; Sin, Gürkan; Glarborg, Peter
…the substantial benefits computer-aided methods provide. The key prerequisite of computer-aided product-process engineering is, however, the availability of models of different types, forms and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task involving numerous steps, expert skills and different modelling tools. This motivates the development of a computer-aided modelling framework that supports the user during model development, documentation, analysis, identification, application and re-use, with the goal of increasing the efficiency of the modelling process. … generation, optimal equation ordering and eigenvalue analysis. Once the models have been constructed and analysed, the modelling framework incorporates three application work-flows: identification, simulation and design. For these application work-flows, different solvers that can solve a large range of different …
2009-08-14
This work involves particle simulation of fluid flows at the mesoscale using the Smoothed Dissipative Particle Dynamics (SDPD) method developed by Español and Revenga; the coefficients in the SDPD model equations are given in Español, P. and Revenga, M., "Smoothed Dissipative Particle Dynamics", Physical Review, 2003, Vol. 67.
Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)
2017-07-01
The main difficulty in simulating diffusion processes at the molecular level in cell microdomains is the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they explore a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved through rare events, in which Brownian particles find small targets, characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally expensive and inefficient. Solving the associated partial differential equations is also difficult, due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute, as an example, the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.
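The second, Gillespie-style approach described in the abstract can be illustrated with a minimal sketch: once narrow escape theory supplies a Poissonian arrival rate per particle, the detailed Brownian search is replaced by exponential waiting times. The rate value and particle count below are hypothetical placeholders, not taken from the paper.

```python
import random

def simulate_arrivals(n_particles, rate, rng):
    """Draw one arrival time per particle from an exponential law with
    the coarse-grained narrow-escape rate, then sort them so that
    arrivals can be processed in order (e.g. to trigger release)."""
    return sorted(rng.expovariate(rate) for _ in range(n_particles))

def mean_first_arrival(n_particles, rate, n_runs=5000, seed=1):
    """Monte Carlo estimate of the mean time until the FIRST particle
    reaches a target, averaged over many independent runs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        total += simulate_arrivals(n_particles, rate, rng)[0]
    return total / n_runs
```

Because the minimum of N independent exponential clocks is itself exponential with rate N·k, the empirical mean first-arrival time should approach 1/(N·k), which is the kind of statistic the coarse-grained model delivers without tracking any Brownian trajectory.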
Schneider, E.; a Beccara, S.; Mascherpa, F.; Faccioli, P.
2016-07-01
We introduce a theoretical approach to study the quantum-dissipative dynamics of electronic excitations in macromolecules, which enables calculations in large systems over long time intervals. All the parameters of the underlying microscopic Hamiltonian are obtained from ab initio electronic structure calculations, ensuring chemical detail. In the short-time regime, the theory is solvable using a diagrammatic perturbation theory, enabling analytic insight. To compute the time evolution of the density matrix at intermediate times, typically ≲ ps, we develop a Monte Carlo algorithm free from any sign or phase problem, and hence computationally efficient. Finally, the dynamics in the long-time and large-distance limit can be studied by combining the microscopic calculations with renormalization group techniques to define a rigorous low-resolution effective theory. We benchmark our Monte Carlo algorithm against the results obtained in perturbation theory and using a semiclassical nonperturbative scheme. Then, we apply it to compute the intrachain charge mobility in a realistic conjugated polymer.
Multiscale spacetimes from first principles
Calcagni, Gianluca
2016-01-01
We formulate a theorem for the general profile of the Hausdorff and the spectral dimension of multiscale geometries, assuming a smooth and slow change of spacetime dimensionality at large scales. Agreement with various scenarios of quantum gravity is found. In particular, we derive uniquely the multiscale measure with log oscillations of theories of multifractional geometry. Predictivity of this class of models and falsifiability of their abundant phenomenology are thus established.
Michael V. Glazoff
2014-10-01
In the post-Fukushima world, the stability of materials under extreme conditions is an important issue for the safety of nuclear reactors. Because the nuclear industry is going to continue using advanced zirconium cladding materials for the foreseeable future, it becomes critical to gain a fundamental understanding of several interconnected problems. First, what are the thermodynamic and kinetic factors affecting the oxidation and hydrogen pick-up of these materials at normal and off-normal conditions, and in long-term storage? Secondly, what protective coatings (if any) could be used in order to gain extremely valuable time at off-normal conditions, e.g., when the temperature exceeds the critical value of 2200°F? Thirdly, the kinetics of oxidation of such a protective coating or braiding needs to be quantified. Lastly, even if some degree of success is achieved along this path, it is absolutely critical to have automated inspection algorithms allowing identification of cladding defects as soon as possible. This work strives to explore these interconnected factors from the most advanced computational perspective, utilizing such modern techniques as first-principles atomistic simulations, computational thermodynamics of materials, diffusion modeling, and the morphological algorithms of image processing for defect identification. Consequently, it consists of four parts dealing with these four problem areas, preceded by the introduction and formulation of the studied problems. In the 1st part an effort was made to employ computational thermodynamics and ab initio calculations to shed light upon the different stages of oxidation of zircaloys (2 and 4), the role of microstructure optimization in increasing their thermal stability, and the process of hydrogen pick-up, both in normal working conditions and in long-term storage. The 2nd part deals with the need to understand the influence and respective roles of the two different plasticity mechanisms in Zr nuclear alloys: twinning
DeSimone Douglas W
2007-10-01
Background: Tissue morphogenesis is a complex process whereby tissue structures self-assemble through the aggregate behaviors of independently acting cells responding to both intracellular and extracellular cues in their environment. During embryonic development, morphogenesis is particularly important for organizing cells into tissues, and although key regulatory events of this process are well studied in isolation, a number of important systems-level questions remain unanswered. This is due, in part, to a lack of integrative tools that enable the coupling of biological phenomena across spatial and temporal scales. Here, we present a new computational framework that integrates intracellular signaling information with multi-cell behaviors in the context of a spatially heterogeneous tissue environment. Results: We have developed a computational simulation of mesendoderm migration in the Xenopus laevis explant model, a well-studied biological model of tissue morphogenesis that recapitulates many features of this process during development in humans. The simulation couples, via a Java interface, an ordinary differential equation-based mass-action kinetics model of intracellular Wnt/β-catenin signaling with an agent-based model of mesendoderm migration across a fibronectin extracellular matrix substrate. The emergent cell behaviors in the simulation suggest the following properties of the system: maintaining the integrity of cell-to-cell contact signals is necessary to prevent fractionation of cells as they move; contact with the Fn substrate and the existence of a Fn gradient provide an extracellular feedback loop that governs migration speed; the incorporation of polarity signals is required for cells to migrate in the same direction; and a delicate balance of integrin and cadherin interactions is needed to reproduce experimentally observed migratory behaviors. Conclusion: Our computational framework couples two different
Multi-scale computational enzymology: enhancing our understanding of enzymatic catalysis.
Gherib, Rami; Dokainish, Hisham M; Gauld, James W
2013-12-31
Elucidating the origin of enzymatic catalysis stands as one of the great challenges of contemporary biochemistry and biophysics. The recent emergence of computational enzymology has enhanced our atomistic-level description of biocatalysis, as well as the kinetic and thermodynamic properties of its mechanisms. A diversity of computational methods allows the investigation of specific enzymatic properties. Small or large density functional theory models allow the comparison of a plethora of mechanistic reactive species and divergent catalytic pathways. Molecular docking can model different substrate conformations embedded within enzyme active sites and determine those with optimal binding affinities. Molecular dynamics simulations provide insights into the dynamics and roles of active site components as well as the interactions between substrates and enzymes. Hybrid quantum mechanical/molecular mechanical (QM/MM) methods can model reactions in active sites while considering the steric and electrostatic contributions of the surrounding environment. Drawing on previous studies from our group on the OvoA, EgtB, ThrRS, LuxS and MsrA enzymatic systems, we review how these methods can be used, either independently or cooperatively, to gain insights into enzymatic catalysis.
Generalized multiscale finite element methods (GMsFEM)
Efendiev, Yalchin R.
2013-10-01
In this paper, we propose a general approach called Generalized Multiscale Finite Element Method (GMsFEM) for performing multiscale simulations for problems without scale separation over a complex input space. As in multiscale finite element methods (MsFEMs), the main idea of the proposed approach is to construct a small dimensional local solution space that can be used to generate an efficient and accurate approximation to the multiscale solution with a potentially high dimensional input parameter space. In the proposed approach, we present a general procedure to construct the offline space that is used for a systematic enrichment of the coarse solution space in the online stage. The enrichment in the online stage is performed based on a spectral decomposition of the offline space. In the online stage, for any input parameter, a multiscale space is constructed to solve the global problem on a coarse grid. The online space is constructed via a spectral decomposition of the offline space and by choosing the eigenvectors corresponding to the largest eigenvalues. The computational saving is due to the fact that the construction of the online multiscale space for any input parameter is fast and this space can be re-used for solving the forward problem with any forcing and boundary condition. Compared with the other approaches where global snapshots are used, the local approach that we present in this paper allows us to eliminate unnecessary degrees of freedom on a coarse-grid level. We present various examples in the paper and some numerical results to demonstrate the effectiveness of our method. © 2013 Elsevier Inc.
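The offline/online construction described above can be sketched in a few lines of linear algebra: collect local snapshot solutions as columns, take a spectral decomposition of their correlation matrix, and keep the eigenvectors associated with the largest eigenvalues as the reduced basis. This is a generic proper-orthogonal-decomposition-style sketch of the idea, not the GMsFEM implementation itself, which builds problem-specific local spectral problems on coarse regions.

```python
import numpy as np

def offline_space(snapshots, k):
    """Build a k-dimensional reduced space from local snapshots.

    snapshots : (n_dofs, n_snap) array, columns are local solutions.
    Returns an (n_dofs, k) orthonormal basis spanning the directions
    with the largest spectral weight.
    """
    corr = snapshots.T @ snapshots            # small n_snap x n_snap problem
    w, v = np.linalg.eigh(corr)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]             # keep the k largest
    basis = snapshots @ v[:, idx]             # lift back to dof space
    basis /= np.linalg.norm(basis, axis=0, keepdims=True)
    return basis
```

In the online stage, such a basis would be assembled per coarse region and reused for any forcing or boundary condition, which is where the computational saving comes from.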
Glazoff, Michael Vasily
In the post-Fukushima world, thermal and structural stability of materials under extreme conditions is an important issue for the safety of nuclear reactors. Because the nuclear industry will continue using zirconium (Zr) cladding for the foreseeable future, it becomes critical to gain a fundamental understanding of several interconnected problems. First, what are the thermodynamic and kinetic factors affecting oxidation and hydrogen pick-up of these materials at normal and off-normal conditions, and in long-term storage? Secondly, what protective coatings could be used in order to gain valuable time at off-normal conditions (when temperature exceeds ~1200°C (2200°F))? Thirdly, the kinetics of the coating's oxidation must be understood. Lastly, one needs automated inspection algorithms allowing identification of cladding defects. This work attempts to explore the problem from a computational perspective, utilizing first-principles atomistic simulations, computational thermodynamics, plasticity theory, and morphological algorithms of image processing for defect identification. It consists of four parts dealing with these four problem areas, preceded by the introduction. In the 1st part, computational thermodynamics and ab initio calculations were used to shed light upon the different stages of zircaloy oxidation and hydrogen pickup, and upon microstructure optimization to increase thermal stability. The 2nd part describes the kinetic theory of oxidation of several materials considered prospective coatings for Zr alloys: SiC and ZrSiO4. The 3rd part deals with understanding the respective roles of the two different plasticity mechanisms in Zr nuclear alloys: twinning (at low T) and crystallographic slip (at higher T). For that goal, an advanced plasticity model was proposed. In the 4th part, projectional algorithms for defect identification in zircaloy coatings are described. Conclusions and recommendations are presented in the 5th part. This integrative approach's value
Computational Methods for Multi-dimensional Neutron Diffusion Problems
Song Han
2009-10-15
The lead-cooled fast reactor (LFR) has the potential to become one of the advanced reactor types of the future. Innovative computational tools for system design and safety analysis of such NPP systems are needed. One of the most popular trends is coupling multi-dimensional neutron kinetics (NK) with thermal-hydraulics (T-H) to enhance the capability of simulating NPP systems under abnormal conditions or during rare severe accidents. Therefore, the various numerical methods applied in the NK module should be re-evaluated to fit the scheme of a coupled code system. In the present work a neutronic module for the solution of two-dimensional steady-state multigroup diffusion problems in nuclear reactor cores is developed. The module can produce both direct fluxes and adjoints, i.e. neutron importances. Different numerical schemes are employed. A standard finite-difference (FD) approach is implemented first, mainly to serve as a reference for less computationally challenging schemes, such as transverse-integrated nodal methods (TINM) and boundary element methods (BEM), which are considered in the second part of the work. The validation of the proposed methods is carried out by comparing results for some reference structures. In particular, a critical problem for a homogeneous reactor, for which an analytical solution exists, is considered as a benchmark. The computational module is then applied to a fast-spectrum system having physical characteristics similar to the proposed European Lead-cooled System (ELSY) project. The results show the effectiveness of the numerical techniques presented. The flexibility and the possibility of obtaining neutron importances allow the use of the module for parametric studies, design assessments and integral parameter evaluations, as well as for future sensitivity and perturbation analyses, and as a shape solver for time-dependent procedures.
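The standard finite-difference treatment mentioned in the abstract reduces, in its simplest one-group 1-D form, to an eigenvalue problem solved by power iteration for the multiplication factor k-eff. The cross-section values in the sketch below are illustrative, not taken from the ELSY benchmark.

```python
import numpy as np

def diffusion_keff(n, h, diff, sigma_a, nu_sigma_f, tol=1e-8, max_it=500):
    """Power iteration for the one-group 1-D diffusion eigenvalue problem
    -D phi'' + Sigma_a phi = (1/k) nu*Sigma_f phi, with phi = 0 at both
    slab boundaries, discretized by central finite differences."""
    main = 2.0 * diff / h**2 + sigma_a
    off = -diff / h**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    phi = np.ones(n)
    k = 1.0
    for _ in range(max_it):
        phi_new = np.linalg.solve(A, nu_sigma_f * phi / k)   # fission source iteration
        k_new = k * (nu_sigma_f * phi_new).sum() / (nu_sigma_f * phi).sum()
        if abs(k_new - k) < tol:
            return k_new, phi_new
        phi = phi_new / np.linalg.norm(phi_new)
        k = k_new
    return k, phi
```

For a bare homogeneous slab this converges toward the analytic value nu*Sigma_f / (Sigma_a + D*B^2) with buckling B = pi/L, which is exactly the kind of analytically solvable benchmark the abstract describes.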
An, Gary; Nieman, Gary; Vodovotz, Yoram
2012-11-01
Sepsis accounts annually for nearly 10% of total U.S. deaths, costing nearly $17 billion/year. Sepsis is a manifestation of disordered systemic inflammation. Properly regulated inflammation allows for timely recognition and effective reaction to injury or infection, but inadequate or overly robust inflammation can lead to Multiple Organ Dysfunction Syndrome (MODS). There is an incongruity between the systemic nature of disordered inflammation (as the target of inflammation-modulating therapies), and the regional manifestation of organ-specific failure (as the subject of organ support), that presents a therapeutic dilemma: systemic interventions can interfere with an individual organ system's appropriate response, yet organ-specific interventions may not help the overall system reorient itself. Based on a decade of systems and computational approaches to deciphering acute inflammation, along with translationally-motivated experimental studies in both small and large animals, we propose that MODS evolves due to the feed-forward cycle of inflammation → damage → inflammation. We hypothesize that inflammation proceeds at a given, "nested" level or scale until positive feedback exceeds a "tipping point." Below this tipping point, inflammation is contained and manageable; when this threshold is crossed, inflammation becomes disordered, and dysfunction propagates to a higher biological scale (e.g., progressing from cellular, to tissue/organ, to multiple organs, to the organism). Finally, we suggest that a combination of computational biology approaches involving data-driven and mechanistic mathematical modeling, in close association with studies in clinically relevant paradigms of sepsis/MODS, are necessary in order to define scale-specific "tipping points" and to suggest novel therapies for sepsis.
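The hypothesized tipping point can be made concrete with a toy linear feedback model: damage drives inflammation with gain k_fb, and the system either resolves or escalates depending on whether that gain crosses a threshold. All rate constants below are illustrative, not fitted to any data from the paper.

```python
def simulate(k_fb, dt=0.01, t_end=40.0):
    """Forward-Euler sketch of the feed-forward loop
    inflammation -> damage -> inflammation.

    d(infl)/dt = k_fb * dam - 1.0 * infl   (damage fuels inflammation)
    d(dam)/dt  = 0.5 * infl - 0.2 * dam    (inflammation causes damage)
    """
    infl, dam = 0.5, 0.0          # initial insult, no damage yet
    for _ in range(int(t_end / dt)):
        d_infl = k_fb * dam - 1.0 * infl
        d_dam = 0.5 * infl - 0.2 * dam
        infl += dt * d_infl
        dam += dt * d_dam
    return infl, dam
```

For the decay rates chosen, the threshold sits at k_fb = 0.4, where the determinant of the linearized system changes sign: below it both variables relax back to zero (contained inflammation), above it they grow without bound (disordered, propagating inflammation).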
Gregg, Chelsea L; Recknagel, Andrew K; Butcher, Jonathan T
2015-01-01
Tissue morphogenesis and embryonic development are dynamic events challenging to quantify, especially considering the intricate events that happen simultaneously in different locations and time. Micro- and more recently nano-computed tomography (micro/nanoCT) has been used for the past 15 years to characterize large 3D fields of tortuous geometries at high spatial resolution. We and others have advanced micro/nanoCT imaging strategies for quantifying tissue- and organ-level fate changes throughout morphogenesis. Exogenous soft tissue contrast media enables visualization of vascular lumens and tissues via extravasation. Furthermore, the emergence of antigen-specific tissue contrast enables direct quantitative visualization of protein and mRNA expression. Micro-CT X-ray doses appear to be non-embryotoxic, enabling longitudinal imaging studies in live embryos. In this chapter we present established soft tissue contrast protocols for obtaining high-quality micro/nanoCT images and the image processing techniques useful for quantifying anatomical and physiological information from the data sets.
Three-dimensional computed tomography of the acetabulum
Pozzi Mucelli, R.S.; Muner, G.; Pozzi Mucelli, F.; Pozzi Mucelli, M.; Marotti, F.; Dalla Palma, L.
1986-08-01
Acetabular fractures are complex and are classified into different types. Conventional radiology is often inadequate to demonstrate and classify these fractures. Computed tomography (CT) has already been shown to be superior in this field. A further advantage of CT is the recent availability of three-dimensional (3D) images, reconstructed from axial CT scans by means of new software. The authors report the application of this new software to the study of the normal acetabulum and of patients with fractures. 3D images allow an effective demonstration of a fracture, its extension and the dislocation of bone fragments. The information is contained in one or a few images rather than many axial images. The role of 3D images may therefore be considered complementary to axial CT scans.
High-definition three-dimensional television disparity map computation
Chammem, Afef; Mitrea, Mihai; Prêteux, Françoise
2012-10-01
By reconsidering some approaches inherited from two-dimensional video and adapting them to stereoscopic video content and to the peculiarities of the human visual system, a new disparity map is designed. First, the inner relation between the left and right views is modeled by weights discriminating between horizontal and vertical disparities. Second, the block matching operation is achieved by considering a visually related measure (normalized cross-correlation) instead of the traditional pixel differences (mean squared error or sum of absolute differences). The advanced three-dimensional video new three-step search (3DV-NTSS) disparity map is benchmarked against two state-of-the-art algorithms, namely NTSS and full-search MPEG (FS-MPEG), by successively considering two corpora. The first corpus was organized during the 3DLive French national project and regroups 20 min of stereoscopic video sequences. The second one, of similar size, is provided by the MPEG community. The experimental results demonstrate the effectiveness of 3DV-NTSS in both reconstructed image quality (average gains between 3% and 7% in both PSNR and structural similarity, with a singular exception) and computational cost (search operation count reduced by average factors between 1.3 and 13). The 3DV-NTSS was finally validated by designing a watermarking method for high-definition 3-D TV content protection.
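The NCC-based block matching at the core of such a disparity search can be sketched as follows; for clarity the sketch does an exhaustive horizontal search rather than the three-step coarse-to-fine probing of the actual algorithm, and the block size and search range are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_horizontal_disparity(left, right, y, x, block=8, max_d=16):
    """Find the horizontal disparity d maximizing NCC between the block
    at (y, x) in the left view and the block at (y, x - d) in the right
    view. A three-step search would evaluate the same score, but only
    on a coarse-to-fine grid of candidates."""
    ref = left[y:y + block, x:x + block]
    best_d, best_score = 0, -2.0            # NCC lies in [-1, 1]
    for d in range(0, max_d + 1):
        if x - d < 0:
            break
        cand = right[y:y + block, x - d:x - d + block]
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d
```

Unlike sum-of-absolute-differences, NCC is invariant to local brightness and contrast changes between the two views, which is the perceptual motivation the abstract gives for using it.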
Vikas Tomer; John Renaud
2010-08-31
…temperature-dependent strength and microstructural stability also depended significantly upon the dispersion of new phases at grain boundaries. The material design framework incorporates high-temperature creep and mechanical strength data in order to develop a collaborative multiscale framework for morphology optimization. The work also incorporates a computer-aided material design dataset development procedure whereby a systematic dataset on material property and morphology correlations can be obtained according to a material processing scientist's requirements. Two different aspects covered under this requirement are: (1) performing morphology-related analyses at the nanoscale and at the microscale to develop a multiscale material design and analysis capability; (2) linking material behavior analyses with the developed design tool to form a set of material design problems that illustrate the range of material design dataset development that can be performed. Overall, a software-based methodology for designing the microstructure of particle-based ceramic nanocomposites has been developed. This methodology has been shown to predict the changes in phase morphology required for achieving an optimal balance of conflicting properties such as minimal creep strain rate and high fracture strength at high temperatures. The methodology incorporates complex material models, including atomistic approaches. It will be useful for designing materials for high-temperature applications, including those of interest to DoE, while significantly reducing the cost of expensive experiments.
Computational strategies for three-dimensional flow simulations on distributed computer systems
Sankar, Lakshmi N.; Weed, Richard A.
1995-08-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Meng, Jing; Jiang, Zibo; Wang, Lihong V.; Park, Jongin; Kim, Chulhong; Sun, Mingjian; Zhang, Yuanke; Song, Liang
2016-07-01
Photoacoustic computed tomography (PACT) has emerged as a unique and promising technology for multiscale biomedical imaging. To fully realize its potential for various preclinical and clinical applications, development of systems with high imaging speed, reasonable cost, and manageable data flow are needed. Sparse-sampling PACT with advanced reconstruction algorithms, such as compressed-sensing reconstruction, has shown potential as a solution to this challenge. However, most such algorithms require iterative reconstruction and thus intense computation, which may lead to excessively long image reconstruction times. Here, we developed a principal component analysis (PCA)-based PACT (PCA-PACT) that can rapidly reconstruct high-quality, three-dimensional (3-D) PACT images with sparsely sampled data without requiring an iterative process. In vivo images of the vasculature of a human hand were obtained, thus validating the PCA-PACT method. The results showed that, compared with the back-projection (BP) method, PCA-PACT required ~50% fewer measurements and ~40% less time for image reconstruction, and the imaging quality was almost the same as that for BP with full sampling. In addition, compared with compressed sensing-based PACT, PCA-PACT had approximately sevenfold faster imaging speed with higher imaging accuracy. This work suggests a promising approach for low-cost, 3-D, rapid PACT for various biomedical applications.
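The non-iterative character of a PCA-based reconstruction can be illustrated with a generic sketch: learn a PCA basis from fully sampled training images, then recover a sparsely sampled image by solving a small least-squares system on the sampled pixels only. This is a schematic of the idea, not the authors' reconstruction pipeline, and all array sizes are illustrative.

```python
import numpy as np

def pca_basis(training, k):
    """Top-k principal components of vectorized training images.

    training : (n_pixels, n_images) array, columns are images.
    Returns (basis, mean) with basis of shape (n_pixels, k).
    """
    mean = training.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(training - mean, full_matrices=False)
    return U[:, :k], mean.ravel()

def reconstruct_sparse(samples, sample_idx, basis, mean):
    """Non-iterative recovery: fit the PCA coefficients by least squares
    using only the sampled pixels, then expand to the full image."""
    A = basis[sample_idx, :]                      # rows at sampled pixels
    coeff, *_ = np.linalg.lstsq(A, samples - mean[sample_idx], rcond=None)
    return basis @ coeff + mean
```

The reconstruction cost is a single small least-squares solve per image, which is why such schemes avoid the long runtimes of iterative compressed-sensing solvers.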
Farr, W. R.
1971-01-01
Using only a one-dimensional subscripted variable, a FORTRAN subprogram was developed to linearly interpolate tabulated data for functions of four or fewer variables. The primary motivation was faster computation.
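The single-subscript storage trick described in the abstract can be sketched in the 2-D (bilinear) case: the table is kept in a flat list and addressed with index arithmetic, just as the FORTRAN routine addressed its one-dimensional subscripted variable. The helper names are our own.

```python
def _bracket(grid, v):
    """Index i with grid[i] <= v <= grid[i+1], clamped to the table."""
    i = 0
    while i < len(grid) - 2 and grid[i + 1] < v:
        i += 1
    return i

def interp2_flat(xs, ys, table, x, y):
    """Bilinear interpolation in a 2-D table stored as a flat list,
    where the value at grid point (i, j) lives at table[i*len(ys)+j]."""
    ny = len(ys)
    i, j = _bracket(xs, x), _bracket(ys, y)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    f = lambda a, b: table[a * ny + b]        # flat-index lookup
    return ((1 - tx) * (1 - ty) * f(i, j)
            + tx * (1 - ty) * f(i + 1, j)
            + (1 - tx) * ty * f(i, j + 1)
            + tx * ty * f(i + 1, j + 1))
```

Extending the same index arithmetic to three or four variables just adds factors to the flat index and corners to the weighted sum, which is how a single subscripted array can serve tables of up to four dimensions.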
2008-04-15
Multiscale kinetic theories for flows of biaxial liquid crystal polymers
Given the rising interest in the modeling of nanofluids of biaxial constituents...
The Feasibility of Multiscale Modeling of Tunnel Fires Using FDS 6
Vermesi, Izabella; Colella, Francesco; Rein, Guillermo
2014-01-01
However, the simplifications that are made in this work require further investigation in order to take full advantage of the potential of this computational method. INTRODUCTION: Multiscale modeling for tunnel flows and fires has previously been studied using RANS general-purpose CFD software... ...-dimensional quantity. The present study aimed to analyze whether or not the multiscale modeling approach for tunnel fires could be successfully applied in Fire Dynamics Simulator 6 (FDS6), an open-source, fire-specific CFD software [4] that is easily accessible to modeling specialists. METHOD: The implementation...
Miyazawa, Yasumasa; Varlamov, Sergey M.; Miyama, Toru; Guo, Xinyu; Hihara, Tsutomu; Kiyomatsu, Keiji; Kachi, Misako; Kurihara, Yukio; Murakami, Hiroshi
2017-06-01
A multi-scale three-dimensional variational (MS-3DVAR) scheme is developed to assimilate high-resolution Himawari-8 sea surface temperature (SST) data for the first time into an operational ocean nowcast/forecast system targeting the North Western Pacific, JCOPE2. MS-3DVAR improves the representation of the Kuroshio path south of Japan, its associated sea level variations, and temperature/salinity profiles south of Japan, in the Kuroshio/Oyashio mixed water region, and in the Japan Sea, compared with the products of the traditional single-scale 3DVAR. Validation results demonstrate that MS-3DVAR effectively assimilates the sparsely distributed in situ temperature and salinity profile data by spreading the information over large scales and by representing the detailed information near the measurement points. MS-3DVAR succeeds in assimilating the Himawari-8 SST product without the noisy features caused by cloud effects. We also find that MS-3DVAR is more effective for estimating oceanic conditions in regions with smaller mesoscale variability, including the mixed water region and the Japan Sea, than south of Japan.
Analysis of secondary coxarthrosis by three dimensional computed tomography
Hemmi, Osamu [Keio Univ., Tokyo (Japan). School of Medicine]
1997-11-01
The majority of coxarthrosis in Japan is due to congenital dislocation of the hip and acetabular dysplasia. Until now coxarthrosis has been chiefly analyzed on the basis of anterior-posterior radiographs. By using three-dimensional (3D) CT, it was possible to analyze the morphological features of secondary coxarthrosis more accurately, and by using new computer graphics software, it was possible to display the contact area in the hip joint and observe changes associated with progression of the stages of the disease. There were 34 subjects (68 joints), all of whom were women. The CT data were read into a workstation, and 3D reconstruction was achieved with hip surgery simulation software (SurgiPlan). Pelvic inclination, acetabular anteversion, seven parameters indicating the coverage of the femoral head, and two indicating the position of the hip joint in the pelvis were measured. The results showed that secondary coxarthrosis is characterized not only by lateral malposition of the hip joint according to the pelvic coordinates, but by anterior malposition as well. Many other measurements provided 3D information on the acetabular dysplasia. Many of them were correlated with the CE angle on plain radiographs. Furthermore, a strong correlation was not found between anterior and posterior acetabular coverage of the femoral head. In addition, SurgiPlan's distance mapping function enabled 3D observation of the pattern of progression of arthrosis based on the pattern of progression of joint space narrowing. (author)
Somchaipeng, Kerawit; Sporring, Jon; Johansen, Peter
2007-01-01
We propose MultiScale Singularity Trees (MSSTs) as a structure to represent images, and we propose an algorithm for image comparison based on comparing MSSTs. The algorithm is tested on 3 public image databases and compared to 2 state-of-the-art methods. We conclude that the computational complexity of our algorithm only allows for the comparison of small trees, and that the results of our method are comparable with the state of the art while using much fewer parameters for image representation.
Ana Belén Petro
2014-04-01
While retinex theory aimed at explaining human color perception, its derivations have led to efficient algorithms enhancing local image contrast, permitting, among other features, one to "see in the shadows". Among these derived algorithms, Multiscale Retinex is probably the most successful center-surround image filter. In this paper, we offer an analysis and implementation of Multiscale Retinex. We point out and resolve some ambiguities of the method. In particular, we show that the important color-correction final step of the method can be seriously improved. This analysis allows us to arrive at an automatic implementation of Multiscale Retinex that is as faithful as possible to the one described in the original paper. Overall, this implementation delivers excellent results and confirms the validity of Multiscale Retinex for image color restoration and contrast enhancement. Nevertheless, while the method parameters can be fixed, we show that a crucial choice must be left to the user, depending on the lighting conditions of the image: the method must either be applied to each color channel independently if color balance is required, or to the luminance only if the goal is local contrast enhancement. Thus, we propose two slightly different algorithms to deal with both cases.
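The center-surround operation at the heart of Multiscale Retinex, the log of the image minus the log of a blurred "surround", averaged over several surround scales, can be sketched on a 1-D signal. A box blur stands in for the Gaussian surround here purely for brevity; the scales and the 1-D setting are illustrative assumptions:

```python
import math

def box_blur(img, radius):
    """Simple 1-D box blur standing in for the Gaussian surround."""
    n = len(img)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def multiscale_retinex(img, radii):
    """Average of log(image) - log(surround) over several surround scales."""
    n = len(img)
    result = [0.0] * n
    for r in radii:
        blurred = box_blur(img, r)
        for i in range(n):
            result[i] += (math.log(img[i]) - math.log(blurred[i])) / len(radii)
    return result

# Sanity check: on a uniform image the surround equals the image, so the output is zero
flat = [10.0] * 16
out = multiscale_retinex(flat, radii=[1, 2, 4])
```

The full method additionally applies the color-correction step discussed in the paper; this sketch covers only the multiscale center-surround filtering.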
Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation
Tchelepi, Hamdi
2014-11-14
A multiscale linear-solver framework for the pressure equation associated with flow in highly heterogeneous porous formations was developed. The multiscale-based approach is cast in a general algebraic form, which facilitates integration of the new scalable linear solver in existing flow simulators. The Algebraic Multiscale Solver (AMS) is employed as a preconditioner within a multi-stage strategy. The formulations investigated include the standard MultiScale Finite-Element (MSFE) and MultiScale Finite-Volume (MSFV) methods. The local-stage solvers include incomplete factorization and the so-called Correction Functions (CF) associated with the MSFV approach. Extensive testing of AMS as an iterative linear solver indicates excellent convergence rates and computational scalability. AMS compares favorably with advanced Algebraic MultiGrid (AMG) solvers for highly detailed three-dimensional heterogeneous models. Moreover, AMS is expected to be especially beneficial in solving time-dependent problems of coupled multiphase flow and transport in large-scale subsurface formations.
DMS: A Package for Multiscale Molecular Dynamics
Somogyi, Endre; Ortoleva, Peter J
2013-01-01
Advances in multiscale theory and computation provide a novel paradigm for simulating many-classical-particle systems. The Deductive Multiscale Simulator (DMS) is a multiscale molecular dynamics (MD) program built on two of these advances, i.e., multiscale Langevin (ML) and multiscale factorization (MF). Both capture the coevolution of the coarse-grained (CG) state and the microstate, which gives these methods great efficiency over conventional MD. Neither involves the introduction of phenomenological governing equations for the CG state, with attendant uncertainty in both the form of the governing equations and the data needed to calibrate them. The design and implementation of DMS as an open-source computational platform is presented here. DMS is written in Python, uses Gromacs for the microscale phase, and then advances the microstate via a CG-guided evolution. DMS uses MDAnalysis, a Python library for analyzing MD trajectories, to perform the computations required to construct CG-related variables...
Multiscale empirical interpolation for solving nonlinear PDEs
Calo, Victor M.
2014-12-01
In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully resolved fine-scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's methods and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
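The empirical interpolation idea, sampling the nonlinear function at a few greedily chosen points and recovering its expansion coefficients from a small linear system, can be illustrated with a DEIM-style point selection. This is a simplified sketch on a polynomial basis, not the GMsFEM-coupled version developed in the paper:

```python
import numpy as np

def deim_points(U):
    """Greedy (DEIM-style) selection of interpolation indices for basis columns U."""
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Interpolate column j with the points chosen so far; pick the worst residual
        c = np.linalg.solve(U[p, :j], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return p

def deim_approx(f_at_points, U, p):
    """Recover the full vector f from its values at the selected points, for f in span(U)."""
    c = np.linalg.solve(U[p, :], f_at_points)
    return U @ c

# Demo on a basis of sampled monomials
x = np.linspace(0.0, 1.0, 50)
U = np.column_stack([x**k for k in range(4)])
p = deim_points(U)
f = 1.0 + 2.0 * x - 3.0 * x**2 + 0.5 * x**3   # lies in span(U)
f_rec = deim_approx(f[p], U, p)
```

Only four point evaluations of f are needed to recover all fifty entries, which is the mechanism that keeps the online cost proportional to the coarse problem size.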
ANALYSIS OF MULTISCALE METHODS
Wei-nan E; Ping-bing Ming
2004-01-01
The heterogeneous multiscale method gives a general framework for the analysis of multiscale methods. In this paper, we demonstrate this by applying this framework to two canonical problems: The elliptic problem with multiscale coefficients and the quasicontinuum method.
Computer Generated Holography as a Three-Dimensional Display Medium
1990-12-01
A series of two-dimensional images is reflected on an object screen, resulting in an autostereoscopic, or true three-dimensional, image. The advantages of... an attractive target to optimize. Jack Ritter has suggested a fast approximation to 3D Euclidean distance calculations (10:432). His method uses no...
Min Yun, B; Aidun, Cyrus K; Yoganathan, Ajit P
2014-10-01
Bileaflet mechanical heart valves (BMHVs) are among the most popular prostheses to replace defective native valves. However, complex flow phenomena caused by the prosthesis are thought to induce serious thromboembolic complications. This study aims at employing a novel multiscale numerical method that models realistic sized suspended platelets for assessing blood damage potential in flow through BMHVs. A previously validated lattice-Boltzmann method (LBM) is used to simulate pulsatile flow through a 23 mm St. Jude Medical (SJM) Regent™ valve in the aortic position at very high spatiotemporal resolution with the presence of thousands of suspended platelets. Platelet damage is modeled for both the systolic and diastolic phases of the cardiac cycle. No platelets exceed activation thresholds for any of the simulations. Platelet damage is determined to be particularly high for suspended elements trapped in recirculation zones, which suggests a shift of focus in blood damage studies away from instantaneous flow fields and toward high flow mixing regions. In the diastolic phase, leakage flow through the b-datum gap is shown to cause highest damage to platelets. This multiscale numerical method may be used as a generic solver for evaluating blood damage in other cardiovascular flows and devices.
Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Cascade, Philip N.; Sahiner, Berkman; Wei, Jun; Ge, Jun; Kazerooni, Ella A.
2007-03-01
CT pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided detection (CAD) system to assist radiologists in PE detection in CTPA images. 3D multiscale filters, in combination with a newly designed response function derived from the eigenvalues of Hessian matrices, are used to enhance vascular structures, including vessel bifurcations, and to suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. A hierarchical EM estimation is then used to segment the vessels by extracting the high-response voxels at each scale. The segmented vessels are pre-screened for suspicious PE areas using a second, adaptive multiscale EM estimation. A rule-based false positive (FP) reduction method was designed to identify the true PEs based on the features of PEs and vessels. 43 CTPA scans were used as an independent test set to evaluate the performance of PE detection. Experienced chest radiologists identified the PE locations, which were used as the "gold standard". 435 PEs were identified in the artery branches, of which 172 and 263 were subsegmental and proximal to the subsegmental, respectively. A computer-detected volume was considered a true positive (TP) when it overlapped with 10% or more of the gold-standard PE volume. Our preliminary test results show that, at an average of 33 and 24 FPs/case, the sensitivities of our PE detection method were 81% and 78%, respectively, for proximal PEs, and 79% and 73%, respectively, for subsegmental PEs. The study demonstrates the feasibility of the automated method for accurately identifying PEs on CTPA images. Further study is underway to improve the sensitivity and reduce the FPs.
Performance of parallel computation using CUDA for solving the one-dimensional elasticity equations
Darmawan, J. B. B.; Mungkasi, S.
2017-01-01
In this paper, we investigate the performance of parallel computation in solving the one-dimensional elasticity equations. Elasticity equations are widely used in engineering science, and solving them quickly and efficiently is desirable. Therefore, we propose the use of parallel computation. Our parallel computation uses NVIDIA's CUDA. Our research results show that parallel computation using CUDA has a great advantage and is powerful when the computation is of large scale.
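The abstract does not give the discretization, but the kind of update that maps naturally onto CUDA threads is a pointwise explicit stencil, where every grid value is updated independently of the others. A CPU sketch of one such step for the 1-D wave (elasticity) equation, with NumPy's vectorized arithmetic standing in for the GPU kernel; the scheme, grid, and parameters are illustrative assumptions:

```python
import numpy as np

def step(u, v, dt, dx, c):
    """One explicit step of u_tt = c^2 u_xx on a periodic grid.
    Every entry of lap, v, and u updates independently, which is what
    makes this scheme embarrassingly parallel (one CUDA thread per node)."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    v = v + dt * c**2 * lap        # update velocity from the stencil
    u = u + dt * v                 # then displacement from the new velocity
    return u, v

n, c = 128, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                  # CFL-limited time step
u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n, endpoint=False))
v = np.zeros(n)
for _ in range(100):
    u, v = step(u, v, dt, dx, c)
```

In a CUDA implementation the body of `step` becomes a kernel in which each thread computes one entry of the Laplacian and the two updates; the CFL restriction on `dt` is what keeps the explicit update stable.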
Multiscale modelling of evolving foams
Saye, R. I.; Sethian, J. A.
2016-06-01
We present a set of multi-scale interlinked algorithms to model the dynamics of evolving foams. These algorithms couple the key effects of macroscopic bubble rearrangement, thin film drainage, and membrane rupture. For each of the mechanisms, we construct consistent and accurate algorithms, and couple them together to work across the wide range of space and time scales that occur in foam dynamics. These algorithms include second order finite difference projection methods for computing incompressible fluid flow on the macroscale, second order finite element methods to solve thin film drainage equations in the lamellae and Plateau borders, multiphase Voronoi Implicit Interface Methods to track interconnected membrane boundaries and capture topological changes, and Lagrangian particle methods for conservative liquid redistribution during rearrangement and rupture. We derive a full set of numerical approximations that are coupled via interface jump conditions and flux boundary conditions, and show convergence for the individual mechanisms. We demonstrate our approach by computing a variety of foam dynamics, including coupled evolution of three-dimensional bubble clusters attached to an anchored membrane and collapse of a foam cluster.
THREE DIMENSIONAL MULTIPHASE COMPUTATIONAL FLUID DYNAMICS ANALYSIS OF VENTILATED SUPERCAVITATION
YANG Wugang; ZHANG Yuwen; YANG Jie; ZUO Liankai
2008-01-01
For some vehicles travelling through water, it is advantageous to enclose the vehicle in a supercavity to reduce the drag acting on it. Artificial ventilation is the most effective method for generating and controlling the supercavity. This paper focuses on the numerical simulation of the flow field around a three-dimensional body. The method is based on a multiphase computational fluid dynamics (CFD) model combined with a turbulence model and the full cavity model. The cavity flow field is simulated by solving the compressible Navier-Stokes equations. The fundamental similarity parameters of ventilated supercavity flows, including cavitation number, Froude number, ventilation rate, and drag coefficient, are all investigated numerically for steady flow in a gravity field. The following simulation results are discussed in section 3: the variation of the cavitation number and the supercavity's relative diameter with ventilation rate (subtopic 3.1); the drag coefficient versus the cavitation number (subtopic 3.2); and the deformation of the supercavity axis caused by gravitational effects for three fixed Froude numbers, 2.8, 3.4, and 4.2 (subtopic 3.3). In subtopic 3.2, numerical drag reduction ratios are compared with experiments conducted in a hydrodynamic tunnel and a towing tank, respectively. In subtopic 3.3, the discussion of the gravitational effect on the axis deformation of the supercavity is summarized as follows: for smaller Froude numbers, the inclination of the cavity axis increases monotonically with increasing horizontal length and reaches its maximum at the end of the supercavity; this deformation is almost completely negligible when the Froude number Fr > 7. The comparisons with the experimental data from the hydrodynamic tunnel and the towing tank indicate that the present method is effective for predicting the flow around a ventilated supercavity, and that the numerical results are in good agreement...
Analog computation through high-dimensional physical chaotic neuro-dynamics
Horio, Yoshihiko; Aihara, Kazuyuki
2008-07-01
Conventional von Neumann computers have difficulty in solving complex and ill-posed real-world problems. However, living organisms often face such problems in real life, and must quickly obtain suitable solutions through physical, dynamical, and collective computations involving vast assemblies of neurons. These highly parallel computations through high-dimensional dynamics (computation through dynamics) are completely different from the numerical computations on von Neumann computers (computation through algorithms). In this paper, we explore a novel computational mechanism with high-dimensional physical chaotic neuro-dynamics. We physically constructed two hardware prototypes using analog chaotic-neuron integrated circuits. These systems combine analog computations with chaotic neuro-dynamics and digital computation through algorithms. We used quadratic assignment problems (QAPs) as benchmarks. The first prototype utilizes an analog chaotic neural network with 800-dimensional dynamics. An external algorithm constructs a solution for a QAP using the internal dynamics of the network. In the second system, 300-dimensional analog chaotic neuro-dynamics drive a tabu-search algorithm. We demonstrate experimentally that both systems efficiently solve QAPs through physical chaotic dynamics. We also qualitatively analyze the underlying mechanism of the highly parallel and collective analog computations by observing global and local dynamics. Furthermore, we introduce spatial and temporal mutual information to quantitatively evaluate the system dynamics. The experimental results confirm the validity and efficiency of the proposed computational paradigm with the physical analog chaotic neuro-dynamics.
Three dimensional computation of turbulent flow in meandering channels
Van Thinh Nguyen
2000-07-01
In this study a finite element calculation procedure, together with the two-equation k-ε turbulence model and a mixing-length model, is applied to the problem of simulating 3D turbulent flow in closed and open meandering channels. Near the wall a special approach is applied in order to overcome the weakness of the standard k-ε model in the viscous sublayer. A specialized shape function is used in the special near-wall elements to capture accurately the strong variations of the mean flow variables in the viscosity-affected near-wall region. Based on the analogy of water and air flows, some characteristics of hydraulic problems can be examined in aerodynamic models. To study the relationships between an aerodynamic and a hydraulic model, many experiments have been carried out by the Federal Waterways Engineering and Research Institute of Karlsruhe, Germany. In order to test and examine the results of these physical models, an appropriate numerical model is necessary; the numerical approach will also reveal the limitations of the experimental setup. The similarities and differences between an aerodynamic and a hydraulic model are identified through numerical computations and are described in this study. Despite the similarities between flow in closed channels and flow in open channels, the presence of a free surface in the open channel introduces serious complications for three-dimensional computation. A new unknown, which represents the position of the nodes on this free surface, is introduced, and a special approach is required for solving it. A surface-tracking procedure is applied to the free-surface boundary as a moving boundary. Grid nodes on the free surface are free to move in such a way that they remain on the spines, the generator lines that define the allowed motion of the nodes on the free surface. (orig.) [German] Numerical simulation is today an important tool for the...
Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich
2016-05-17
Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
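The patent's "filter the N-dimensional input signal relative to at least one N-dimensional form" is the language of mathematical morphology, where the "form" is a structuring element. A minimal 1-D sketch of the erode/dilate building blocks is shown below; the actual "projectional" algorithms are not specified in the abstract, so this only illustrates the general idea of filtering against a form and reporting the differences:

```python
def erode(signal, width):
    """Grayscale erosion of a 1-D signal by a flat structuring element."""
    half = width // 2
    n = len(signal)
    return [min(signal[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def dilate(signal, width):
    """Grayscale dilation, the dual of erosion."""
    half = width // 2
    n = len(signal)
    return [max(signal[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def opening(signal, width):
    """Opening (erosion then dilation) removes features narrower than the element."""
    return dilate(erode(signal, width), width)

# A narrow spike on a flat baseline is removed by opening;
# the difference between input and output flags the detected feature.
signal = [0, 0, 0, 5, 0, 0, 0, 0]
opened = opening(signal, 3)
feature = [s - o for s, o in zip(signal, opened)]
```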
Kirubanandham, A.; Lujan-Regalado, I.; Vallabhaneni, R.; Chawla, N.
2016-11-01
Decreasing pitch size in electronic packaging has resulted in a drastic decrease in solder volumes. The Sn grain crystallography and fraction of intermetallic compounds (IMCs) in small-scale solder joints evolve much differently at the smaller length scales. A cross-sectional study limits the morphological analysis of microstructural features to two dimensions. This study utilizes a serial sectioning technique in conjunction with electron backscatter diffraction to investigate the crystallographic orientation of both Sn grains and Cu6Sn5 IMCs in Cu/Pure Sn/Cu solder joints in three dimensions (3D). Quantification of grain aspect ratio is affected by local cooling rate differences within the solder volume. Backscatter electron imaging and focused ion beam serial sectioning enabled the visualization of the morphology of both nanosized Cu6Sn5 IMCs and the hollow hexagonal-morphology-type Cu6Sn5 IMCs in 3D. Quantification and visualization of microstructural features in 3D thus enable us to better understand the microstructure and deformation mechanics within these small-scale solder joints.
Numerical Computation of High Dimensional Solitons Via Drboux Transformation
Zixiang Zhou
1997-01-01
Darboux transformations give explicit soliton solutions of nonlinear partial differential equations. Using numerical computation in each step of constructing the Darboux transformation, one can obtain graphs of the solitons in practice. In n dimensions (n ≥ 3), this method greatly increases the speed and reduces the memory usage of the computation compared with software for algebraic computation. A technical problem concerning floating-point overflow is discussed.
Multiscale model reduction for shale gas transport in fractured media
Akkutlu, I. Y.
2016-05-18
In this paper, we develop a multiscale model reduction technique that describes shale gas transport in fractured media. Due to the pore-scale heterogeneities and processes, we use upscaled models to describe the matrix. We follow our previous work (Akkutlu et al. Transp. Porous Media 107(1), 235–260, 2015), where we derived an upscaled model in the form of a generalized nonlinear diffusion model to describe the effects of kerogen. To model the interaction between the matrix and the fractures, we use the Generalized Multiscale Finite Element Method (Efendiev et al. J. Comput. Phys. 251, 116–135, 2013, 2015). In this approach, the matrix-fracture interaction is modeled via local multiscale basis functions. In Efendiev et al. (2015), we developed the GMsFEM and applied it to linear flows with horizontal or vertical fracture orientations aligned with a Cartesian fine grid. The approach in Efendiev et al. (2015) does not allow handling arbitrary fracture distributions. In this paper, we (1) consider arbitrary fracture distributions on an unstructured grid; (2) develop GMsFEM for nonlinear flows; and (3) develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region represents the degrees of freedom needed to achieve a certain error threshold. Our approach is adaptive in the sense that multiscale basis functions can be added in regions of interest. Numerical results for a two-dimensional problem are presented to demonstrate the efficiency of the proposed approach. © 2016 Springer International Publishing Switzerland
Development of three-dimensional computed tomography system using TNRF2 of JRR-3M
Murata, Yutaka; Mochiki, Koh-ichi [Musashi Inst. of Tech., Tokyo (Japan); Matsubayashi, Masahito
1998-01-01
A three-dimensional filtering engine, a convolution engine, and a back-projection engine were developed for real-time signal processing in three-dimensional computed tomography. The performance of the system was measured, and a throughput of 0.5 seconds per cross-sectional data set was attained. (author)
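A back-projection engine of the kind described can be illustrated in miniature: forward-project a slice along two orthogonal directions, then smear ("back-project") each projection across the image, where the projections reinforce each other at the source location. This toy deliberately omits the convolution (filtering) step that the hardware pipeline performs before back projection:

```python
def back_project(proj_rows, proj_cols):
    """Unfiltered back projection from two orthogonal parallel-beam views."""
    n = len(proj_rows)
    return [[proj_rows[i] + proj_cols[j] for j in range(n)] for i in range(n)]

# Forward-project a single bright voxel in a 5x5 slice, then back-project.
n = 5
image = [[0.0] * n for _ in range(n)]
image[2][3] = 1.0
proj_rows = [sum(row) for row in image]                              # view along rows
proj_cols = [sum(image[i][j] for i in range(n)) for j in range(n)]   # view along columns
recon = back_project(proj_rows, proj_cols)                           # peak at (2, 3)
```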
3D Multiscale Modelling of Angiogenesis and Vascular Tumour Growth
Perfahl, H.
2012-11-01
We present a three-dimensional, multiscale model of vascular tumour growth, which couples nutrient/growth factor transport, blood flow, angiogenesis, vascular remodelling, movement of and interactions between normal and tumour cells, and nutrient-dependent cell cycle dynamics within each cell. We present computational simulations which show how a vascular network may evolve and interact with tumour and healthy cells. We also demonstrate how our model may be combined with experimental data, to predict the spatio-temporal evolution of a vascular tumour.
Cyclic Matching Pursuits with Multiscale Time-frequency Dictionaries
Sturm, Bob L.; Christensen, Mads Græsbøll
2010-01-01
We generalize cyclic matching pursuit (CMP), propose an orthogonal variant, and examine their performance using multiscale time-frequency dictionaries in the sparse approximation of signals. Overall, we find that the cyclic approach of CMP produces signal models that have a much lower approximation error than existing greedy iterative descent methods such as matching pursuit (MP), and are competitive with models found using orthogonal MP (OMP) and orthogonal least squares (OLS). This implies that CMP is a strong alternative to the more computationally complex approaches of OMP and OLS for modeling high-dimensional signals.
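Plain matching pursuit, the greedy baseline that the cyclic variant improves on, fits in a few lines. CMP additionally revisits the already-selected atoms in cyclic passes to refine their coefficients; that refinement loop is omitted from this sketch:

```python
import numpy as np

def matching_pursuit(x, D, n_iter):
    """Greedy matching pursuit: approximate x as D @ coef,
    assuming the dictionary D has unit-norm columns (atoms)."""
    residual = x.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual              # correlation with every atom
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coef[k] += corr[k]                 # update its coefficient
        residual -= corr[k] * D[:, k]      # subtract its contribution
    return coef, residual

# Tiny demo: with an orthonormal dictionary, MP recovers the signal exactly
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, -1.0])
coef, residual = matching_pursuit(x, D, n_iter=2)
```

For non-orthogonal (e.g. multiscale time-frequency) dictionaries the greedy choices interact, which is exactly the situation where the cyclic coefficient refinement of CMP pays off.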
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
The number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multiscale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. The multi-scale method is also applied to a two-dimensional problem of jet formation around a copper notch under strong impact.
Structure and computation of two-dimensional incompressible extended MHD
Grasso, D; Abdelhamid, H M; Morrison, P J
2016-01-01
A comprehensive study of a reduced version of Lust's equations, the extended magnetohydrodynamic (XMHD) model obtained from the two-fluid theory for electrons and ions with the enforcement of quasineutrality, is given. Starting from the Hamiltonian structure of the fully three-dimensional theory, a Hamiltonian two-dimensional incompressible four-field model is derived. In this way energy conservation along with four families of Casimir invariants are naturally obtained. The construction facilitates various limits leading to the Hamiltonian forms of Hall, inertial, and ideal MHD, with their conserved energies and Casimir invariants. Basic linear theory of the four-field model is treated, and the growth rate for collisionless reconnection is obtained. Results from nonlinear simulations of collisionless tearing are presented and interpreted using, in particular normal fields, a product of the Hamiltonian theory that gives rise to simplified equations of motion.
Structure and computation of two-dimensional incompressible extended MHD
Grasso, D.; Tassi, E.; Abdelhamid, H. M.; Morrison, P. J.
2017-01-01
A comprehensive study of the extended magnetohydrodynamic model obtained from the two-fluid theory for electrons and ions with the enforcement of quasineutrality is given. Starting from the Hamiltonian structure of the fully three-dimensional theory, a Hamiltonian two-dimensional incompressible four-field model is derived. In this way, the energy conservation along with four families of Casimir invariants is naturally obtained. The construction facilitates various limits leading to the Hamiltonian forms of Hall, inertial, and ideal MHD, with their conserved energies and Casimir invariants. Basic linear theory of the four-field model is treated, and the growth rate for collisionless reconnection is obtained. Results from nonlinear simulations of collisionless tearing are presented and interpreted using, in particular, normal fields, a product of the Hamiltonian theory that gives rise to simplified equations of motion.
Lyapunov Computational Method for Two-Dimensional Boussinesq Equation
Mabrouk, Anouar Ben
2010-01-01
A numerical method is developed leading to Lyapunov operators to approximate the solution of two-dimensional Boussinesq equation. It consists of an order reduction method and a finite difference discretization. It is proved to be uniquely solvable and analyzed for local truncation error for consistency. The stability is checked by using Lyapunov criterion and the convergence is studied. Some numerical implementations are provided at the end of the paper to validate the theoretical results.
Three-dimensional, computer simulated navigation in endoscopic neurosurgery
Roberta K. Sefcik, BHA
2017-06-01
Conclusion: Three-dimensional, frameless neuronavigation systems are useful in endoscopic neurosurgery to assist in the pre-operative planning of potential trajectories and to help localize the pathology of interest. Neuronavigation appears to be accurate to <1–2 mm without issues related to brain shift. Further work is necessary in the investigation of the effect of neuronavigation on operative time, cost, and patient-centered outcomes.
Computational method for the quantum Hamilton-Jacobi equation: one-dimensional scattering problems.
Chou, Chia-Chun; Wyatt, Robert E
2006-12-01
One-dimensional scattering problems are investigated in the framework of the quantum Hamilton-Jacobi formalism. First, the pole structure of the quantum momentum function for scattering wave functions is analyzed. The significant differences of the pole structure of this function between scattering wave functions and bound state wave functions are pointed out. An accurate computational method for the quantum Hamilton-Jacobi equation for general one-dimensional scattering problems is presented to obtain the scattering wave function and the reflection and transmission coefficients. The computational approach is demonstrated by analysis of scattering from a one-dimensional potential barrier. We not only present an alternative approach to the numerical solution of the wave function and the reflection and transmission coefficients but also provide a computational aspect within the quantum Hamilton-Jacobi formalism. The method proposed here should be useful for general one-dimensional scattering problems.
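As a concrete point of comparison for the scattering quantities discussed above, the tunneling transmission coefficient of a rectangular barrier has a closed form. The sketch below evaluates it; this is the standard textbook benchmark, not the paper's quantum Hamilton-Jacobi scheme, and the parameter names (`E`, `V0`, `a`) are illustrative, with ħ = m = 1.

```python
import numpy as np

def transmission(E, V0, a, hbar=1.0, m=1.0):
    # Textbook transmission coefficient for a rectangular barrier of height
    # V0 and width a, for particle energy E < V0 (tunneling regime).
    if E >= V0:
        raise ValueError("this closed form assumes the tunneling regime E < V0")
    kappa = np.sqrt(2.0 * m * (V0 - E)) / hbar          # decay constant inside barrier
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2) / (4.0 * E * (V0 - E)))

T = transmission(E=0.5, V0=1.0, a=1.0)
R = 1.0 - T   # flux conservation: reflection + transmission = 1
```

Any numerical scattering solver (including a quantum Hamilton-Jacobi one) should reproduce these values for the rectangular barrier.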
Multi-dimensional visualization of coronary aberrancies using multi detector computed tomography
van Ooijen, P; Dorgelo, J; Tukker, W; Willemsen, H; Oudkerk, M; Lewis, BS; Halon, DA; Flugelman, MY; Gensini, GF
2003-01-01
The purpose of this study was to evaluate the usefulness of three and four dimensional visualization of multi detector computed tomography datasets for the evaluation of coronary aberrancies. Multi detector computed tomography (MDCT) datasets were obtained in thirteen patients following a standard p
Computation of two-dimensional isothermal flow in shell-and-tube heat exchangers
Carlucci, L.N.; Galpin, P.F.; Brown, J.D.; Frisina, V.
1983-07-01
A computational procedure is outlined whereby two-dimensional isothermal shell-side flow distributions can be calculated for tube bundles having arbitrary boundaries and flow blocking devices, such as sealing strips, defined in arbitrary locations. The procedure is described in some detail and several computed results are presented to illustrate the robustness and generality of the method. 11 figs.
A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.
Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets
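The coupling idea described above, driving a "pressure-drop" residual between the 3D and lower-dimensional models to zero within each timestep with a quasi-Newton update, can be sketched with toy algebraic surrogates. Everything below (the surrogate functions, their coefficients, the secant update) is an illustrative assumption, not the paper's CFD/ODE systems or its nonlinear Krylov accelerator.

```python
def cfd_flow(p_outlet):
    # Toy stand-in for the 3D CFD model: maps an outlet pressure to a flow.
    return 2.0 - 0.5 * p_outlet

def ode_pressure(q):
    # Toy stand-in for the distal-lung ODE compartment: maps flow to pressure.
    return 1.0 + 0.8 * q

def couple(p0=0.0, p1=1.0, tol=1e-10, max_iter=50):
    # Drive the coupling residual (ODE reply minus pressure guess) to zero
    # with a secant (quasi-Newton) iteration.
    def residual(p):
        return ode_pressure(cfd_flow(p)) - p
    r0, r1 = residual(p0), residual(p1)
    for _ in range(max_iter):
        if abs(r1) < tol:
            return p1
        p0, p1 = p1, p1 - r1 * (p1 - p0) / (r1 - r0)   # secant step
        r0, r1 = r1, residual(p1)
    return p1

p_star = couple()   # converged interface pressure of the coupled system
```

At convergence the two surrogate models agree on the interface pressure, which is the property the residual formulation enforces per timestep.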
Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Wei, Jun; Chughtai, Aamer; Sundaram, Baskaran; Hadjiiski, Lubomir M.; Patel, Smita; Kazerooni, Ella A.
2013-04-01
A 3D multiscale intensity homogeneity transformation (MIHT) method was developed to reduce false positives (FPs) in our previously developed CAD system for pulmonary embolism (PE) detection. In MIHT, the voxel intensity of a PE candidate region was transformed to an intensity homogeneity value (IHV) with respect to the local median intensity. The IHVs were calculated in multiscales (MIHVs) to measure the intensity homogeneity, taking into account vessels of different sizes and different degrees of occlusion. Seven new features including the entropy, gradient, and moments that characterized the intensity distributions of the candidate regions were derived from the MIHVs and combined with the previously designed features that described the shape and intensity of PE candidates for the training of a linear classifier to reduce the FPs. 59 CTPA PE cases were collected from our patient files (UM set) with IRB approval and 69 cases from the PIOPED II data set with access permission. 595 and 800 PEs were identified as reference standard by experienced thoracic radiologists in the UM and PIOPED set, respectively. FROC analysis was used for performance evaluation. Compared with our previous CAD system, at a test sensitivity of 80%, the new method reduced the FP rate from 18.9 to 14.1/scan for the PIOPED set when the classifier was trained with the UM set and from 22.6 to 16.0/scan vice versa. The improvement was statistically significant (p<0.05) by JAFROC analysis. This study demonstrated that the MIHT method is effective in reducing FPs and improving the performance of the CAD system.
Hiller, Jochen; Reindl, Leonard M
2012-01-01
The knowledge of measurement uncertainty is of great importance in conformance testing in production. The tolerance limit for production must be reduced by the amounts of measurement uncertainty to ensure that the parts are in fact within the tolerance. Over the last 5 years, industrial X-ray computed tomography (CT) has become an important technology for dimensional quality control. In this paper a computer simulation platform is presented which is able to investigate error sources in dimensional CT measurements. The typical workflow in industrial CT metrology is described and methods … taking into account the main error sources for the measurement. This method has the potential to deal with all kinds of systematic and random errors that influence a dimensional CT measurement. A case study demonstrates the practical application of the VCT simulator using numerically generated CT data and statistical …
Moment-based method for computing the two-dimensional discrete Hartley transform
Dong, Zhifang; Wu, Jiasong; Shu, Huazhong
2009-10-01
In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing the 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable to deal with any sequence lengths.
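For reference, the separable 2-D DHT that such fast algorithms approximate can be written directly from the cas kernel. The brute-force sketch below fixes one common convention of the transform (it is not the paper's moment-based algorithm) and also exhibits the involution property used to invert it.

```python
import numpy as np

def dht2(f):
    # Separable 2-D discrete Hartley transform via the cas kernel,
    # cas(t) = cos(t) + sin(t). This is the O(N^4) reference definition
    # that fast algorithms (FFT- or moment-based) aim to accelerate.
    M, N = f.shape
    x, y = np.arange(M), np.arange(N)
    cas_u = np.cos(2 * np.pi * np.outer(x, x) / M) \
          + np.sin(2 * np.pi * np.outer(x, x) / M)   # M x M Hartley matrix
    cas_v = np.cos(2 * np.pi * np.outer(y, y) / N) \
          + np.sin(2 * np.pi * np.outer(y, y) / N)   # N x N Hartley matrix
    return cas_u @ f @ cas_v.T

f = np.random.default_rng(0).standard_normal((8, 8))
g = dht2(dht2(f)) / (8 * 8)   # the DHT is its own inverse up to 1/(M*N)
```

Because the 1-D Hartley matrix H satisfies H² = N·I, applying the separable 2-D transform twice and dividing by M·N recovers the input, which gives a quick correctness check for any fast implementation.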
Multiscale expansions in discrete world
Ömer Ünsal; Filiz Taşcan; Mehmet Naci Özer
2014-07-01
In this paper, we show the attainability of the KdV equation from certain types of nonlinear Schrödinger equations by using multiscale expansions discretely. The power of this manageable method is confirmed by applying it to two selected nonlinear Schrödinger evolution equations. This approach can also be applied to other nonlinear discrete evolution equations. All computations were made with the Maple computer algebra package.
Three-Dimensional Computations of Multiple Tandem Jets in Crossflow
(no author listed)
2006-01-01
The mixing and merging characteristics of multiple tandem jets in crossflow are investigated by use of the Computational Fluid Dynamics (CFD) code FLUENT. The realizable k-ε model is employed for turbulent closure of the Reynolds-averaged Navier-Stokes equations. Numerical experiments are performed for 1-, 2- and 4-jet groups, for jet-to-crossflow velocity ratios of R = 4.2-16.3. The computed velocity and scalar concentration fields are in good agreement with experiments using Particle Image Velocimetry (PIV) and Laser Induced Fluorescence (LIF), as well as with previous work. The results show that the leading jet behaves similarly to a single free jet in crossflow, while all the downstream rear jets have less bent-over trajectories, suggesting a reduced ambient velocity for the rear jets. The concentration decay of the leading jet is greater than that of the rear jets. When normalized by appropriate crossflow momentum length scales, all jet trajectories follow a universal relation regardless of the sequential order of jet position and the number of jets. Supported by the velocity and trajectory measurements, the averaged maximum effective crossflow velocity ratio is computed to be in the range of 0.39 to 0.47.
A zero-dimensional approach to compute real radicals
Silke J. Spang
2008-04-01
The notion of real radicals is a fundamental tool in Real Algebraic Geometry, taking the role of the radical ideal in Complex Algebraic Geometry. In this article I describe the zero-dimensional approach and the efficiency improvements I found during the work on my diploma thesis at the University of Kaiserslautern (cf. [6]). The main focus of this article is on maximal ideals and the properties they have to fulfil to be real. New theorems and properties of maximal ideals are introduced which yield a heuristic, prepare_max, that splits the maximal ideals into three classes: real, not real, and the class where we cannot be sure whether they are real or not. For the latter we have to apply a coordinate change into general position until we are sure about realness. Finally, this yields a randomized algorithm for computing real radicals. The underlying theorems and algorithms are described in detail.
Weighted Multi-Scale Image Matching Based on Harris-Sift Descriptor
Can Sun
2013-10-01
Exploiting the rotational invariance of the Harris corner detector and the robustness of the SIFT descriptor, an improved Harris-SIFT corner descriptor is proposed. First, the algorithm applies a multi-scale strategy to the Harris corner detector, improves the corner counting method, and removes redundant points; the corners are then applied directly to the low-pass Gaussian-filtered image. Based on the histogram of the SIFT feature descriptor, a new 128-dimensional feature vector descriptor is generated by multi-scale Gaussian weighting. In this way the Harris corner detector and the SIFT descriptor are normalized across the scale layers and gradient features. The experimental results indicate that the improved corner descriptor combines the advantages of Harris corner detection and the SIFT feature descriptor. The method reduces the computation time and the false match rate, and can be effectively applied to robot stereo vision matching and three-dimensional reconstruction.
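The Harris response underlying such detectors is easy to state concretely. The sketch below computes the plain single-scale, unweighted Harris corner measure on a synthetic image, as a generic illustration of the detector's core rather than the proposed multi-scale weighted variant; the window size and the constant k are conventional choices, not values from the paper.

```python
import numpy as np

def harris_response(img, k=0.05):
    # Plain Harris corner response R = det(M) - k*trace(M)^2, where M is the
    # structure tensor of image gradients summed over a 3x3 window.
    Iy, Ix = np.gradient(img.astype(float))      # row- and column-direction gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    def box3(a):
        # 3x3 window sum via zero padding and shifted slices.
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return (Sxx * Syy - Sxy**2) - k * (Sxx + Syy)**2

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                 # bright square: four true corners
R = harris_response(img)
```

At a corner both gradient directions are present in the window, so det(M) is large and R is positive; along an edge only one direction contributes, so R is near zero or negative, which is the property the multi-scale variants build on.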
Standard Model in multiscale theories and observational constraints
Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David
2016-08-01
We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute bound E* > 28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute bound E* > 35 MeV. For α0 = 1/2, the Lamb shift alone yields E* > 450 GeV.
Three dimensional analysis of coelacanth body structure by computer graphics and X-ray CT images
Suzuki, Naoki (Jikei Univ., Tokyo (Japan). School of Medicine); Hamada, Takashi
1990-06-01
Three dimensional imaging processes were applied to the structural and functional analysis of the modern coelacanth (Latimeria chalumnae). Visualization of the obtained images is performed with computer graphics on the basis of serial images from an X-ray CT scanning method. Reconstruction of three dimensional images of the body structure of the coelacanth using volume rendering and surface rendering methods provides various information about the external and internal shape of this exquisite fish.
Numerical computation of the critical energy constant for two-dimensional Boussinesq equations
Kolkovska, N.; Angelow, K.
2015-10-01
The critical energy constant is of significant interest for the theoretical and numerical analysis of Boussinesq type equations. In the one-dimensional case this constant is evaluated exactly. In this paper we propose a method for numerical evaluation of this constant in the multi-dimensional cases by computing the ground state. Aspects of the numerical implementation are discussed and many numerical results are demonstrated.
1981-09-01
…other enhanced versions such as XTABS and TABS77. The computer program ETABS (15) was released in 1975. The program allows three-dimensional frame… (The remainder of this record is OCR residue from the report's reference list, including "A Program for Three-Dimensional Static and Dynamic Analysis of Multistory Buildings," Structural Mechanics Software Series, Vol. II; the "Uniform Building Code," International Conference of Building Officials, Whittier, California, 1979; and Wilson, E.L., Hollings, J.P., and Dovey, H.H., "ETABS, Three-…")
Adaptivity techniques for the computation of two-dimensional viscous flows using structured meshes
Szmelter, J.; Evans, A.; Weatherill, N. P.
In this paper three different adaptivity techniques have been investigated using structured meshes. All the techniques demonstrate the significance of using adaptivity to improve computational results. In particular, the technique combining point enrichment and node movement strategies offers the best compromise. Although the work presented here used two-dimensional structured meshes, the techniques can be readily applied to hybrid and unstructured meshes. Preliminary three-dimensional numerical results have also already been obtained by the coauthors.
Intraoperative 3-Dimensional Computed Tomography and Navigation in Foot and Ankle Surgery.
Chowdhary, Ashwin; Drittenbass, Lisca; Dubois-Ferrière, Victor; Stern, Richard; Assal, Mathieu
2016-09-01
Computer-assisted orthopedic surgery has developed dramatically during the past 2 decades. This article describes the use of intraoperative 3-dimensional computed tomography and navigation in foot and ankle surgery. Traditional imaging based on serial radiography or C-arm-based fluoroscopy does not provide simultaneous real-time 3-dimensional imaging, and thus leads to suboptimal visualization and guidance. Three-dimensional computed tomography allows for accurate intraoperative visualization of the position of bones and/or navigation implants. Such imaging and navigation helps to further reduce intraoperative complications, leads to improved surgical outcomes, and may become the gold standard in foot and ankle surgery. [Orthopedics.2016; 39(5):e1005-e1010.]. Copyright 2016, SLACK Incorporated.
The planiverse computer contact with a two-dimensional world
Dewdney, Alexander Keewatin
2000-01-01
When The Planiverse first appeared 16 years ago, it caught more than a few readers off guard. The line between willing suspension of disbelief and innocent acceptance, if it exists at all, is a thin one. There were those who wanted to believe, despite the tongue-in-cheek subtext, that we had made contact with a two-dimensional world called Arde, a disc-shaped planet embedded in the skin of a vast, balloon-shaped space called the planiverse. It is tempting to imagine that those who believed, as well as those who suspended disbelief, did so because of a persuasive consistency in the cosmology and physics of this infinitesimally thin universe, and in its bizarre but oddly workable organisms. This was not just your run-of-the-mill universe fashioned out of the whole cloth of wish-driven imagination. The planiverse is a weirder place than that precisely because so much of it was "worked out" by a virtual team of scientists and technologists. Reality, even the pseudoreality of su…
Efficient computation method for two-dimensional nonlinear waves
(no author listed)
2001-01-01
The theory and simulation of fully-nonlinear waves in a truncated two-dimensional wave tank in the time domain are presented. A piston-type wave-maker is used to generate gravity waves in the tank at finite water depth. A damping zone is added in front of the wave-maker, which turns it into a kind of absorbing wave-maker and ensures the prescribed Neumann condition. The efficiency of the numerical tank is further enhanced by installation of a sponge layer beach (SLB) in front of the downtank to absorb the longer weak waves that leak through the front of the wave train. Assuming potential flow, the space-periodic irrotational surface waves can be represented by mixed Euler-Lagrange particles. Solving the integral equation at each time step for the new normal velocities, the instantaneous free surface is integrated forward in time with the fourth-order Runge-Kutta method. The double node technique is used to deal with the geometric discontinuity at the wave-body intersections. Several precise smoothing methods are introduced to treat surface points with high curvature. No saw-tooth instability is observed during the entire simulation. The advantage of the proposed wave tank has been verified by comparison with the linear theoretical solution and other nonlinear results; excellent agreement has been obtained over the whole range of frequencies of interest.
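The fourth-order Runge-Kutta integrator named above is standard. A minimal scalar sketch follows, applied here to a simple test ODE rather than the free-surface evolution equations; the step size and test problem are illustrative choices.

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = -y from t = 0 to t = 1 with y(0) = 1; exact answer is e^{-1}.
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```

In a free-surface solver the scalar state would be replaced by the vector of surface particle positions and potentials, with `f` evaluating the boundary-integral velocities at each stage.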
Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code
Weinberg, B. C.; Mcdonald, H.
1980-01-01
There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations for several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.
Sen, Oishik, E-mail: oishik-sen@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Davis, Sean, E-mail: sean.davis@mail.mcgill.ca [Aerospace Engineering, San Diego State University, San Diego, CA 92115 (United States); Jacobs, Gustaaf, E-mail: gjacobs@sdsu.edu [Aerospace Engineering, San Diego State University, San Diego, CA 92115 (United States); Udaykumar, H.S., E-mail: hs-kumar@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States)
2015-08-01
The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging Method and a Dynamic Kriging Method is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver.
Solov'yov, Ilia; Yakubovich, Alexander V.; Nikolaev, Pavel V.;
2012-01-01
We present a multipurpose computer code, MesoBioNano Explorer (MBN Explorer). The package makes it possible to model molecular systems of varied levels of complexity. In particular, MBN Explorer is suited to computing a system's energy, optimizing molecular structure, and simulating molecular and random-walk dynamics. MBN Explorer supports a broad variety of interatomic potentials and can model different molecular systems, such as atomic clusters, fullerenes, nanotubes, polypeptides, proteins, DNA, composite systems, nanofractals, and so on. A distinct feature of the program, which makes…
Wagner, Gregory John (Sandia National Laboratories, Livermore, CA); Collis, Samuel Scott; Templeton, Jeremy Alan (Sandia National Laboratories, Livermore, CA); Lehoucq, Richard B.; Parks, Michael L.; Jones, Reese E. (Sandia National Laboratories, Livermore, CA); Silling, Stewart Andrew; Scovazzi, Guglielmo; Bochev, Pavel B.
2007-10-01
This report is a collection of documents written as part of the Laboratory Directed Research and Development (LDRD) project A Mathematical Framework for Multiscale Science and Engineering: The Variational Multiscale Method and Interscale Transfer Operators. We present developments in two categories of multiscale mathematics and analysis. The first, continuum-to-continuum (CtC) multiscale, includes problems that allow application of the same continuum model at all scales with the primary barrier to simulation being computing resources. The second, atomistic-to-continuum (AtC) multiscale, represents applications where detailed physics at the atomistic or molecular level must be simulated to resolve the small scales, but the effect on and coupling to the continuum level is frequently unclear.
Three Dimensional Computer Graphics Federates for the 2012 Smackdown Simulation
Fordyce, Crystal; Govindaiah, Swetha; Muratet, Sean; O'Neil, Daniel A.; Schricker, Bradley C.
2012-01-01
The Simulation Interoperability Standards Organization (SISO) Smackdown is a two-year-old annual event held at the 2012 Spring Simulation Interoperability Workshop (SIW). A primary objective of the Smackdown event is to provide college students with hands-on experience in developing distributed simulations using the High Level Architecture (HLA). Participating for the second time, the University of Alabama in Huntsville (UAHuntsville) deployed four federates: two simulated a communications server and a lunar communications satellite with a radio, and the other two generated 3D computer graphics displays for the communication satellite constellation and for the surface-based lunar resupply mission. Using the Light-Weight Java Graphics Library, the satellite display federate presented a lunar-texture-mapped sphere of the moon and four Telemetry Data Relay Satellites (TDRS), which received object attributes from the lunar communications satellite federate to drive their motion. The surface mission display federate was an enhanced version of the federate developed by ForwardSim, Inc. for the 2011 Smackdown simulation. Enhancements included a dead-reckoning algorithm and a visual indication of which communication satellite was in line of sight of Hadley Rille. This paper concentrates on these two federates by describing their functions, algorithms, HLA object attributes received from other federates, development experiences, and recommendations for future participating Smackdown teams.
Multiscale analysis of neural spike trains.
Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin
2014-01-30
This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code.
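The inhomogeneous Poisson framework discussed above can be simulated generically by Lewis-Shedler thinning. The sketch below draws a spike train from a sinusoidally modulated rate; it is an illustration of the framework only, not the paper's multiscale estimators, and the rate function and parameters are assumptions.

```python
import math
import random

def thinning_spike_train(rate, rate_max, t_end, seed=0):
    # Lewis-Shedler thinning: draw candidate events from a homogeneous
    # Poisson process at rate_max, then accept each candidate at time t
    # with probability rate(t)/rate_max. Requires rate(t) <= rate_max.
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_max)          # next candidate event time
        if t > t_end:
            return spikes
        if rng.random() < rate(t) / rate_max:   # thinning acceptance test
            spikes.append(t)

# Sinusoidally modulated intensity between 0 and 50 Hz over a 10 s window.
spikes = thinning_spike_train(
    lambda t: 25.0 * (1.0 + math.sin(2.0 * math.pi * t)),
    rate_max=50.0, t_end=10.0)
```

Spike trains generated this way give known ground-truth intensity functions against which multiscale intensity estimators can be checked.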
International Conference on Multiscale Methods and Partial Differential Equations.
Thomas Hou
2006-12-12
The International Conference on Multiscale Methods and Partial Differential Equations (ICMMPDE for short) was held at IPAM, UCLA on August 26-27, 2005. The conference brought together researchers, students and practitioners with interest in the theoretical, computational and practical aspects of multiscale problems and related partial differential equations. The conference provided a forum to exchange and stimulate new ideas from different disciplines, and to formulate new challenging multiscale problems that will have impact in applications.
A Fast Computational Algorithm of Multi-Scale Autoconvolution
黄波; 赵晓晖; 庞怡杰; 时公涛; 陈东; 赵继印
2013-01-01
A fast computational method for the multi-scale autoconvolution (MSA) transform is proposed in this paper. In order to reduce the number of MSA transform computations, the method derives the smallest benchmark transform size from fast Fourier transform theory and uses it to replace the different transform sizes of the same scale within the minimum range of the MSA transform scale. Then, to further reduce the computational complexity, the method shrinks the MSA transform by exploiting the symmetry of the MSA transform outside that scale range. Several experiments on time efficiency and eigenvalue accuracy using typical sample data are given. The results demonstrate that the proposed fast method is more than three times faster than the original method while maintaining eigenvalue accuracy.
Multiscale modeling methods in biomechanics.
Bhattacharya, Pinaki; Viceconti, Marco
2017-01-19
More and more frequently, computational biomechanics deals with problems where the portion of physical reality to be modeled spans such a large range of spatial and temporal dimensions that it is impossible to represent it as a single space-time continuum. We are forced to consider multiple space-time continua, each representing the phenomenon of interest at a characteristic space-time scale. Multiscale models describe a complex process across multiple scales and account for how quantities transform as we move from one scale to another. This review offers a set of definitions for this emerging field and provides a brief summary of the most recent developments in multiscale modeling in biomechanics. Of all possible perspectives, we chose that of the modeling intent, which vastly affects the nature and the structure of each research activity. To this purpose, we organized all papers reviewed into three categories: 'causal confirmation,' where multiscale models are used as materializations of causation theories; 'predictive accuracy,' where multiscale modeling is aimed at improving predictive accuracy; and 'determination of effect,' where multiscale modeling is used to model how a change at one scale manifests as an effect at another, radically different, space-time scale. Consistent with how the volume of computational biomechanics research is distributed across application targets, we extensively reviewed papers targeting the musculoskeletal and cardiovascular systems, and covered only a few exemplary papers targeting other organ systems. The review shows a research subdomain still in its infancy, where causal confirmation papers remain the most common. For further resources related to this article, please visit the WIREs website.
On the Computation of Degenerate Hopf Bifurcations for n-Dimensional Multiparameter Vector Fields
Michail P. Markakis
2016-01-01
The restriction of an n-dimensional nonlinear parametric system to the center manifold is treated via a new proper symbolic form, and analytical expressions for the involved quantities are obtained as functions of the parameters by lengthy algebraic manipulations combined with computer-assisted calculations. Normal forms for degenerate Hopf bifurcations up to codimension 3, as well as the corresponding Lyapunov coefficients and bifurcation portraits, can be easily computed for any system under consideration.
The possible usability of three-dimensional cone beam computed dental tomography in dental research
Yavuz, I.; Rizal, M. F.; Kiswanjaya, B.
2017-08-01
The innovations and advantages of three-dimensional cone beam computed dental tomography (3D CBCT) are continually growing for its potential use in dental research. Imaging techniques are important for planning research in dentistry. Newly improved 3D CBCT imaging systems and accessory computer programs have recently been proven effective for use in dental research. The aim of this study is to introduce 3D CBCT and open a window for future research possibilities that should be given attention in dental research.
Multiscale Cloud System Modeling
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Simulation of Multiphysics Multiscale Systems, 6th International Workshop
Krzhizhanovskaya, V.V.
2009-01-01
Modeling and Simulation of Multiphysics Multiscale Systems (SMMS) poses a grand challenge to computational science. To adequately simulate numerous intertwined processes characterized by different spatial and temporal scales spanning many orders of magnitude, sophisticated models and advanced comput
Simulation of Multiphysics Multiscale Systems, 7th International Workshop
Krzhizhanovskaya, V.
2010-01-01
Modeling and Simulation of Multiphysics Multiscale Systems (SMMS) poses a grand challenge to computational science. To adequately simulate numerous intertwined processes characterized by different spatial and temporal scales spanning many orders of magnitude, sophisticated models and advanced comput
Simulation of Multiphysics Multiscale Systems, 5th International Workshop
Krzhizhanovskaya, V.V.; Hoekstra, A.G.
2008-01-01
Modeling and Simulation of Multiphysics Multiscale Systems (SMMS) poses a grand challenge to computational science. To adequately simulate numerous intertwined processes characterized by different spatial and temporal scales spanning many orders of magnitude, sophisticated models and advanced comput
Al-Rawi, B.; Hassan, B.; Vandenberge, B.; Jacobs, R.
2010-01-01
The use of three-dimensional (3D) models of the dentition obtained from cone beam computed tomography (CBCT) is becoming increasingly more popular in dentistry. A recent trend is to replace the traditional dental casts with digital CBCT models for diagnosis, treatment planning and simulation. The ac
Shennib, H
1999-11-01
This article is a current update on the rationale for the development of new enabling technologies in minimally invasive cardiac surgery. Specifically, the potential advantages of three-dimensional visualization, computer enhancement technology, and robotics in the performance of totally endoscopic coronary artery bypass grafts will be addressed.
Kruyt, N.P.; Esch, van B.P.M.; Jonker, J.B.
1999-01-01
A numerical method is presented for the computation of unsteady, three-dimensional potential flows in hydraulic pumps and turbines. The superelement method has been extended in order to eliminate slave degrees of freedom not only from the governing Laplace equation, but also from the Kutta condition
Hiller, Jochen; Genta, Gianfranco; Barbato, Giulio
2014-01-01
Industrial applications of computed tomography (CT) for dimensional metrology on various components are fast increasing, owing to a number of favorable properties such as capability of non-destructive internal measurements. Uncertainty evaluation is however more complex than in conventional measu...
Three-Dimensional Computer Animated Graphics: A Tool for Spatial Skill Instruction.
Zavotka, Susan Lee
1987-01-01
Describes study of home economics students at Ohio State University that investigated whether computer animated graphics that replicate mental images of rotation and dimensional transformation would be useful in the development of spatial skills. Orthographic drawings are described, and results for treatment and control groups are analyzed. (29…
CAO Li-Na; WANG Deng-Shan; CHEN Lan-Xin
2007-01-01
In this paper, by using symbolic and algebraic computation, Chen and Wang's multiple Riccati equations rational expansion method is further extended. Many double soliton-like and other novel combined forms of exact solutions of the (2+1)-dimensional breaking soliton equation are derived using the extended multiple Riccati equations expansion method.
An Exploration of Three-Dimensional Integrated Assessment for Computational Thinking
Zhong, Baichang; Wang, Qiyun; Chen, Jie; Li, Yi
2016-01-01
Computational thinking (CT) is a fundamental skill for students, and assessment is a critical factor in education. However, there is a lack of effective approaches to CT assessment. Therefore, we designed the Three-Dimensional Integrated Assessment (TDIA) framework in this article. The TDIA has two aims: one was to integrate three dimensions…
Dodge, W. G.
1968-01-01
Computer program determines the forced vibration in three dimensional space of a multiple degree of freedom beam type structural system. Provision is made for the longitudinal axis of the analytical model to change orientation at any point along its length. This program is used by industries in which structural design dynamic analyses are performed.
Semidefinite characterization and computation of zero-dimensional real radical ideals
Lasserre, J.B.; Laurent, M.; Rostalski, P.
2008-01-01
For an ideal I⊆ℝ[x] given by a set of generators, a new semidefinite characterization of its real radical I(V ℝ(I)) is presented, provided it is zero-dimensional (even if I is not). Moreover, we propose an algorithm using numerical linear algebra and semidefinite optimization techniques, to compute
Quantum computing via defect states in two-dimensional antidot lattices.
Flindt, Christian; Mortensen, Niels Asger; Jauho, Antti-Pekka
2005-12-01
We propose a new structure suitable for quantum computing in a solid-state environment: designed defect states in antidot lattices superimposed on a two-dimensional electron gas at a semiconductor heterostructure. State manipulation can be obtained with gate control. Model calculations indicate that it is feasible to fabricate structures whose energy level structure is robust against thermal dephasing.
Mclean, J. D.; Randall, J. L.
1979-01-01
A system of computer programs for calculating three dimensional transonic flow over wings, including details of the three dimensional viscous boundary layer flow, was developed. The flow is calculated in two overlapping regions: an outer potential flow region, and a boundary layer region in which the first order, three dimensional boundary layer equations are numerically solved. A consistent matching of the two solutions is achieved iteratively, thus taking into account viscous-inviscid interaction. For the inviscid outer flow calculations, the Jameson-Caughey transonic wing program FLO 27 is used, and the boundary layer calculations are performed by a finite difference boundary layer prediction program. Interface programs provide communication between the two basic flow analysis programs. Computed results are presented for the NASA F8 research wing, both with and without distributed surface suction.
Development of training support system based on three-dimensional computer graphics technology
Kude, Akizumi; Hanafusa, Hidemitsu; Matsuoka, Yoshinori; Shirute, Ikuo [Institute of Nuclear Safety System Inc., Seika, Kyoto (Japan); Ogura, Kazuhide
1998-09-01
Recently, Virtual Reality (VR) technology has developed quickly, together with research on its various elemental and related technologies and on its application in many fields. In particular, the development of computer graphics (CG) technology, keeping pace with progress in computer performance, is remarkable. We have developed a new type of training support system using three-dimensional (3D) CG technology: a training support system for disassembling and assembling a motor-operated gate valve. The proposed system is based on a personal computer and can be used easily by anyone. The system configuration is outlined herein. (author)
Computer-assisted three-dimensional surgical planning and simulation: 3D virtual osteotomy.
Xia, J; Ip, H H; Samman, N; Wang, D; Kot, C S; Yeung, R W; Tideman, H
2000-02-01
A computer-assisted three-dimensional virtual osteotomy system for orthognathic surgery (CAVOS) is presented. The virtual reality workbench is used for surgical planning. The surgeon is immersed in a virtual reality environment with stereo eyewear, holds a virtual "scalpel" (3D mouse), and operates on a "real" patient (3D visualization) to obtain a pre-surgical prediction (3D bony segment movements). Virtual surgery on a computer-generated 3D head model is simulated and can be visualized from any arbitrary viewing point on a personal computer system.
Quantum Computing - A new Implementation of Simon Algorithm for 3-Dimensional Registers
Adina Bărîlă
2015-03-01
Quantum computing is a new field of science aiming to use quantum phenomena in order to perform operations on data. The Simon algorithm is one of the quantum algorithms which solves a certain problem exponentially faster than any classical algorithm solving the same problem. Simulation of quantum algorithms is very important since quantum hardware is not available outside of research labs. QCL (Quantum Computation Language), conceived by Bernhard Ömer, is the most advanced implemented quantum computer simulator. The paper presents an implementation in QCL of the Simon algorithm in the case of 3-dimensional registers.
A simpler and elegant algorithm for computing fractal dimension in higher dimensional state space
S Ghorui; A K Das; N Venkatramani
2000-02-01
Chaotic systems are now frequently encountered in almost all branches of science. The dimension of such systems provides an important measure for easy characterization of their dynamics. Conventional algorithms for computing the dimension of such systems in higher-dimensional state space face an unavoidable problem of enormous storage requirements. Here we present an algorithm which uses a simple but very powerful technique and faces no such problem. The unique indexing technique of hypercubes used in this algorithm provides a clever means to drastically reduce the storage requirement. It is shown that theoretically this algorithm faces no problem in computing the capacity dimension in any dimension of the embedding state space as long as the actual dimension of the attractor is finite. Unlike existing algorithms, the memory requirement of this algorithm depends only on the actual dimension of the attractor and has no explicit dependence on the number of data points considered.
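The storage idea described in this abstract can be sketched with a sparse set of occupied-box indices, so that memory scales with the attractor rather than with the embedding dimension. This generic box-counting sketch stores occupied hypercubes in a hash set; it illustrates the principle but is not the authors' specific indexing scheme:

```python
import numpy as np

def capacity_dimension(points, epsilons):
    """Box-counting estimate of the capacity dimension of a point set.
    Occupied hypercubes are stored sparsely as a set of integer index
    tuples, so memory grows with the number of occupied boxes (governed
    by the attractor's own dimension), not with the embedding dimension."""
    points = np.asarray(points, dtype=float)
    counts = []
    for eps in epsilons:
        occupied = {tuple(idx) for idx in np.floor(points / eps).astype(int)}
        counts.append(len(occupied))
    # capacity dimension = slope of log N(eps) against log(1/eps)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope
```

For a one-dimensional curve embedded in 3D state space, only O(1/eps) boxes are ever stored, whereas a dense grid over the embedding space would need O(1/eps^3).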
Logan, Terry G.
1994-01-01
The purpose of this study is to investigate the performance of integral-equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and a conventional Cray Y-MP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 62, and 128 nodes, along with those on a Cray Y-MP with a single processor. The comparison indicates that the parallel CM-FORTRAN code nearly matches or outperforms the equivalent serial FORTRAN code for some cases.
Yu Zou
2013-07-01
Reconciling competing desires to build urban models that can be simple and complicated is something of a grand challenge for urban simulation. It also prompts difficulties in many urban policy situations, such as urban sprawl, where simple, actionable ideas may need to be considered in the context of the messily complex and complicated urban processes and phenomena that work within cities. In this paper, we present a novel architecture for achieving both simple and complicated realizations of urban sprawl in simulation. Fine-scale simulations of sprawl geography are run using geographic automata to represent the geographical drivers of sprawl in intricate detail and over fine resolutions of space and time. We use Equation-Free computing to deploy population as a coarse observable of sprawl, which can be leveraged to run automata-based models as short-burst experiments within a meta-simulation framework.
Scully, John R
2015-01-01
Recent advances in characterization tools, computational capabilities, and theories have created opportunities for advancement in the understanding of solid-fluid interfaces at the nanoscale in corroding metallic systems. The Faraday Discussion on Corrosion Chemistry in 2015 highlighted some of the current needs, gaps and opportunities in corrosion science. Themes were organized into several hierarchical categories that provide an organizational framework for corrosion. Opportunities to develop fundamental physical and chemical data which will enable further progress in thermodynamic and kinetic modelling of corrosion were discussed. These will enable new and better understanding of the unit processes that govern corrosion at the nanoscale. Additional topics discussed included scales, films and oxides, fluid-surface and molecular-surface interactions, selected topics in corrosion science and engineering, as well as corrosion control. Corrosion science and engineering topics included complex alloy dissolution, local corrosion, and modelling of specific corrosion processes that are made up of collections of temporally and spatially varying unit processes such as oxidation, ion transport, and competitive adsorption. Corrosion control and mitigation topics covered some new insights on coatings and inhibitors. Further advances in operando or in situ experimental characterization strategies at the nanoscale, combined with computational modelling, will enhance progress in the field, especially if coupling across length and time scales can be achieved incorporating the various phenomena encountered in corrosion. Readers are encouraged not only to use this ad hoc organizational scheme to guide their immersion into the current opportunities in corrosion chemistry, but also to find value in the information presented in their own ways.
Multicomponent and multiscale systems theory, methods, and applications in engineering
Geiser, Juergen
2016-01-01
This book examines the latest research results from combined multi-component and multi-scale explorations. It provides theory, considers underlying numerical methods, and presents brilliant computational experimentation. Engineering computations featured in this monograph further offer particular interest to many researchers, engineers, and computational scientists working in frontier modeling and applications of multicomponent and multiscale problems. Professor Geiser gives specific attention to the aspects of decomposing and splitting delicate structures and controlling decomposition and the rationale behind many important applications of multi-component and multi-scale analysis. Multicomponent and Multiscale Systems: Theory, Methods, and Applications in Engineering also considers the question of why iterative methods can be powerful and more appropriate for well-balanced multiscale and multicomponent coupled nonlinear problems. The book is ideal for engineers and scientists working in theoretical and a...
Wavelet-based method for computing elastic band gaps of one-dimensional phononic crystals
YAN; ZhiZhong; WANG; YueSheng
2007-01-01
A wavelet-based method was developed to compute the elastic band gaps of one-dimensional phononic crystals. The wave field was expanded in the wavelet basis and an equivalent eigenvalue problem was derived in matrix form, involving the adaptive computation of integrals of the wavelets. The method was then applied to a binary system. For comparison, the elastic band gaps of the same one-dimensional phononic crystals computed with the wavelet method and with the well-known plane wave expansion (PWE) method are both presented in this paper. The numerical results of the two methods are in good agreement, while the computational costs of the wavelet method are much lower than those of the PWE method. In addition, the adaptability of wavelets makes the method suitable for efficient band gap computation of more complex phononic structures.
1989-01-20
Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation: Final Report
Chen, Mounter C Y; Lu, Po-Chien; Chen, James S Y; Hwang, Ned H C
2005-01-01
Coronary stents are supportive wire meshes that keep narrow coronary arteries patent, reducing the risk of restenosis. Despite the common use of coronary stents, approximately 20-35% of them fail due to restenosis. Flow phenomena adjacent to the stent may contribute to restenosis. Three-dimensional computational fluid dynamics (CFD) and reconstruction based on biplane cine angiography were used to assess coronary geometry and volumetric blood flows. A patient-specific left anterior descending (LAD) artery was reconstructed from single-plane x-ray imaging. With corresponding electrocardiographic signals, images from the same time phase were selected from the angiograms for dynamic three-dimensional reconstruction. The resultant three-dimensional LAD artery at end-diastole was adopted for detailed analysis. Both the geometries and flow fields, based on a computational model from CAE software (ANSYS and CATIA) and full three-dimensional Navier-Stokes equations in the CFD-ACE+ software, respectively, changed dramatically after stent placement. Flow fields showed a complex three-dimensional spiral motion due to arterial tortuosity. The corresponding wall shear stresses, pressure gradient, and flow field all varied significantly after stent placement. Combined angiography and CFD techniques allow more detailed investigation of flow patterns in various segments. The implanted stent(s) may be quantitatively studied from the proposed hemodynamic modeling approach.
What is a Multiscale Problem in Molecular Dynamics?
Luigi Delle Site
2013-12-01
In this work, we make an attempt to answer the question of what a multiscale problem is in Molecular Dynamics (MD) or, more generally, in Molecular Simulation (MS). By introducing the criterion of separability of scales, we identify three major (reference) categories of multiscale problems and discuss their corresponding computational strategies by making explicit examples of applications.
Multiscale modeling in biomechanics and mechanobiology
Hwang, Wonmuk; Kuhl, Ellen
2015-01-01
Presenting a state-of-the-art overview of theoretical and computational models that link characteristic biomechanical phenomena, this book provides guidelines and examples for creating multiscale models in representative systems and organisms. It develops the reader's understanding of and intuition for multiscale phenomena in biomechanics and mechanobiology, and introduces a mathematical framework and computational techniques paramount to creating predictive multiscale models. Biomechanics involves the study of the interactions of physical forces with biological systems at all scales – including molecular, cellular, tissue and organ scales. The emerging field of mechanobiology focuses on the way that cells produce and respond to mechanical forces – bridging the science of mechanics with the disciplines of genetics and molecular biology. Linking disparate spatial and temporal scales using computational techniques is emerging as a key concept in investigating some of the complex problems underlying these...
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to those of cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA-unified Weather Research and Forecast model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-surface interactive processes are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.
Multiscale theory and computational method for biomolecule simulations
李文飞; 张建; 王骏; 王炜
2015-01-01
Molecular simulation is one of the most important ways of studying biomolecules. In the last two decades, by combining molecular simulations with experiments, a number of key features of the structure and dynamics of biomolecules have been revealed. Traditional molecular simulations often use the all-atom model or some coarse-grained models. In practical applications, however, all-atom models and coarse-grained models encounter bottlenecks in accuracy and efficiency, respectively, which hinder their applications to some extent. In recent years, multiscale models have attracted much attention in the field of biomolecule simulations. In a multiscale model, atomistic models and coarse-grained models are combined on the basis of statistical physics, and thus the bottlenecks encountered in the traditional models can be overcome. The currently available multiscale models can be classified into four categories according to the coupling between the all-atom model and the coarse-grained model: 1) hybrid-resolution multiscale models, 2) parallel-coupling multiscale models, 3) one-way coupling multiscale models, and 4) self-learning multiscale models. All these multiscale strategies have achieved great success in certain aspects of biomolecule simulations, including protein folding, aggregation, and the functional motions of many kinds of protein machinery. In this review, we briefly introduce the above four multiscale strategies and examples of their applications. We also discuss the limitations and advantages, as well as the application scopes, of these multiscale methods. Directions for future work on improving these multiscale models are also suggested. Finally, a summary and some prospects are presented.
OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.
Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter
2011-10-01
The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community.
Multiscale modeling and synaptic plasticity.
Bhalla, Upinder S
2014-01-01
Synaptic plasticity is a major convergence point for theory and computation, and the process of plasticity engages physiology, cell, and molecular biology. In its many manifestations, plasticity is at the hub of basic neuroscience questions about memory and development, as well as more medically themed questions of neural damage and recovery. As an important cellular locus of memory, synaptic plasticity has received a huge amount of experimental and theoretical attention. If computational models have tended to pick specific aspects of plasticity, such as STDP, and reduce them to an equation, some experimental studies are equally guilty of oversimplification each time they identify a new molecule and declare it to be the last word in plasticity and learning. Multiscale modeling begins with the acknowledgment that synaptic function spans many levels of signaling, and these are so tightly coupled that we risk losing essential features of plasticity if we focus exclusively on any one level. Despite the technical challenges and gaps in data for model specification, an increasing number of multiscale modeling studies have taken on key questions in plasticity. These have provided new insights, but importantly, they have opened new avenues for questioning. This review discusses a wide range of multiscale models in plasticity, including their technical landscape and their implications.
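The review's remark about reducing a specific aspect of plasticity "to an equation" can be illustrated with the standard pair-based exponential STDP window. This is a generic textbook form with arbitrary illustrative parameter values, not the model of any particular study discussed here.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a pre-to-post spike interval dt_ms.

    dt_ms > 0: the postsynaptic spike follows the presynaptic one
    (potentiation); dt_ms < 0: it precedes it (depression). Amplitudes and
    time constants are assumed values for illustration only.
    """
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0,
                    a_plus * np.exp(-dt_ms / tau_plus),
                    -a_minus * np.exp(dt_ms / tau_minus))

# Causal pairing (post fires 10 ms after pre) strengthens the synapse,
# anti-causal pairing weakens it.
dw_pot = float(stdp_dw(10.0))    # positive weight change
dw_dep = float(stdp_dw(-10.0))   # negative weight change
```

Multiscale models embed rules like this inside biochemical signaling cascades rather than applying them in isolation, which is precisely the review's point.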
A multiscale modeling technique for bridging molecular dynamics with finite element method
Lee, Yongchang, E-mail: yl83@buffalo.edu; Basaran, Cemal
2013-11-15
In computational mechanics, molecular dynamics (MD) and finite element (FE) analysis are well developed and are the most popular methods for nanoscale and macroscale analysis, respectively. MD can simulate atomistic behavior very well, but cannot reach macroscale lengths and times due to computational limits. FE can simulate continuum mechanics (CM) problems very well, but lacks atomistic-level degrees of freedom. Multiscale modeling is an expedient methodology with the potential to connect different levels of modeling such as quantum mechanics, molecular dynamics, and continuum mechanics. This study proposes a new multiscale modeling technique to couple MD with FE. The proposed method relies on a weighted-average momentum principle. A wave propagation example has been used to illustrate the challenges in coupling MD with FE and to verify the proposed technique. Furthermore, a two-dimensional problem has also been used to demonstrate how this method would translate into real-world applications. Highlights: • A weighted-averaging momentum method is introduced for bridging molecular dynamics (MD) with the finite element (FE) method. • The proposed method shows excellent coupling results in 1-D and 2-D examples. • The proposed method successfully reduces spurious wave reflection at the border of the MD and FE regions. • Big advantages of the proposed method are its simplicity and the inexpensive computational cost of multiscale analysis.
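The general idea of coupling the two descriptions through averaged momentum can be caricatured in one dimension: in a handshake region, paired MD and FE nodes are assigned the common velocity that conserves their weighted total momentum. The node pairing, weights, and lumped masses below are illustrative assumptions, not the authors' actual scheme.

```python
import numpy as np

def handshake_velocities(m_md, v_md, m_fe, v_fe, w):
    """Illustrative momentum-averaged coupling in an MD/FE overlap region.

    For each paired MD/FE node, form the weighted total momentum and return
    the common velocity that conserves it. w = 1 on the pure-MD side of the
    handshake region, w = 0 on the pure-FE side.
    """
    p = w * m_md * v_md + (1.0 - w) * m_fe * v_fe   # blended momentum
    m = w * m_md + (1.0 - w) * m_fe                 # blended mass
    return p / m

# Three paired nodes across the handshake region, equal lumped masses:
w = np.array([0.75, 0.5, 0.25])
v = handshake_velocities(1.0, np.ones(3), 1.0, np.zeros(3), w)
# v interpolates smoothly from the MD velocity toward the FE velocity,
# which is what damps spurious reflection at the interface.
```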
Observations on the Performance of X-Ray Computed Tomography for Dimensional Metrology
Corcoran, H. C.; Brown, S. B.; Robson, S.; Speller, R. D.; McCarthy, M. B.
2016-06-01
X-ray computed tomography (XCT) is a rising technology within many industries and sectors with a demand for dimensional metrology, defect and void analysis, and reverse engineering. Many variables can affect the dimensional metrology of objects imaged using XCT; this paper focuses on the effects of beam hardening due to the orientation of the workpiece, in this case a holeplate, and the volume of material the X-rays travel through. Measurements discussed include unidirectional and bidirectional dimensions, radii of cylinders, fit-point deviations of the fitted shapes, and cylindricity. Results indicate that the accuracy and precision of these dimensional measurements are affected in varying amounts, both by the amount of material the X-rays have travelled through and by the orientation of the object.
4-dimensional computer-based motion simulation after Total Hip Arthroplasty.
Otake, Yoshito; Hagio, Keisuke; Suzuki, Naoki; Hattori, Asaki; Sugano, Nobuhiko; Yonenobu, Kazuo; Ochi, Takahiro
2003-01-01
This paper presents a novel 4-dimensional (4D) computer-based motion simulation system for patients who have undergone Total Hip Arthroplasty (THA). By constructing a skeletal model of the patient's lower extremity and measuring daily motions, we simulated the movement of the inner structures, including the skeleton and the artificial joint. This system visually represents not only the 3-dimensional (3D) anatomical structure but also the 4-dimensional dynamic functions that capture the time-sequential transitions of the position of each component. Clinicians can obtain detailed quantitative information on the movement of the hip joint and give precise guidance to patients with regard to postoperative daily motions. The measurement error was evaluated by performing experiments using open MRI, and the results indicated sufficient accuracy of this system. We believe that this system enables clinicians to reveal the causes of complications after THA and encourages the development of new surgical techniques, materials, and designs of prostheses.
Multiscale Data Assimilation
Lermusiaux, Pierre F.J.
2014-09-30
This research is concerned with next-generation multiscale data assimilation, with a focus on shelfbreak regions, including non-hydrostatic effects. Our long-term goals are to develop and utilize GMM-DO data assimilation schemes for rigorous multiscale inferences, where observations provide information on
Fast point cloud registration algorithm using multiscale angle features
Lu, Jun; Guo, Congling; Fang, Ying; Xia, Guihua; Wang, Wanjia; Elahi, Ahsan
2017-05-01
To fulfill the demands of rapid, real-time three-dimensional optical measurement, a fast point cloud registration algorithm using multiscale axis-angle features is proposed. Key points are selected based on the mean value of the scalar projections, onto the normal of the estimated point, of the vectors from the estimated point to the points in its neighborhood. This method requires little computation and has good discriminating ability. A rotation-invariant feature is proposed using angle information calculated from multiscale coordinate axes. The feature descriptor of a key point is computed using the cosines of the angles between corresponding coordinate axes. Using this method, the surface information around key points is captured sufficiently in the three axis directions and is easy to recognize. The similarity of descriptors is employed to quickly determine the initial correspondences. Rigid spatial distance invariance and a clustering selection method are used to make the corresponding relationships more accurate and evenly distributed. Finally, the rotation matrix and translation vector are determined using singular value decomposition. Experimental results show that the proposed algorithm has high precision, fast matching speed, and good anti-noise capability.
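The final step described above, determining the rotation matrix and translation vector by singular value decomposition from matched point pairs, is the standard orthogonal Procrustes (Kabsch) solution. A minimal sketch, with synthetic correspondences standing in for the descriptor-matching stages:

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Best-fit rotation R and translation t with R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of corresponding points. SVD solution of the
    orthogonal Procrustes problem over the centred point sets.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known rotation about z and a known translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(0).normal(size=(10, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform_svd(src, dst)
```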
Efendiev, Yalchin R.
2015-06-05
In this paper, we develop a multiscale finite element method for solving flows in fractured media. Our approach is based on the generalized multiscale finite element method (GMsFEM), where we represent the fracture effects on a coarse grid via multiscale basis functions. These multiscale basis functions are constructed in the offline stage via local spectral problems following GMsFEM. To represent the fractures on the fine grid, we consider two approaches, (1) the discrete fracture model (DFM) and (2) the embedded fracture model (EFM), as well as their combination. In DFM, the fractures are resolved via the fine grid, while in EFM the fracture and fine-grid block interaction is represented as a source term. In the proposed multiscale method, additional multiscale basis functions are used to represent the long fractures, while short fractures are collectively represented by a single basis function. The procedure is done automatically via local spectral problems. In this regard, our approach shares common concepts with several approaches proposed in the literature, as we discuss. We would like to emphasize that our goal is not to compare DFM with EFM, but rather to develop a GMsFEM framework which uses these (DFM or EFM) fine-grid discretization techniques. Numerical results are presented, where we demonstrate how one can adaptively add basis functions in the regions of interest based on error indicators. We also discuss the use of randomized snapshots (Calo et al., Randomized oversampling for generalized multiscale finite element methods, 2014), which reduces the offline computational cost.
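The offline spectral step, keeping eigenvectors of a local generalized eigenvalue problem as multiscale basis functions, can be sketched on a toy patch. The matrices below are simple stand-ins (a 1-D Laplacian stiffness matrix and an identity mass matrix), not an actual GMsFEM discretization:

```python
import numpy as np

def spectral_basis(A, M, n_modes):
    """Solve the local spectral problem A phi = lam M phi and keep the
    eigenvectors of the n_modes smallest eigenvalues as basis functions.

    M is assumed symmetric positive definite; the generalized problem is
    reduced to a standard one via the M^{-1/2} transformation.
    """
    w, V = np.linalg.eigh(M)
    M_isqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    lam, psi = np.linalg.eigh(M_isqrt @ A @ M_isqrt)   # ascending eigenvalues
    phi = M_isqrt @ psi[:, :n_modes]                   # back-transform
    return lam[:n_modes], phi

# Toy local patch: 1-D Laplacian stiffness A, lumped (identity) mass M.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
lam, phi = spectral_basis(A, M, 3)
```

The low-eigenvalue modes kept here play the role of the coarse-grid basis; in the actual method the patch matrices come from the fine-grid discretization of the fractured medium.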
Lopreore, Courtney L; Bartol, Thomas M; Coggan, Jay S; Keller, Daniel X; Sosinsky, Gina E; Ellisman, Mark H; Sejnowski, Terrence J
2008-09-15
A computational model is presented for the simulation of three-dimensional electrodiffusion of ions. Finite volume techniques were used to solve the Poisson-Nernst-Planck equation, and a dual Delaunay-Voronoi mesh was constructed to evaluate fluxes of ions, as well as resulting electric potentials. The algorithm has been validated and applied to a generalized node of Ranvier, where numerical results for computed action potentials agree well with cable model predictions for large clusters of voltage-gated ion channels. At smaller channel clusters, however, the three-dimensional electrodiffusion predictions diverge from the cable model predictions and show a broadening of the action potential, indicating a significant effect due to each channel's own local electric field. The node of Ranvier complex is an elaborate organization of membrane-bound aqueous compartments, and the model presented here represents what we believe is a significant first step in simulating electrophysiological events with combined realistic structural and physiological data.
Subramanian, S. V.; Bozzola, R.; Povinelli, L. A.
1986-01-01
The performance of a three-dimensional computer code developed for predicting the flowfield in stationary and rotating turbomachinery blade rows is described in this study. A four-stage Runge-Kutta numerical integration scheme is used for solving the governing flow equations and yields solutions to the full, three-dimensional, unsteady Euler equations in cylindrical coordinates. This method is fully explicit and uses the finite volume, time-marching procedure. In order to demonstrate the accuracy and efficiency of the code, steady solutions were obtained for several cascade geometries under widely varying flow conditions. Computed flowfield results are presented for a fully subsonic turbine stator and a low-aspect-ratio, transonic compressor rotor blade under maximum flow and peak efficiency design conditions. Comparisons with laser anemometer measurements and other numerical predictions are also provided to illustrate that the present method predicts important flow features with good accuracy and can be used for cost-effective aerodynamic design studies.
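Four-stage Runge-Kutta time marching of this kind is often implemented in the low-storage Jameson form, where every stage restarts from the time-level solution. A one-equation sketch under that assumption (this is a generic form, not necessarily the exact scheme of the code described):

```python
import math

def rk4_low_storage(u, dt, residual, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """One low-storage multistage Runge-Kutta step for u' = residual(u).

    Each stage restarts from the time-level solution u^n, the Jameson-style
    form commonly used in explicit Euler/Navier-Stokes codes. For a linear
    residual these coefficients reproduce classical 4th-order accuracy.
    """
    u0 = u
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# Model problem u' = -u: after t = 1 the solution should be close to exp(-1).
u, dt = 1.0, 0.01
for _ in range(100):
    u = rk4_low_storage(u, dt, lambda x: -x)
```

The attraction for flow solvers is that only two solution arrays (u^n and the current stage) need to be stored, regardless of the number of stages.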
Chen, Y. S.
1986-03-01
In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
Three-dimensional multigrid Navier-Stokes computations for turbomachinery applications
Subramanian, S. V.
1989-01-01
The fully three-dimensional, time-dependent compressible Navier-Stokes equations in cylindrical coordinates are presently used, in conjunction with the multistage Runge-Kutta numerical integration scheme for solution of the governing flow equations, to simulate complex flowfields within turbomachinery components whose pertinent effects encompass those of viscosity, compressibility, blade rotation, and tip clearance. Computed results are presented for selected cascades, emphasizing the code's capabilities in the accurate prediction of such features as airfoil loadings, exit flow angles, shocks, and secondary flows. Computations for several test cases have been performed on a Cray-YMP, using nearly 90,000 grid points.
A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium
Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand
2014-05-01
The atmospheric radiation field has seen the development of more accurate and faster methods to take absorption in participating media into account. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a method accurate enough to compute cooling rates. Thanks to high-performance computing, a multi-spectral approach to solving the Radiative Transfer Equation (RTE) is most often used. Nevertheless, the coupling of three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) on Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating the emissivity functions to the linear absorption coefficient. In the case of the cooling-to-space approximation, this analytical expression gives very accurate results compared to the correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm. One-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method. Then, the three-dimensional RTE under the grey-medium assumption is solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature, and gas concentrations.
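The central trick, replacing spectral integration by a single local linear (grey) absorption coefficient consistent with a broadband emissivity, amounts to inverting the grey-gas relation eps = 1 - exp(-k L). A sketch with a made-up emissivity value (Sasamori's actual fitted emissivity functions are not reproduced here):

```python
import math

def grey_absorption_coefficient(emissivity, path_length_m):
    """Invert the grey-gas relation eps = 1 - exp(-k * L) for k (1/m).

    Given the broadband emissivity of a homogeneous path of length L,
    return the linear absorption coefficient of the equivalent grey medium.
    """
    return -math.log(1.0 - emissivity) / path_length_m

# A 100 m path with an (assumed) broadband emissivity of 0.2:
k = grey_absorption_coefficient(0.2, 100.0)
# Consistency check: the grey medium reproduces the original emissivity.
eps_back = 1.0 - math.exp(-k * 100.0)
```

Once k is known locally, a standard grey-medium DOM solver can be applied in 3-D without any loop over spectral bands, which is where the speed-up comes from.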
Snyder, Abigail C. [University of Pittsburgh; Jiao, Yu [ORNL
2010-10-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6 to 10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all the data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise in modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
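Composing one-dimensional rules into a four-dimensional integrator is a tensor-product nesting, sketched below with NumPy's Gauss-Legendre nodes as a generic stand-in for the modified GSL solvers described in the abstract:

```python
import numpy as np
from itertools import product

def integrate_4d(f, a, b, n=8):
    """Integrate f(x0, x1, x2, x3) over the box [a, b]^4 by applying a
    one-dimensional n-point Gauss-Legendre rule along each axis in turn
    (a tensor-product composition of 1-D solvers)."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    w = 0.5 * (b - a) * w                       # rescale weights
    return sum(w[i] * w[j] * w[k] * w[l] * f(x[i], x[j], x[k], x[l])
               for i, j, k, l in product(range(n), repeat=4))

# Separable test integrand: the integral of exp(x0+x1+x2+x3) over [0, 1]^4
# has the closed form (e - 1)^4.
val = integrate_4d(lambda x0, x1, x2, x3: np.exp(x0 + x1 + x2 + x3), 0.0, 1.0)
```

The n^4 cost of the nesting is exactly what motivates the comparison against quasi-Monte Carlo methods and the parallelization mentioned above.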
Pelo, Sandro; Correra, Pasquale; Danza, Francesco Maria; Amenta, Alessia; Gasparini, Giulio; Marianetti, Tito M; Moro, Alessandro
2012-07-01
Since the introduction of Roentgen rays into medical diagnostics, mummies have been subjected to radiographic and cephalometric studies. These have, among other advantages, that of providing details that are not directly visible on inspection, without the need to tamper with the relics. The advent of three-dimensional imaging techniques has further extended the possibilities for noninvasive investigation, so that many famous mummies, such as those of Tutankhamun and Ramses II, have undergone three-dimensional computed tomography (CT). A CT scan was performed on the Egyptian mummy of a 20- to 30-year-old woman found in Fayum and dating from the second century B.C. The DICOM data of the CT scan were processed with software for three-dimensional CT image processing. The purpose of this report was to present the somatic and skeletal characteristics of the mummy. Thanks to the image processing, a "virtual reconstruction" of the original facial features of the mummy was obtained, and a reliable cephalometric tracing could be performed. The data derived from the cephalometric tracings were similar to those published in other studies on a group of Egyptian mummies and on a sample of Iowa adult males. In our opinion, three-dimensional image processing of CT scans is useful for performing noninvasive morphologic investigations of archaeological finds, allowing virtual correction of postmortem artifacts and reliable cephalometric tracings.
User’s Guide: Computer Program for Three-Dimensional Analysis of Building Systems (CTABS80).
1981-08-01
CTABS80 is an enhanced version of TABS and is intended to supersede other enhanced versions such as XTABS and TABS77. The related computer program ETABS ("ETABS, Three Dimensional Analysis of Building Systems," by Wilson, Hollings, and Dovey) was released in 1975.
Computed tomography of Crohn’s disease: The role of three dimensional technique
Raman, Siva P.; Horton, Karen M.; Fishman, Elliot K.
2013-01-01
Crohn’s disease, a transmural inflammatory bowel disease, remains a difficult entity to diagnose clinically. Over the last decade, multidetector computed tomography (CT) has become the method of choice for non-invasive evaluation of the small bowel, and has proved to be of significant value in the diagnosis of Crohn’s disease. Advancements in CT enterography protocol design, three-dimensional (3-D) post-processing software, and CT scanner technology have allowed increasing accuracy in diagnosis.
Decker, Arthur J.; Izen, Steven H.
1992-01-01
A theory to determine the properties of a fluid from measurements of its projections was developed and tested. Viewing cones as small as 10 degrees were evaluated, with the only assumption being that the property was space limited. The results of applying the theory to numerical and actual interferograms of a spherical discontinuity of refractive index are presented. The theory was developed to test the practicality and limits of using three dimensional computer tomography in internal fluid dynamics.
Banks, H. T.; Smith, Ralph C.; Wang, Yun
1994-01-01
Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.
Adly, A. A.; Hanafy, H. H.
2009-04-01
It is well known that transformer inrush currents depend upon the core properties, residual flux, switching instant, and the overall circuit parameters. Large transient inrush currents introduce abnormal electromagnetic forces which may destroy the transformer windings. This paper presents an approach through which core hysteresis may be incorporated in three-dimensional computations of transformer inrush current forces. Details of the approach, measurements, and simulations for a shell-type transformer are given in the paper.
Alexey V. Silin
2013-09-01
Three-dimensional computed tomography (CT) is used to analyze the topography of the root canal anatomy in order to choose a root canal preparation method. There is also a method that makes extracted teeth transparent while maintaining their anatomical shape and size. It is important to compare the accuracy of the two visualization methods. Objective: To compare how accurately root canal anatomy is conveyed by two visualization methods, three-dimensional CT and transparent tooth preparation. Methods: A mandibular third molar was used as the sample. A three-dimensional CT scan was performed before extracting the tooth. The tooth was then extracted and placed in solutions that made it transparent. Results: Although characteristics such as the crown angle, the crown curvature, and the root deviation were shown by three-dimensional CT, the other anatomical parameters of the tooth were equally well detected using the transparent tooth preparation. Overall, the curvature of the root canals is shown more clearly by the transparent tooth preparation. Conclusion: Transparent tooth preparation provides superior visualization of the real root canal anatomy compared to three-dimensional CT; however, the latter can give more information about the position of the tooth within the jaw. DOI: 10.14693/jdi.v20i2.152
Cheng, J Y; Chahine, G L
2001-12-01
The slender body theory, lifting surface theories, and more recently panel methods and Navier-Stokes solvers have been used to study the hydrodynamics of fish swimming. This paper presents progress on swimming hydrodynamics using a boundary integral equation method (or boundary element method) based on a potential flow model. The unsteady three-dimensional BEM code 3DynaFS that we developed and used is able to model realistic body geometries, arbitrary movements, and the resulting wake evolution. The pressure distribution over the body surface, the vorticity in the wake, and the velocity field around the body can be computed. The structure and dynamic behavior of the vortex wakes generated by the swimming body are responsible for the underlying fluid dynamic mechanisms that realize high-efficiency propulsion and high-agility maneuvering. Three-dimensional vortex wake structures are not well known, although two-dimensional structures termed 'reverse Karman Vortex Street' have been observed and studied. In this paper, simulations of a swimming saithe (Pollachius virens) using our BEM code have demonstrated that undulatory swimming reduces three-dimensional effects due to a substantially weakened tail tip vortex, resulting in a reverse Karman Vortex Street as the major flow pattern in the three-dimensional wake of an undulating swimming fish.
Cline, M.C.
1981-08-01
VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
Virtual Bernese osteotomy using three-dimensional computed tomography in hip dysplasia.
Suh, Dong Hun; Lee, Dae Hee; Jeong, Woong Kyo; Park, Sang Won; Kang, Chang Ho; Lee, Soon Hyuck
2012-04-01
Accurate assessment of acetabular morphology and its relationship to the femoral head is essential for planning a periacetabular osteotomy. We observed the acetabular coverage after virtual Bernese osteotomy using computer-aided technique. Three-dimensional computed tomography of 18 normal hips and 3 symptomatic dysplastic hips were analyzed. Through the center of the femoral head, vertical images were obtained at 10° intervals from 0° to 180° of rotation, using multiplanar reformation technique. Subsequently we measured 19 center-edge angles (CEAs) from each acetabulum. Four types of virtual osteotomy were performed on the three dysplastic hips. The adequacy of acetabular coverage after osteotomy was determined by comparing CEAs after correction with normal CEAs. Pearson correlation coefficients between the CEAs measured from normal cases and postoperative cases after lateral rotation of osteotomized fragments were 0.906 in case 1, 0.975 in case 2, 0.976 in case 3. Additional anterior rotation increased anterior acetabular coverage and simultaneously decreased posterior coverage in all three cases. Computer-aided virtual surgery technique based on three-dimensional computed tomography information enabled acetabular coverage to be quantified preoperatively in Bernese osteotomy. Lateral rotation of osteotomized acetabular fragments improved anterior and posterior coverage as well as lateral coverage.
Fast Multiscale Reservoir Simulations using POD-DEIM Model Reduction
Ghasemi, Mohammadreza
2015-02-23
In this paper, we present a global-local model reduction for fast multiscale reservoir simulations in highly heterogeneous porous media, with applications to optimization and history matching. Our proposed approach identifies a low-dimensional structure of the solution space. We introduce an auxiliary variable (the velocity field) in our model reduction that allows achieving a high degree of model reduction. The latter is due to the fact that the velocity field is conservative for any low-order reduced model in our framework. By contrast, a typical global model reduction based on POD is a Galerkin finite element method and thus cannot guarantee local mass conservation. This can be observed in numerical simulations that use finite-volume-based approaches. The Discrete Empirical Interpolation Method (DEIM) is used to approximate the nonlinear functions of fine-grid functions in Newton iterations. This approach allows achieving a computational cost that is independent of the fine-grid dimension. POD snapshots are inexpensively computed using local model reduction techniques based on the Generalized Multiscale Finite Element Method (GMsFEM), which provides (1) a hierarchical approximation of snapshot vectors, (2) adaptive computations using coarse grids, and (3) inexpensive global POD operations in a small-dimensional space on a coarse grid. By balancing the errors of the global and local reduced-order models, our new methodology can provide an error bound in simulations. Our numerical results, utilizing a two-phase immiscible flow, show a substantial speed-up, and we compare our results to the standard POD-DEIM in a finite volume setup.
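The global POD step, extracting a low-dimensional basis from solution snapshots via an SVD and truncating by an energy criterion, can be sketched as follows. The snapshot data here are a toy two-mode field; the GMsFEM-based snapshot generation and the DEIM treatment of nonlinearities are omitted:

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD basis from a snapshot matrix (each column one solution snapshot).

    Keeps the smallest number of left singular vectors that capture the
    requested fraction of snapshot energy (sum of squared singular values).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r]

# Toy snapshots: a field built from two spatial modes, sampled in time.
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 20)
S = np.array([np.sin(np.pi * x) * np.cos(ti)
              + 0.3 * np.sin(2 * np.pi * x) * ti
              for ti in t]).T                    # (50, 20) snapshot matrix
Phi = pod_basis(S)
# Projection onto the truncated POD basis reconstructs the snapshots.
err = np.linalg.norm(S - Phi @ (Phi.T @ S)) / np.linalg.norm(S)
```

Because the toy field spans only two spatial modes, the energy criterion selects a rank-2 basis and the projection error is at round-off level.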
The Quantum Socket: Three-Dimensional Wiring for Extensible Quantum Computing
Béjanin, J H; Rinehart, J R; Earnest, C T; McRae, C R H; Shiri, D; Bateman, J D; Rohanizadegan, Y; Penava, B; Breul, P; Royak, S; Zapatka, M; Fowler, A G; Mariantoni, M
2016-01-01
Quantum computing architectures are on the verge of scalability, a key requirement for the implementation of a universal quantum computer. The next stage in this quest is the realization of quantum error correction codes, which will mitigate the impact of faulty quantum information on a quantum computer. Architectures with ten or more quantum bits (qubits) have been realized using trapped ions and superconducting circuits. While these implementations are potentially scalable, true scalability will require systems engineering to combine quantum and classical hardware. One technology demanding imminent efforts is the realization of a suitable wiring method for the control and measurement of a large number of qubits. In this work, we introduce an interconnect solution for solid-state qubits: The quantum socket. The quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two-dimensional methods based on wire bonding. The quantum ...
Malkov, Ewgenij A.; Poleshkin, Sergey O.; Kudryavtsev, Alexey N.; Shershnev, Anton A.
2016-10-01
The paper presents the software implementation of the Boltzmann equation solver based on the deterministic finite-difference method. The solver allows one to carry out parallel computations of rarefied flows on a hybrid computational cluster with arbitrary number of central processor units (CPU) and graphical processor units (GPU). Employment of GPUs leads to a significant acceleration of the computations, which enables us to simulate two-dimensional flows with high resolution in a reasonable time. The developed numerical code was validated by comparing the obtained solutions with the Direct Simulation Monte Carlo (DSMC) data. For this purpose the supersonic flow past a flat plate at zero angle of attack is used as a test case.
Three-dimensional flow past rotating wing at low Reynolds number: a computational study
Ruifeng, Hu, E-mail: rfhu@xidian.edu.cn [School of Mechano-Electronic Engineering, Xidian University, Xi’an 710071 (China)
2015-08-15
In this work, we performed a computational study on the three-dimensional (3D) flow past a rotating wing at a low Reynolds number (Re = 200). The 3D vortical structures and aerodynamic performances of the rotating wing with different aspect ratios and rotating speeds are computed and analyzed. A quasi-steady model is adopted for prediction of aerodynamic performances of the wing, and its applicability is evaluated by the computation. It is found that there exists a periodic vortex shedding pattern at a low rotating speed, while vortices may cluster near the wing when rotating speed is high enough. The wake vortex topology is also affected by the aspect ratio. The current quasi-steady aerodynamic model could only be used for rotating wing aerodynamics at a low rotating speed when regularly periodic vortex shedding exists.
Genton, Marc G.
2017-09-07
We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low-rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of the off-diagonal matrix blocks and m is the size of the small diagonal blocks in the matrix that are not well approximated by low-rank factorizations and are treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.
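The record above accelerates the per-sample cost of Monte Carlo multivariate normal probabilities. As context, a minimal sketch of the plain dense baseline it improves on (sampling through a Cholesky factor, O(n²) per sample) could look like the following; the function name and sample count are illustrative, and the hierarchical low-rank machinery itself is not reproduced here.

```python
import numpy as np

def mvn_prob_mc(cov, b, n_samples=200_000, seed=0):
    """Plain Monte Carlo estimate of P(X <= b) for X ~ N(0, cov).

    Each sample costs O(n^2) through the dense Cholesky factor -- the
    baseline that the hierarchical low-rank scheme accelerates.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)              # cov = L @ L.T
    z = rng.standard_normal((n_samples, len(b)))
    x = z @ L.T                              # samples from N(0, cov)
    return np.mean(np.all(x <= b, axis=1))

# 2-D sanity check: independent standard normals with b = 0
# have probability 1/4 of both being negative.
p = mvn_prob_mc(np.eye(2), np.zeros(2))
```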
Examining Multiscale Movement Coordination in Collaborative Problem Solving
Wiltshire, Travis; Steffensen, Sune Vork
2017-01-01
During collaborative problem solving (CPS), coordination occurs at different spatial and temporal scales. This multiscale coordination should, at least on some scales, play a functional role in facilitating effective collaboration outcomes. To evaluate this, we conducted a study of computer...
Oikawa, Takaaki; Sonoda, Jun; Sato, Motoyuki; Honma, Noriyasu; Ikegawa, Yutaka
Analysis of lightning electromagnetic fields using the FDTD method has been studied in recent years. However, large-scale three-dimensional analysis on real terrain has not been considered, because the FDTD method has a huge computational cost for large-scale analysis. We have therefore proposed a three-dimensional moving window FDTD (MW-FDTD) method with parallel computation. Our method requires less computational cost than the conventional FDTD method and the original MW-FDTD method. In this paper, we study the computational performance of parallel MW-FDTD and the large-scale three-dimensional analysis of lightning electromagnetic fields on a real terrain model using our MW-FDTD with parallel computation.
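The moving-window idea described above can be illustrated in one dimension: propagate a pulse with the standard leapfrog update, and shift the stored grid whenever the pulse passes the window centre, so only a small region around the pulse is ever kept in memory. This is a toy sketch under arbitrary parameters, not the authors' three-dimensional parallel implementation.

```python
import numpy as np

def mw_fdtd_1d(n_cells=200, n_steps=400):
    """1-D toy of the moving-window FDTD idea: Yee leapfrog update,
    plus a one-cell window shift whenever the pulse peak passes the
    window centre. Source position and widths are illustrative."""
    ez = np.zeros(n_cells)      # electric field
    hy = np.zeros(n_cells)      # magnetic field
    shifts = 0                  # how far the window has moved
    for t in range(n_steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])        # Courant number 0.5
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        ez[20] += np.exp(-((t - 40) / 12.0) ** 2)  # soft Gaussian source
        if np.argmax(np.abs(ez)) > n_cells // 2:   # peak past centre:
            ez = np.roll(ez, -1); ez[-1] = 0.0     # move window right
            hy = np.roll(hy, -1); hy[-1] = 0.0
            shifts += 1
    return ez, shifts

ez, shifts = mw_fdtd_1d()
```

In the real method the payoff is that the window is much smaller than the full domain; here the shift counter simply confirms the window tracked the pulse.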
Eissing, Thomas; Kuepfer, Lars; Becker, Corina; Block, Michael; Coboeken, Katrin; Gaub, Thomas; Goerlitz, Linus; Jaeger, Juergen; Loosen, Roland; Ludewig, Bernd; Meyer, Michaela; Niederalt, Christoph; Sevestre, Michael; Siegmund, Hans-Ulrich; Solodenko, Juri; Thelen, Kirstin; Telle, Ulrich; Weiss, Wolfgang; Wendl, Thomas; Willmann, Stefan; Lippert, Joerg
2011-01-01
Today, in silico studies and trial simulations already complement experimental approaches in pharmaceutical R&D and have become indispensable tools for decision making and communication with regulatory agencies. While biology is multiscale by nature, project work, and software tools usually focus on isolated aspects of drug action, such as pharmacokinetics at the organism scale or pharmacodynamic interaction on the molecular level. We present a modeling and simulation software platform consisting of PK-Sim(®) and MoBi(®) capable of building and simulating models that integrate across biological scales. A prototypical multiscale model for the progression of a pancreatic tumor and its response to pharmacotherapy is constructed and virtual patients are treated with a prodrug activated by hepatic metabolization. Tumor growth is driven by signal transduction leading to cell cycle transition and proliferation. Free tumor concentrations of the active metabolite inhibit Raf kinase in the signaling cascade and thereby cell cycle progression. In a virtual clinical study, the individual therapeutic outcome of the chemotherapeutic intervention is simulated for a large population with heterogeneous genomic background. Thereby, the platform allows efficient model building and integration of biological knowledge and prior data from all biological scales. Experimental in vitro model systems can be linked with observations in animal experiments and clinical trials. The interplay between patients, diseases, and drugs and topics with high clinical relevance such as the role of pharmacogenomics, drug-drug, or drug-metabolite interactions can be addressed using this mechanistic, insight driven multiscale modeling approach.
High performance computing for three-dimensional agent-based molecular models.
Pérez-Rodríguez, G; Pérez-Pérez, M; Fdez-Riverola, F; Lourenço, A
2016-07-01
Agent-based simulations are increasingly popular in exploring and understanding cellular systems, but the natural complexity of these systems and the desire to grasp different modelling levels demand cost-effective simulation strategies and tools. In this context, the present paper introduces novel sequential and distributed approaches for the three-dimensional agent-based simulation of individual molecules in cellular events. These approaches are able to describe the dimensions and position of the molecules with high accuracy and thus, study the critical effect of spatial distribution on cellular events. Moreover, two of the approaches allow multi-thread high performance simulations, distributing the three-dimensional model in a platform independent and computationally efficient way. Evaluation addressed the reproduction of molecular scenarios and different scalability aspects of agent creation and agent interaction. The three approaches simulate common biophysical and biochemical laws faithfully. The distributed approaches show improved performance when dealing with large agent populations while the sequential approach is better suited for small to medium size agent populations. Overall, the main new contribution of the approaches is the ability to simulate three-dimensional agent-based models at the molecular level with reduced implementation effort and moderate-level computational capacity. Since these approaches have a generic design, they have the major potential of being used in any event-driven agent-based tool. Copyright © 2016 Elsevier Inc. All rights reserved.
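The record above simulates individual molecules as three-dimensional agents. A minimal sketch of that level of description, assuming arbitrary parameter values (not taken from the paper), is independent Brownian displacements in a box with reflecting walls plus a naive O(n²) proximity check standing in for interaction detection:

```python
import numpy as np

def simulate(n_agents=500, diffusion=1.0, dt=1e-3, box=1.0,
             radius=0.02, n_steps=100, seed=1):
    """Minimal 3-D molecular agent sketch: Brownian motion in a cube
    with reflecting walls; pairs closer than `radius` are counted as
    contacts. All parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, box, size=(n_agents, 3))
    sigma = np.sqrt(2.0 * diffusion * dt)            # per-axis step std
    contacts = 0
    for _ in range(n_steps):
        pos += rng.normal(scale=sigma, size=pos.shape)
        pos = np.where(pos < 0.0, -pos, pos)             # reflect at 0
        pos = np.where(pos > box, 2.0 * box - pos, pos)  # reflect at box
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        contacts += np.count_nonzero(np.triu(d < radius, k=1))
    return pos, contacts

pos, contacts = simulate()
```

The all-pairs distance computation is the part the paper's distributed approaches are designed to scale past; spatial partitioning would replace it for large agent populations.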
Image analysis and superimposition of 3-dimensional cone-beam computed tomography models
Cevidanes, Lucia H. S.; Styner, Martin A.; Proffit, William R.
2013-01-01
Three-dimensional (3D) imaging techniques can provide valuable information to clinicians and researchers. But as we move from traditional 2-dimensional (2D) cephalometric analysis to new 3D techniques, it is often necessary to compare 2D with 3D data. Cone-beam computed tomography (CBCT) provides simulation tools that can help bridge the gap between image types. CBCT acquisitions can be made to simulate panoramic, lateral, and posteroanterior cephalometric radiographs so that they can be compared with preexisting cephalometric databases. Applications of 3D imaging in orthodontics include initial diagnosis and superimpositions for assessing growth, treatment changes, and stability. Three-dimensional CBCT images show dental root inclination and torque, impacted and supernumerary tooth positions, thickness and morphology of bone at sites of mini-implants for anchorage, and osteotomy sites in surgical planning. Findings such as resorption, hyperplastic growth, displacement, shape anomalies of mandibular condyles, and morphological differences between the right and left sides emphasize the diagnostic value of computed tomography acquisitions. Furthermore, relationships of soft tissues and the airway can be assessed in 3 dimensions. PMID:16679201
Tay, Shian-Chao; Berger, Richard A. [Mayo Clinic College of Medicine, Orthopedics Biomechanics Laboratory, Rochester, MN (United States); Primak, Andrew N.; Amrami, Kimberly K. [Mayo Clinic College of Medicine, Department of Radiology, Rochester, MN (United States); Fletcher, Joel G.; McCollough, Cynthia H. [Mayo Clinic College of Medicine, Department of Radiology, Rochester, MN (United States); Mayo Clinic College of Medicine, CT Innovation Center, Rochester, MN (United States); Schmidt, Bernhard [Siemens Medical Solutions, Forchheim (Germany)
2007-12-15
High-resolution real-time three-dimensional (3D) imaging of the moving wrist may provide novel insights into the pathophysiology of joint instability. The purpose of this work was to assess the feasibility of using retrospectively gated spiral computed tomography (CT) to perform four-dimensional (4D) imaging of the moving wrist joint. A cadaver forearm from below the elbow was mounted on a motion simulator which performed radioulnar deviation of the wrist at 30 cycles per minute. An electronic trigger from the simulator provided the "electrocardiogram" (ECG) signal required for gated reconstructions. Four-dimensional and 3D images were compared by a blinded observer for image quality and presence of artifacts. Image quality of 4D images was found to be excellent at the extremes of radial and ulnar deviation (end-motion phases). Some artifacts were seen in mid-motion phases. 4D CT musculoskeletal imaging is feasible. Four-dimensional CT may allow clinicians to assess functional (dynamic) instabilities of the wrist joint. (orig.)
The Proteus Navier-Stokes code. [two and three dimensional computational fluid dynamics
Towne, Charles E.; Schwab, John R.
1992-01-01
An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. Proteus solves the Reynolds-averaged, unsteady, compressible Navier-Stokes equations in strong conservation law form. Turbulence is modeled using a Baldwin-Lomax based algebraic eddy viscosity model. In addition, options are available to solve thin layer or Euler equations, and to eliminate the energy equation by assuming constant stagnation enthalpy. An extensive series of validation cases has been run, primarily using the two-dimensional planar/axisymmetric version of the code. Several flows with exact solutions were computed, such as: fully developed channel and pipe flow; Couette flow with and without pressure gradients; unsteady Couette flow formation; flow near a suddenly accelerated flat plate; flow between concentric rotating cylinders; and flow near a rotating disk. The two-dimensional version of the Proteus code has been released, and the three-dimensional code is scheduled for release in late 1991.
Multiscale phenomenology of the cosmic web
Aragón-Calvo, Miguel A.; van de Weygaert, Rien; Jones, Bernard J. T.
2010-01-01
We analyse the structure and connectivity of the distinct morphologies that define the cosmic web. With the help of our multiscale morphology filter (MMF), we dissect the matter distribution of a cosmological Lambda cold dark matter N-body computer simulation into cluster, filaments and walls. The M
Three-Dimensional Object Motion and Velocity Estimation Using a Single Computational RGB-D Camera
Seungwon Lee
2015-01-01
Full Text Available In this paper, a three-dimensional (3D object moving direction and velocity estimation method is presented using a dual off-axis color-filtered aperture (DCA-based computational camera. Conventional object tracking methods provided only two-dimensional (2D states of an object in the image for the target representation. The proposed method estimates depth information in the object region from a single DCA camera that transforms 2D spatial information into 3D model parameters of the object. We also present a calibration method of the DCA camera to estimate the entire set of camera parameters for a practical implementation. Experimental results show that the proposed DCA-based color and depth (RGB-D camera can calculate the 3D object moving direction and velocity of a randomly moving object in a single-camera framework.
Three-Dimensional Computed Tomography as a Method for Finding Die Attach Voids in Diodes
Brahm, E. N.; Rolin, T. D.
2010-01-01
NASA analyzes electrical, electronic, and electromechanical (EEE) parts used in space vehicles to understand failure modes of these components. The diode is an EEE part critical to NASA missions that can fail due to excessive voiding in the die attach. Metallography, one established method for studying the die attach, is a time-intensive, destructive, and equivocal process whereby mechanical grinding of the diodes is performed to reveal voiding in the die attach. Problems such as die attach pull-out tend to complicate results and can lead to erroneous conclusions. The objective of this study is to determine if three-dimensional computed tomography (3DCT), a nondestructive technique, is a viable alternative to metallography for detecting die attach voiding. The die attach voiding in two-dimensional planes created from 3DCT scans was compared to several physical cross sections of the same diode to determine if the 3DCT scan accurately recreates die attach volumetric variability.
Moon, Inkyu; Javidi, Bahram
2009-03-15
We present a statistical approach to recognize three-dimensional (3D) objects with a small number of photons captured by using integral imaging (II). For 3D recognition of the events, the photon-limited elemental image set of a 3D object is obtained using the II technique. A computational geometrical ray propagation algorithm and the parametric maximum likelihood estimator are applied to the photon-limited elemental image set to reconstruct the irradiance of the original 3D scene voxels. The sampling distributions for the statistical parameters of the reconstructed image are determined. Finally, hypothesis testing for the equality of the statistical parameters between reference and input data sets is performed for statistical classification of populations on the basis of sampling distribution information. It is shown that large data sets of photon-limited 3D images can be converted into sampling distributions with their own statistical parameters, resulting in a substantial data dimensionality reduction for processing.
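The statistical core of the record above is that photon-limited captures follow a Poisson model whose rate MLE is the per-pixel sample mean. A hedged sketch of just that model (the geometrical ray-propagation reconstruction from elemental images is not reproduced; the scene and photon budget are arbitrary):

```python
import numpy as np

def photon_limited(irradiance, n_photons, rng):
    """Simulate one photon-limited capture: expected counts are
    proportional to the normalized irradiance, realized as
    independent Poisson draws per pixel."""
    rate = n_photons * irradiance / irradiance.sum()
    return rng.poisson(rate)

def mle_irradiance(captures):
    """Parametric ML estimate of scene irradiance from repeated
    Poisson-limited captures: the per-pixel sample mean, which is
    the MLE of a Poisson rate."""
    return np.mean(captures, axis=0)

rng = np.random.default_rng(0)
scene = np.ones((8, 8)); scene[2:6, 2:6] = 5.0          # toy "object"
caps = np.stack([photon_limited(scene, 2000, rng) for _ in range(50)])
est = mle_irradiance(caps)
```

Hypothesis tests on the statistics of `est` against a reference scene would then play the role of the classification step in the paper.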
Meoli, Alessio; Cutrì, Elena; Krishnamurthy, Adarsh; Dubini, Gabriele; Migliavacca, Francesco; Hsia, Tain-Yen; Pennati, Giancarlo; Taylor, Andrew; Giardini, Alessandro; Khambadkone, Sachin; Schievano, Silvia; de Leval, Marc; Hsia, T-Y; Bove, Edward; Dorfman, Adam; Baker, G Hamilton; Hlavacek, Anthony; Migliavacca, Francesco; Pennati, Giancarlo; Dubini, Gabriele; Marsden, Alison; Feinstein, Jeffrey; Vignon-Clementel, Irene; Figliola, Richard; McGregor, John
2015-04-06
Complex congenital heart disease characterized by the underdevelopment of one ventricular chamber (single ventricle (SV) circulation) is normally treated with a three-stage surgical repair. This study aims at developing a multiscale computational framework able to couple a patient-specific three-dimensional finite-element model of the SV to a patient-specific lumped parameter (LP) model of the whole circulation, in a closed-loop fashion. A sequential approach was carried out: (i) cardiocirculatory parameters were estimated by using a fully LP model; (ii) ventricular material parameters and unloaded geometry were identified by means of the stand-alone, three-dimensional model of the SV; and (iii) the three-dimensional model of SV was coupled to the LP model of the circulation, thus closing the loop and creating a multiscale model. Once the patient-specific multiscale model was set using pre-operative clinical data, the virtual surgery was performed, and the post-operative conditions were simulated. This approach allows the analysis of local information on ventricular function as well as global parameters of the cardiovascular system. This methodology is generally applicable to patients suffering from SV disease for surgical planning at different stages of treatment. As an example, a clinical case from stage 1 to stage 2 is considered here.
Multiscale flat norm signatures for shapes and images
Sandine, Gary [Los Alamos National Laboratory; Morgan, Simon P [Los Alamos National Laboratory; Vixie, Kevin R [WASHINGTON STATE UNIV.; Clawson, Keth [WASHINGTON STATE UNIV.; Asaki, Thomas J [WASHINGTON STATE UNIV.; Price, Brandon [WALLA WALLA UNIV.
2009-01-01
In this paper we begin to explore the application of the multiscale flat norm introduced in Morgan and Vixie to shape and image analysis. In particular, we look at the use of the multiscale flat norm signature for the identification of shapes. After briefly reviewing the multiscale flat norm, the L¹TV functional and the relation between these two, we introduce multiscale signatures that naturally follow from the multiscale flat norm and its components. A numerical method based on the min-cut, max-flow graph-cut is briefly recalled. We suggest using L² minimization, rather than the usual Crofton's formula based approximation, for choosing the required weights. The resulting weights have the dual benefits of being analytically computable and of giving more accurate approximations to the anisotropic TV energy. Finally, we demonstrate the usefulness of the signatures on simple shape classification tasks.
Efficient algorithms for multiscale modeling in porous media
Wheeler, Mary F.
2010-09-26
We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.
Fingerprint liveness detection using multiscale difference co-occurrence matrix
Yuan, Chengsheng; Xia, Zhihua; Sun, Xingming; Sun, Decai; Lv, Rui
2016-06-01
Fingerprint identification systems have been widely applied in both civilian and governmental applications due to their satisfying performance. However, fingerprint identification systems can be easily cheated by the presentation of artificial fingerprints made from common materials, which reduces their reliability and misleads their decisions. In this work, we propose a software-based fingerprint liveness detection method based on a multiscale difference co-occurrence matrix (DCM). A multiscale wavelet transform is first applied to the original image. After this decomposition, DCMs are computed using the Laplacian operator; both horizontal and vertical difference co-occurrence matrices are constructed in our method. To reduce the dimensionality of the feature vectors, a truncation operation is introduced for the DCMs. The elements of the processed DCMs are then regarded as texture features of the original fingerprint images. Finally, the feature vectors are classified with a support vector machine classifier. The experimental results show that the performance of our method is very promising, achieving better classification accuracy for spoofed fingerprints than the best algorithms of LivDet2013 and LivDet2011.
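The feature described above — high-pass filtering, truncation, and counting adjacent value pairs — can be sketched as follows. This is an illustrative reading of a difference co-occurrence matrix, not the paper's exact pipeline: the Laplacian stencil, truncation bound T, and feature layout are assumptions.

```python
import numpy as np

def difference_cooccurrence(img, T=4):
    """Sketch of a DCM feature: 5-point Laplacian high-pass,
    responses truncated to [-T, T], then co-occurrence counts of
    horizontally and vertically adjacent truncated values."""
    f = img.astype(float)
    lap = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
           - 4.0 * f[1:-1, 1:-1])                      # interior Laplacian
    d = np.clip(np.rint(lap), -T, T).astype(int) + T   # values in 0..2T
    k = 2 * T + 1

    def cooc(a, b):
        m = np.zeros((k, k), dtype=int)
        np.add.at(m, (a.ravel(), b.ravel()), 1)        # count value pairs
        return m

    horiz = cooc(d[:, :-1], d[:, 1:])
    vert = cooc(d[:-1, :], d[1:, :])
    return np.concatenate([horiz.ravel(), vert.ravel()])

rng = np.random.default_rng(0)
feat = difference_cooccurrence(rng.integers(0, 256, size=(32, 32)))
```

The resulting fixed-length vector is what would be fed to the SVM classifier; the multiscale variant would repeat this on wavelet sub-bands.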
Three-dimensional force model of the low-back for simple computer programming.
Tracy, M F
1990-08-01
A three-dimensional static model is described to evaluate the forces on the low-back muscles and on the spine during manual handling tasks and other forceful activities. It is simple to use either with a calculator or programmed on a microcomputer, whilst being more accurate than existing simple models. Comparisons are made with a more sophisticated model that requires mathematical libraries and programming skills. As the predictions are similar, so is the area of validity: the proposed model's accuracy is good for light tasks but poorer for strenuous ones.
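The kind of calculator-friendly static balance the record refers to can be illustrated with a textbook-style sagittal-plane moment balance at the lumbar spine. This is NOT the paper's three-dimensional model; the segment mass, lever arms, and upright-posture simplification below are all illustrative assumptions.

```python
def lumbar_forces(load_kg, load_lever_m, torso_kg=35.0,
                  torso_lever_m=0.18, muscle_lever_m=0.05):
    """Textbook-style static balance at the low back (illustrative
    values only): the back-muscle force balances the external
    gravitational moment about the joint, and spinal compression is
    the muscle force plus the gravity loads, assuming an upright
    trunk so weights act along the spine axis."""
    g = 9.81                                                  # m/s^2
    moment = g * (load_kg * load_lever_m
                  + torso_kg * torso_lever_m)                 # N*m
    muscle = moment / muscle_lever_m                          # N
    compression = muscle + g * (load_kg + torso_kg)           # N
    return muscle, compression

# Holding 20 kg at 0.40 m in front of the spine:
m, c = lumbar_forces(load_kg=20.0, load_lever_m=0.40)
```

The short muscle lever arm is why even light loads produce kilonewton-scale compression, which is the quantity such models report.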
Yagüe-Fabra, J.A.; Ontiveros, S.; Jiménez, R.
2013-01-01
Many factors influence the measurement uncertainty when using computed tomography for dimensional metrology applications. One of the most critical steps is the surface extraction phase. An incorrect determination of the surface may significantly increase the measurement uncertainty. This paper presents an edge detection method for the surface extraction based on a 3D Canny algorithm with sub-voxel resolution. The advantages of this method are shown in comparison with the most commonly used technique nowadays, i.e. the local threshold definition. Both methods are applied to reference standards...
Three-dimensional computed tomography image based endovascular treatment for hepatic vein.
Ninomiya, Mizuki; Ikeda, Tetsuo; Shirabe, Ken; Kayashima, Hiroto; Harimoto, Norifumi; Iguchi, Tomohiro; Sugimachi, Keishi; Yamashita, Yo-Ichi; Ikegami, Toru; Saeki, Hiroshi; Oki, Eiji; Uchiyama, Hideaki; Yoshizumi, Tomoharu; Soejima, Yuji; Kawanaka, Hirofumi; Morita, Masaru; Maehara, Yoshihiko
2013-11-01
Along with the expansion of living donor liver transplantation, in which hepatic venous anastomosis is mandatory, the frequency of hepatic venous stenoses that need interventional treatment is increasing. Due to its anatomical features, there are several pitfalls in the process of endovascular intervention for the hepatic vein. Insufficient information about and around the hepatic vein may lead to misdiagnosis of the target lesion. Simulation using three-dimensional computed tomography images was useful in planning the direction of X-ray projection and, as a consequence, contributed to safe endovascular treatment of hepatic venous stenosis.
Computer simulation of phase separation and ordering processes in low-dimensional systems
Mouritsen, O.G.; Shah, P.J.; Vitting Andersen, J.
1991-01-01
...properties, and a possible universal classification of the late-stage dynamics. Evidence from kinetic lattice model calculations using computer-simulation techniques is presented in favor of a universal description of the dynamics in terms of algebraic growth laws with exponents which only depend on the nature of the conservation laws in effect. Atomic and molecular overlayers on solid surfaces and weakly-coupled atomic layers of certain three-dimensional crystals constitute a particularly suitable class of systems for studying fundamental aspects of ordering dynamics and phase separation in two...
Semidefinite characterization and computation of zero-dimensional real radical ideals
J. B. Lasserre; Laurent, Monique; Rostalski, P.
2008-01-01
For an ideal I⊆ℝ[x] given by a set of generators, a new semidefinite characterization of its real radical I(V ℝ(I)) is presented, provided it is zero-dimensional (even if I is not). Moreover, we propose an algorithm using numerical linear algebra and semidefinite optimization techniques, to compute all (finitely many) points of the real variety V ℝ(I) as well as a set of generators of the real radical ideal. The latter is obtained in the form of a border or Gröbner basis. The algorithm is bas...
Computer aided process of dimensional distortion determination of bounded plaster sandmix Part II
M. Pawlak
2010-01-01
Full Text Available A computer program allowing calculation of the dimensional changes of a mould made of a cristobalite-gypsum composition during its heat treatment and preparation for molten metal casting is presented in this paper. Using the presented software, the composition of the mixture and the casting temperature needed to obtain a cast of predetermined dimensions can be calculated. The basis for the program was the results of dilatometric tests of bounded plaster sandmix composed of the half-hydrate α-CaSO4·0,5H2O with various cristobalite ratios. Approximation was carried out in the temperature range 100-700°C.
The Center for Multiscale Plasma Dynamics
Kevrekidis, Yannis G
2015-01-20
This final report describes research performed at Princeton University, led by Professor Yannis G. Kevrekidis, over a period of six years (August 1, 2004 to July 31, 2010, including a one-year, no-cost extension) as part of the Center for Multiscale Plasma Dynamics led by the University of Maryland. The work resulted in the development and implementation of several multiscale algorithms based on the equation-free approach pioneered by the PI, including its applications in plasma dynamics problems. These algorithms include coarse projective integration and coarse stability/bifurcation computations. In the later stages of the work, new links were made between this multiscale, coarse-graining approach and advances in data mining/machine learning algorithms.
Yates, Leslie A.
1993-01-01
The construction of interferograms, schlieren, and shadowgraphs from computed flowfield solutions permits one-to-one comparisons of computed and experimental results. A method of constructing these images from both ideal- and real-gas, two- and three-dimensional computed flowfields is described. The computational grids can be structured or unstructured, and multiple grids are an option. Constructed images are shown for several types of computed flows including nozzle, wake, and reacting flows; comparisons to experimental images are also shown. In addition, the sensitivity of these images to errors in the flowfield solution is demonstrated, and the constructed images can be used to identify problem areas in the computations.
Computation and validation of two-dimensional PSF simulation based on physical optics
Tayabaly, K; Sironi, G; Canestrari, R; Lavagna, M; Pareschi, G
2016-01-01
The Point Spread Function (PSF) is a key figure of merit for specifying the angular resolution of optical systems and, as the demand for ever-higher angular resolution increases, the problem of surface finishing must be taken seriously even in optical telescopes. From the optical design of the instrument, reliable ray-tracing routines allow computing and displaying the PSF based on geometrical optics. However, such an approach does not directly account for the scattering caused by surface microroughness, which is interferential in nature. Although the scattering effect can be separately modeled, its inclusion in the ray-tracing routine requires assumptions that are difficult to verify. In that context, a purely physical-optics approach is more appropriate, as it remains valid regardless of the shape and size of the defects appearing on the optical surface. Such a computation, when performed in two dimensions, is memory- and time-consuming because it requires one to process a surface map wit...
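The physical-optics computation described above reduces, in the Fraunhofer regime, to taking the squared magnitude of the Fourier transform of the complex pupil function. A minimal sketch follows; the uncorrelated Gaussian phase screen standing in for microroughness is an illustrative assumption, not the paper's measured two-dimensional surface maps.

```python
import numpy as np

def psf_physical_optics(n=256, aperture_frac=0.25, rough_rms_rad=0.0, seed=0):
    """Fraunhofer-regime PSF by pure physical optics: PSF = |FFT of
    complex pupil|^2. Surface microroughness is modeled here as an
    uncorrelated Gaussian phase screen (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = (np.hypot(x, y) <= aperture_frac * n / 2).astype(complex)
    phase = rng.normal(scale=rough_rms_rad, size=pupil.shape)
    pupil *= np.exp(1j * phase)                  # roughness as phase error
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                       # normalize total energy

smooth = psf_physical_optics()                   # ideal diffraction PSF
rough = psf_physical_optics(rough_rms_rad=0.5)   # roughness scatters light
```

Because the transform acts on the full complex pupil, the scattering from the phase errors appears automatically — exactly the effect a geometrical ray trace cannot capture directly.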
Chen, L H; Chen, W H
1999-01-01
The purpose of this study was to use a 3-dimensional (3D) computer-aided design (CAD) simulation system to plan surgical procedures and predict postoperative changes in orthognathic surgery patients. A computer-generated imaging model was developed by combining a 3D reconstructed cephalometric skeletal image and a laser-scanned facial surface image. Moreover, postoperative data were studied and linked to the simulator model for programming and executing simulated surgical procedures. Interactive editing capabilities allow surgeons to operate CAD surgical simulation, and predicted results can be presented graphically and numerically. The results indicate that the integration of 3D images and CAD techniques has potential for simulating surgery and providing graphic information to patients in obtaining informed consent.
Airflow in a Multiscale Subject-Specific Breathing Human Lung Model
Choi, Jiwoong; Hoffman, Eric A; Tawhai, Merryn H; Lin, Ching-Long
2013-01-01
The airflow in a subject-specific breathing human lung is simulated with a multiscale computational fluid dynamics (CFD) lung model. The three-dimensional (3D) airway geometry beginning from the mouth to about 7 generations of airways is reconstructed from the multi-detector row computed tomography (MDCT) image at the total lung capacity (TLC). Along with the segmented lobe surfaces, we can build an anatomically-consistent one-dimensional (1D) airway tree spanning over more than 20 generations down to the terminal bronchioles, which is specific to the CT resolved airways and lobes (J Biomech 43(11): 2159-2163, 2010). We then register two lung images at TLC and the functional residual capacity (FRC) to specify subject-specific CFD flow boundary conditions and deform the airway surface mesh for a breathing lung simulation (J Comput Phys 244:168-192, 2013). The 1D airway tree bridges the 3D CT-resolved airways and the registration-derived regional ventilation in the lung parenchyma, thus a multiscale model. Larg...
Tao Sun
In vivo and in vitro studies give a paradoxical picture of the actions of the key regulatory factor TGF-beta1 in epidermal wound healing: it stimulates migration of keratinocytes but also inhibits their proliferation. To try to reconcile these into an easily visualized 3D model of wound healing amenable to experimentation by cell biologists, a multiscale model of the formation of a 3D skin epithelium was established with TGF-beta1 literature-derived rule sets and equations embedded within it. At the cellular level, an agent-based bottom-up model that focuses on individual interacting units (keratinocytes) was used. This was based on literature-derived rules governing keratinocyte behavior and keratinocyte/ECM interactions. The selection of these rule sets is described in detail in this paper. The agent-based model was then linked with a subcellular model of TGF-beta1 production and its action on keratinocytes, simulated with a complex pathway simulator. This multiscale model can be run at the cellular level only or at a combined cellular/subcellular level. It was then initially challenged (by wounding) to investigate the behavior of keratinocytes in wound healing at the cellular level. To investigate the possible actions of TGF-beta1, several hypotheses were then explored by deliberately manipulating some of these rule sets at subcellular levels. This exercise readily eliminated some hypotheses and identified a sequence of spatio-temporal actions of TGF-beta1 for normal successful wound healing in an easy-to-follow 3D model. We suggest this multiscale model offers a valuable, easy-to-visualize aid to our understanding of the actions of this key regulator in wound healing, and provides a model that can now be used to explore pathologies of wound healing.
Peridynamic Multiscale Finite Element Methods
Costa, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moore, Stan Gerald [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-12-01
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales, limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the
Distortion of three-dimensional computed tomography. Effect of the reconstruction function
Tachibana, Masayuki; Senjyu, Tatsunori; Baba, Hitoshi [Kyushu Univ., Fukuoka (Japan). Hospital
2001-09-01
Three-dimensional computed tomography is distorted in the Z-direction because of the partial volume effect. This distortion depends on slice width, helical scanning pitch, interpolation, reconstruction function, and reconstruction interval. We studied the effect of the reconstruction function on distortion by scanning high- and low-contrast phantoms in which an acrylic ball was set on expandable polystyrene in Japanese isinglass. We found that, if the acrylic ball was larger than the slice width, a three-dimensional image could be reconstructed without distortion by setting the threshold between the acrylic ball's CT numbers and the CT numbers of the acrylic ball's surroundings (air or Japanese isinglass). The image was not dependent on the reconstruction function. However, the three-dimensional size of the acrylic ball was smaller than the ball's actual size. Further, with the reconstruction function in which the edge was emphasized, distortion and the change caused by the threshold were larger than with the normal reconstruction function. This was because the style of the profile curve on the axial-transverse direction (X-Y direction) was affected by emphasizing the edge. (author)
High Performance Computing of Three-Dimensional Finite Element Codes on a 64-bit Machine
M.P Raju
2012-01-01
Three-dimensional Navier-Stokes finite element formulations require huge computational power in terms of memory and CPU time. Recent developments in sparse direct solvers have significantly reduced the memory and computational time of direct solution methods. The objective of this study is twofold. The first is to evaluate the performance of various state-of-the-art sequential sparse direct solvers in the context of finite element formulation of fluid flow problems. The second is to examine the merit in upgrading from a 32-bit machine to a 64-bit machine with larger RAM capacity in terms of its capacity to solve larger problems. The choice of a direct solver is dependent on its computational time and its in-core memory requirements. Here four different solvers, UMFPACK, MUMPS, HSL_MA78 and PARDISO, are compared. The performances of these solvers with respect to computational time and memory requirements on a 64-bit Windows server machine with 16 GB RAM are evaluated.
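Production sparse direct solvers such as UMFPACK, MUMPS, HSL_MA78 and PARDISO perform a general sparse LU factorization followed by triangular substitution. The same factor-then-substitute structure can be shown in miniature with the Thomas algorithm for a tridiagonal system; this toy sketch (function name and test system are illustrative assumptions, not how the compared solvers work) only conveys the two-phase shape of a direct method.

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system by LU-style forward elimination and
    back substitution (the Thomas algorithm), in O(n) time and memory."""
    n = len(diag)
    c = np.zeros(n)  # modified superdiagonal (factorization phase)
    d = np.zeros(n)  # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i - 1] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / denom
    x = np.zeros(n)  # back-substitution phase
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# 1D Poisson-like system: -x[i-1] + 2*x[i] - x[i+1] = 1.
n = 1000
x = thomas_solve(-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1), np.ones(n))
```

For general sparse matrices the factorization phase creates fill-in, which is exactly the in-core memory pressure the paper's 32-bit versus 64-bit comparison is about.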
Brennan, Douglas [University of Colorado School of Medicine, Aurora, Colorado (United States); Schubert, Leah; Diot, Quentin [Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado (United States); Castillo, Richard [Department of Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas (United States); Castillo, Edward; Guerrero, Thomas [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, Michigan (United States); Martel, Mary K. [Department of Radiation Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas (United States); Linderman, Derek; Gaspar, Laurie E.; Miften, Moyed; Kavanagh, Brian D. [Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado (United States); Vinogradskiy, Yevgeniy, E-mail: yevgeniy.vinogradskiy@ucdenver.edu [Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado (United States)
2015-06-01
Purpose: A new form of functional imaging has been proposed in the form of 4-dimensional computed tomography (4DCT) ventilation. Because 4DCTs are acquired as part of routine care for lung cancer patients, calculating ventilation maps from 4DCTs provides spatial lung function information without added dosimetric or monetary cost to the patient. Before 4DCT-ventilation is implemented, it needs to be clinically validated. Pulmonary function tests (PFTs) provide a clinically established way of evaluating lung function. The purpose of our work was to perform a clinical validation by comparing 4DCT-ventilation metrics with PFT data. Methods and Materials: Ninety-eight lung cancer patients with pretreatment 4DCT and PFT data were included in the study. Pulmonary function test metrics used to diagnose obstructive lung disease were recorded: forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity. Four-dimensional CT data sets and spatial registration were used to compute 4DCT-ventilation images using a density change-based and a Jacobian-based model. The ventilation maps were reduced to single metrics intended to reflect the degree of ventilation obstruction. Specifically, we computed the coefficient of variation (SD/mean) and ventilation V20 (volume of lung with ≤20% ventilation), and correlated the ventilation metrics with PFT data. Regression analysis was used to determine whether 4DCT-ventilation data could predict normal versus abnormal lung function using PFT thresholds. Results: Correlation coefficients comparing 4DCT-ventilation with PFT data ranged from 0.63 to 0.72, with the best agreement between FEV1 and the coefficient of variation. Four-dimensional CT ventilation metrics were able to significantly delineate between clinically normal and abnormal PFT results. Conclusions: Validation of 4DCT-ventilation with clinically relevant metrics is essential. We demonstrate good global agreement between PFTs and 4DCT-ventilation, indicating that 4DCT
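The two scalar reductions named in the abstract can be sketched as follows. Interpreting V20 as the fraction of lung voxels at or below 20% of the mean ventilation is an assumption made for illustration (the paper's exact V20 definition may differ), and the ventilation map here is synthetic.

```python
import numpy as np

def ventilation_metrics(ventilation):
    """Reduce a ventilation map to two obstruction metrics:
    coefficient of variation (SD/mean) and a V20-style fraction
    of voxels at or below 20% of mean ventilation (an assumed definition)."""
    v = np.asarray(ventilation, dtype=float)
    cov = v.std() / v.mean()
    v20 = np.mean(v <= 0.20 * v.mean())
    return cov, v20

# Toy map: 900 well-ventilated voxels plus a 100-voxel obstructed region.
rng = np.random.default_rng(0)
vent = np.concatenate([rng.uniform(0.8, 1.2, 900), rng.uniform(0.0, 0.1, 100)])
cov, v20 = ventilation_metrics(vent)
```

More obstruction widens the spread of voxel values, so both the coefficient of variation and the V20 fraction rise, which is why single numbers like these can be correlated against FEV1.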
Synchrotron X-ray computed laminography of the three-dimensional anatomy of tomato leaves.
Verboven, Pieter; Herremans, Els; Helfen, Lukas; Ho, Quang T; Abera, Metadel; Baumbach, Tilo; Wevers, Martine; Nicolaï, Bart M
2015-01-01
Synchrotron radiation computed laminography (SR-CL) is presented as an imaging method for analyzing the three-dimensional (3D) anatomy of leaves. The SR-CL method was used to provide 3D images of 1-mm² samples of intact leaves at a pixel resolution of 750 nm. The method allowed visualization and quantitative analysis of palisade and spongy mesophyll cells, and showed local venation patterns, aspects of xylem vascular structure and stomata. The method failed to image subcellular organelles such as chloroplasts. We constructed 3D computer models of leaves that can provide a basis for calculating gas exchange, light penetration and water and solute transport. The leaf anatomy of two different tomato genotypes grown in saturating light conditions was compared by 3D analysis. Differences were found in calculated values of tissue porosity, cell number density, cell area to volume ratio and cell volume and cell shape distributions of palisade and spongy cell layers. In contrast, the exposed cell area to leaf area ratio in mesophyll, a descriptor that correlates to the maximum rate of photosynthesis in saturated light conditions, was no different between spongy and palisade cells or between genotypes. The use of 3D image processing avoids many of the limitations of anatomical analysis with two-dimensional sections.
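Two of the descriptors reported above, tissue porosity and the cell surface-area-to-volume ratio, can be sketched from a labeled voxel volume. The voxel-face surface estimate and the label convention below are simplifying assumptions, not the paper's image-processing pipeline.

```python
import numpy as np

def porosity(labels, air_label=0):
    """Air-space fraction of the volume: voxels carrying the air label."""
    return np.mean(labels == air_label)

def surface_to_volume(mask, voxel_size=1.0):
    """Estimate surface area / volume of a binary cell mask by counting
    exposed voxel faces along each axis (a crude but simple estimator)."""
    faces = 0
    for axis in range(mask.ndim):
        # interior faces: label changes between neighboring voxels
        faces += np.count_nonzero(np.diff(mask.astype(np.int8), axis=axis))
        # faces on the two outer boundaries of the array
        faces += np.count_nonzero(np.take(mask, 0, axis=axis))
        faces += np.count_nonzero(np.take(mask, -1, axis=axis))
    area = faces * voxel_size ** 2
    volume = np.count_nonzero(mask) * voxel_size ** 3
    return area / volume

# Toy volume: a 10x10x10 solid cell block inside a 20x20x20 air cube.
vol = np.zeros((20, 20, 20), dtype=np.int8)
vol[5:15, 5:15, 5:15] = 1
```

For the toy cube the face count recovers the analytic values exactly (porosity 0.875, surface-to-volume 0.6); on real segmentations a marching-cubes surface would be less biased.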
Computer-Assisted Reconstruction and Motion Analysis of the Three-Dimensional Cell
David R. Soll
2003-01-01
Even though several microscopic techniques provide three-dimensional (3D) information on fixed and living cells, the perception persists that cells are two-dimensional (2D). Cells are, in fact, 3D, and their behavior, including the extension of pseudopods, includes an important 3D component. Although treating the cell as a 2D entity has proven effective in understanding how cells locomote, and in identifying defects in a variety of mutant and abnormal cells, there are cases in which 3D reconstruction and analysis are essential. Here, we describe advanced computer-assisted 3D reconstruction and motion analysis programs for both individual live, crawling cells and developing embryos. These systems (3D-DIAS, 3D-DIASemb) can be used to reconstruct and motion-analyze, at short time intervals, the nucleus and pseudopodia as well as the entire surface of a single migrating cell, or every cell and nucleus in a developing embryo. Because all images are converted to mathematical representations, a variety of motility and dynamic morphology parameters can be computed that have proven quite valuable in the identification of mutant behaviors. We also describe examples of mutant behaviors in Dictyostelium that were revealed through 3D analysis.
Superimposition of 3-dimensional cone-beam computed tomography models of growing patients
Cevidanes, Lucia H. C.; Heymann, Gavin; Cornelis, Marie A.; DeClerck, Hugo J.; Tulloch, J. F. Camilla
2009-01-01
Introduction The objective of this study was to evaluate a new method for superimposition of 3-dimensional (3D) models of growing subjects. Methods Cone-beam computed tomography scans were taken before and after Class III malocclusion orthopedic treatment with miniplates. Three observers independently constructed 18 3D virtual surface models from cone-beam computed tomography scans of 3 patients. Separate 3D models were constructed for soft-tissue, cranial base, maxillary, and mandibular surfaces. The anterior cranial fossa was used to register the 3D models of before and after treatment (about 1 year of follow-up). Results Three-dimensional overlays of superimposed models and 3D color-coded displacement maps allowed visual and quantitative assessment of growth and treatment changes. The range of interobserver errors for each anatomic region was 0.4 mm for the zygomatic process of maxilla, chin, condyles, posterior border of the rami, and lower border of the mandible, and 0.5 mm for the anterior maxilla soft-tissue upper lip. Conclusions Our results suggest that this method is a valid and reproducible assessment of treatment outcomes for growing subjects. This technique can be used to identify maxillary and mandibular positional changes and bone remodeling relative to the anterior cranial fossa. PMID:19577154
Olszewski, R; Zech, F; Cosnard, G; Nicolas, V; Macq, B; Reychler, H
2007-09-01
The development of three-dimensional (3D) cephalometric analysis is essential for the computer-assisted planning of orthognathic surgery. The aim of this study was to transform and adapt Delaire's two-dimensional cephalometric analysis into the third dimension; this transposition was then validated. The comparative advantage of using 3D computed tomography (CT) surface renderings over profile X-rays was analysed. Comparison was made of inter- and intra-observer reproducibility of the cephalometric measurements done on profile X-rays and on 3D CT surface renderings of the same 26 dry skulls. The accuracy was also tested of the measurements done on 3D CT surface renderings (ACRO 3D) in relation to those taken directly on dry skulls with the help of a 3D measuring instrument. Inter- and intra-observer reproducibility proved significantly superior with the 3D CT method. There were no significant differences in the accuracy of measurements between the ACRO 3D software and the 3D measuring instrument. The ACRO 3D software was confirmed as being a reliable tool for developing 3D CT cephalometric analyses. Further research may entail clinical validation of the 3D CT craniofacial cephalometric method of analysis.
Betin, A Yu; Bobrinev, V I; Verenikina, N M; Donchenko, S S; Odinokov, S B [Research Institute ' Radiotronics and Laser Engineering' , Bauman Moscow State Technical University, Moscow (Russian Federation); Evtikhiev, N N; Zlokazov, E Yu; Starikov, S N; Starikov, R S [National Reseach Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow (Russian Federation)
2015-08-31
A multiplex method of recording computer-synthesised one-dimensional Fourier holograms intended for holographic memory devices is proposed. The method potentially allows increasing the recording density in the previously proposed holographic memory system based on the computer synthesis and projection recording of data page holograms. (holographic memory)
Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D
2015-01-01
One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. As computational and characterization capabilities continue to improve
A spectral formalism for computing three-dimensional deformations due to surface loads. 1: Theory
Mitrovica, J. X.; Davis, J. L.; Shapiro, I. I.
1994-01-01
We outline a complete spectral formalism for computing high-spatial-resolution three-dimensional deformations arising from the surface mass loading of a spherically symmetric planet. The main advantages of the formalism are that all surface mass loads are always described using a consistent mathematical representation and that calculations of deformation fields for various spatial resolutions can be performed by simply altering the spherical harmonic degree truncation level of the procedure. The latter may be important when incorporating improved observational constraints on a particular surface mass load, when considering potential errors in the computed field associated with mass loading having a spatial scale unresolved by the observational constraints, or when treating a number of global surface mass loads constrained with different spatial resolutions. The advantages do not extend to traditional 'Green's function' approaches, which involve surface element discretizations of the global mass loads. Another advantage of the spectral formalism over the Green's function approach is that a posteriori analyses of the computed deformation fields are easily performed. In developing the spectral formalism, we consider specific cases where the Earth's mantle is assumed to respond as an elastic, slightly anelastic, or linear viscoelastic medium. In the case of an elastic or slightly anelastic mantle rheology the spectral response equations incorporate frequency-dependent Love numbers. The formalism can therefore be used, for example, to compute the potentially resonant deformational response associated with the free core nutation and Chandler wobble eigenfunctions. For completeness, the spectral response equations include both body forces, as arise from the gravitational attraction of the Sun and the Moon, and surface mass loads. In either case, and for both elastic and anelastic mantle rheologies, we outline a pseudo-spectral technique for computing the ocean
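The degree-wise structure of such a spectral formalism can be sketched for the elastic case: each spherical-harmonic coefficient of the surface load maps independently to a displacement coefficient through a load Love number, and spatial resolution is set by the truncation degree alone. The response relation below is the standard elastic surface-load formula; the Love-number values and load spectrum are placeholder assumptions, not an Earth model.

```python
import numpy as np

def radial_response_coeffs(load_coeffs, love_h, radius, gravity, G=6.674e-11):
    """Degree-wise radial displacement coefficients for a surface load:
    u_l = h'_l * (4*pi*G*R / ((2l+1)*g)) * sigma_l,
    i.e. each degree responds independently through its load Love number h'_l."""
    l = np.arange(len(load_coeffs))
    return love_h * 4.0 * np.pi * G * radius / ((2 * l + 1) * gravity) * load_coeffs

L = 64                      # truncation degree (the resolution knob)
sigma = np.ones(L + 1)      # toy load spectrum, kg/m^2 per degree (assumed)
h_love = -np.ones(L + 1)    # placeholder load Love numbers (subsidence sign)
u = radial_response_coeffs(sigma, h_love, radius=6.371e6, gravity=9.81)
```

Changing the truncation level only changes the length of the coefficient arrays, which is exactly the flexibility the abstract contrasts with surface-element Green's function discretizations.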
Sathya Kumar Devireddy
2014-01-01
Objective: The aim was to assess the accuracy of three-dimensional anatomical reductions achieved by the open method of treatment in cases of displaced unilateral mandibular subcondylar fractures using preoperative (pre-op) and postoperative (post-op) computed tomography (CT) scans. Materials and Methods: In this prospective study, 10 patients with unilateral subcondylar fractures confirmed by an orthopantomogram were included. A pre-op CT and a post-op CT taken 1 week after the surgical procedure were obtained in the axial, coronal and sagittal planes along with three-dimensional reconstruction. Standard anatomical parameters that undergo changes due to fractures of the mandibular condyle were measured in the pre- and post-op CT scans in three planes and statistically analysed for the accuracy of the reduction, comparing the following variables: (a) pre-op fractured and non-fractured side, (b) post-op fractured and non-fractured side, (c) pre-op fractured and post-op fractured side. P < 0.05 was considered significant. Results: Three-dimensional anatomical reduction was possible in 9 out of 10 cases (90%). The statistical analysis of each parameter across the three variables revealed (P < 0.05) a gross change in the dimensions of the parameters between the pre-op fractured and non-fractured sides. When these parameters were assessed in the post-op CT for the three variables, there was no statistical difference between the post-op fractured side and the non-fractured side. The same parameters analysed between the pre-op fractured and post-op fractured sides showed a significant statistical difference, suggesting a considerable change in the dimensions of the fractured side postoperatively. Conclusion: The statistical and clinical results in our study emphasised that it is possible to fix the condyle in its three-dimensional anatomical position with the open method of treatment and avoid post-op degenerative joint changes. CT is the ideal imaging tool and should be used on
1989-01-20
Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation: Final Report (SA/TR-2/89, A003). Cited references include: Aho, A., Hopcroft, J., Ullman, J., The Design and Analysis of Computer Algorithms, Addison-Wesley Publishing Company, 1974; Anderson, B., Moore, J., Optimal...
Kinoshita, Shunichi; Eder, Wolfgang; Woeger, Julia; Hohenegger, Johann; Briguglio, Antonino; Ferrandez-Canadell, Carles
2015-04-01
Symbiont-bearing larger benthic Foraminifera (LBF) are long-lived (at least 1 year) marine, single-celled organisms with complex calcium carbonate shells. Their morphology has been intensively studied since the middle of the nineteenth century. This has led to a broad spectrum of taxonomic results, important from biostratigraphy to the ecology of shallow-water tropical to warm-temperate marine palaeo-environments. However, traditional investigation methods required cutting or destroying specimens to analyse the taxonomically important inner structures. X-ray micro-computed tomography (microCT) is one of the newest techniques used in morphological studies. Its greatest advantage is the non-destructive acquisition of inner structures. Furthermore, ongoing improvements in the hardware and software of microCT scanners provide high-resolution, short-duration scans well suited to LBF. Three-dimensional imaging techniques allow each chamber to be selected and extracted, and its volume, surface and several form parameters used for morphometric analyses to be measured easily. Thus, 3-dimensional visualisation of LBF tests is a major step forward from traditional morphology based on 2-dimensional data. The quantification of chamber form is a great opportunity to tackle LBF structures, architectures and bauplan geometry. Micrometric digital resolution is the only way to resolve many controversies in the phylogeny and evolutionary trends of LBF. For the present study we used micro-computed tomography to investigate the chamber number of every specimen from a statistically representative part of each population to estimate population dynamics. Samples of living individuals were collected at monthly intervals from fixed locations. Specific preparation allows up to 35 specimens to be scanned per scan within 2 hours and a complete digital dataset to be obtained for each specimen of the population. MicroCT thus enables a fast and precise count of all chambers built by the foraminifer from its
Three-dimensional computed tomography analysis of non-osteoarthritic adult acetabular dysplasia
Ito, Hiroshi; Matsuno, Takeo; Hirayama, Teruhisa; Tanino, Hiromasa; Yamanaka, Yasuhiro [Asahikawa Medical College, Department of Orthopaedic Surgery, Asahikawa (Japan); Minami, Akio [Hokkaido University School of Medicine, Department of Orthopaedic Surgery, Sapporo (Japan)
2009-02-15
Few data exist on the original morphology of acetabular dysplasia obtained from patients without radiographically advanced osteoarthritic changes. The aim of this study was to investigate the distribution and degree of acetabular dysplasia in a large number of patients showing no advanced degenerative changes using three-dimensional computed tomography (3DCT). Eighty-four dysplastic hips in 55 consecutive patients were studied. All 84 hips were in pre- or early osteoarthritis without radiographic evidence of joint space narrowing, formation of osteophytes or cysts, or deformity of the femoral heads. The mean age at the time of CT scan was 35 years (range 15-64 years). 3D images were reconstructed and analyzed using recent computer imaging software (INTAGE Realia and Volume Player). Deficiency types and degrees of acetabular dysplasia were precisely evaluated using this software. The average Harris hip score at CT scanning was 82 points. Twenty-two hips (26%) were classified as anterior deficiency, 17 hips (20%) as posterior deficiency, and 45 hips (54%) as lateral deficiency. No significant difference was found in the Harris hip score among these groups. The analysis of various measurements indicated wide variations. There was a significant correlation between the Harris hip score and the acetabular coverage (p < 0.001). Our results indicated a wide variety of deficiency types and degrees of acetabular dysplasia. Hips with greater acetabular coverage tended to have a higher Harris hip score. (orig.)
Bryan, J.B.; Burton, D.E.; Cunningham, M.E.; Lettis, L.A. Jr.
1978-06-01
A computational approach used for subsurface explosion cratering was extended to hypervelocity impact cratering. Meteor (Barringer) Crater, Arizona, was selected for the first computer simulation because it is one of the most thoroughly studied craters. It is also an excellent example of a simple, bowl-shaped crater and is one of the youngest terrestrial impact craters. Initial conditions for this calculation included a meteorite impact velocity of 15 km/s, meteorite mass of 1.67 × 10^8 kg, with a corresponding kinetic energy of 1.88 × 10^16 J (4.5 megatons). A two-dimensional Eulerian finite difference code called SOIL was used for this simulation of a cylindrical iron projectile impacting at normal incidence into a limestone target. For this initial calculation, a Tillotson equation-of-state description for iron and limestone was used with no shear strength. Results obtained for this preliminary calculation of the formation of Meteor Crater are in good agreement with field measurements. A color movie based on this calculation was produced using computer-generated graphics. 19 figures, 5 tables, 63 references.
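The stated initial conditions are internally consistent and easy to check: kinetic energy E = mv²/2 for the quoted mass and velocity reproduces both the joule figure and the megaton equivalent (taking the common convention 1 Mt TNT = 4.184 × 10^15 J).

```python
# Check the stated impact energy: E = (1/2) m v^2.
mass = 1.67e8        # kg, quoted meteorite mass
velocity = 15.0e3    # m/s, quoted impact velocity
energy_j = 0.5 * mass * velocity ** 2       # ~1.88e16 J
energy_mt = energy_j / 4.184e15             # megatons of TNT, ~4.5 Mt
```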
Liu, P.; Zhang, Y.
2008-07-01
Accurately simulating secondary organic aerosols (SOA) in three-dimensional (3-D) air quality models is challenging due to the complexity of the physics and chemistry involved and the high computational demand required. A computationally efficient yet accurate SOA module is necessary in 3-D applications for long-term simulations and real-time air quality forecasting. A coupled gas and aerosol box model (i.e., 0-D CMAQ-MADRID 2) is used to optimize relevant processes in order to develop such an SOA module. Solving the partitioning equations for condensable volatile organic compounds (VOCs) and calculating their activity coefficients in the multicomponent mixtures are identified as the most computationally expensive processes. The two processes can be sped up by relaxing the error tolerance levels and reducing the maximum number of iterations of the numerical solver for the partitioning equations for organic species; conditionally activating organic-inorganic interactions; and parameterizing the calculation of activity coefficients for organic mixtures in the hydrophilic module. The optimal speed-up method can reduce the total CPU cost by up to a factor of 31.4 from the benchmark under rural conditions with 2 ppb isoprene and by factors of 10-71 under various test conditions with 2-10 ppb isoprene and >40% relative humidity, while maintaining ±15% deviation. These speed-up methods are applicable to other SOA modules that are based on partitioning theories.
Humeau-Heurtier, Anne; Baumert, Mathias; Mahé, Guillaume; Abraham, Pierre
2014-01-01
.... This is performed through the computation of their multiscale compression entropy. The results obtained with LSCI time series computed from different regions of interest (ROI) sizes are examined...
Three-Dimensional Wiring for Extensible Quantum Computing: The Quantum Socket
Béjanin, J. H.; McConkey, T. G.; Rinehart, J. R.; Earnest, C. T.; McRae, C. R. H.; Shiri, D.; Bateman, J. D.; Rohanizadegan, Y.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.; Mariantoni, M.
2016-10-01
Quantum computing architectures are on the verge of scalability, a key requirement for the implementation of a universal quantum computer. The next stage in this quest is the realization of quantum error-correction codes, which will mitigate the impact of faulty quantum information on a quantum computer. Architectures with ten or more quantum bits (qubits) have been realized using trapped ions and superconducting circuits. While these implementations are potentially scalable, true scalability will require systems engineering to combine quantum and classical hardware. One technology demanding imminent efforts is the realization of a suitable wiring method for the control and the measurement of a large number of qubits. In this work, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two-dimensional methods based on wire bonding. The quantum socket is based on spring-mounted microwires—the three-dimensional wires—that push directly on a microfabricated chip, making electrical contact. A small wire cross section (approximately 1 mm), nearly nonmagnetic components, and functionality at low temperatures make the quantum socket ideal for operating solid-state qubits. The wires have a coaxial geometry and operate over a frequency range from dc to 8 GHz, with a contact resistance of approximately 150 mΩ, an impedance mismatch of approximately 10 Ω, and minimal cross talk. As a proof of principle, we fabricate and use a quantum socket to measure high-quality superconducting resonators at a temperature of approximately 10 mK. Quantum error-correction codes such as the surface code will largely benefit from the quantum socket, which will make it possible to address qubits located on a two-dimensional lattice. The present implementation of the socket could be readily extended to accommodate a
Weed, Richard Allen; Sankar, L. N.
1994-01-01
An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research to develop procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
Lu, Wei; Song, Joo Hyun; Christensen, Gary E.; Parikh, Parag J.; Bradley, Jeffrey D.; Low, Daniel A.
2006-03-01
Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations, providing more accurate correspondence between two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivery.
Metz, P. D.
A FORTRAN computer program called GROCS (GRound Coupled Systems) has been developed to study 3-dimensional underground heat flow. Features include the use of up to 30 finite elements or blocks of Earth which interact via finite difference heat flow equations, and a subprogram which sets realistic time and depth dependent boundary conditions. No explicit consideration of moisture movement or freezing is given. GROCS has been used to model the thermal behavior of buried solar heat storage tanks (with and without insulation) and serpentine pipe fields for solar heat pump space conditioning systems. The program is available independently or in a form compatible with specially written TRNSYS component TYPE subroutines. The approach taken in the design of GROCS, the mathematics it contains, and the program architecture are described. Then, the operation of the stand-alone version is explained. Finally, the validity of GROCS is discussed.
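The finite-difference heat-flow idea at the core of GROCS can be sketched in miniature. The following is a hypothetical one-dimensional illustration (GROCS itself is a 3-dimensional FORTRAN code with time- and depth-dependent boundary conditions; the grid size, diffusivity, and boundary values below are invented for the example):

```python
def step_heat(T, alpha, dx, dt):
    """Advance temperatures one explicit finite-difference (FTCS) time step."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new  # end nodes keep their prescribed boundary temperatures

# Rod initially at 0 degrees with both ends held at 100 degrees.
T = [100.0] + [0.0] * 9 + [100.0]
for _ in range(500):
    T = step_heat(T, alpha=1e-6, dx=0.1, dt=1000.0)  # alpha*dt/dx**2 = 0.1
# The interior warms toward the boundary temperature, symmetrically.
```

The explicit scheme is only stable when alpha*dt/dx**2 <= 0.5; an implicit update (as used in many production codes) removes that restriction at the cost of a linear solve per step.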
Wee, Loo Kang
2012-01-01
We develop an Easy Java Simulation (EJS) model for students to experience the physics of idealized one-dimensional collision carts. The physics model is described and simulated by both continuous dynamics and discrete transitions during collision. In the field of designing computer simulations, we briefly discuss three pedagogical considerations: 1) a simulation world view consistent with pen-and-paper representations, 2) data tables, scientific graphs and symbolic mathematical representations for ease of data collection and multiple representational visualizations, and 3) games for simple concept testing that can further support learning. We also suggest complementing the simulation with a physical-world setup, while highlighting three advantages of real collision-cart equipment: the tacit 3D experience, random errors in measurement, and the conceptual significance of conservation of momentum applied just before and after the collision. General feedback from the students has been relatively positive,...
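The conservation laws that such a collision-cart simulation encodes for the discrete transition can be written down directly. This is the textbook one-dimensional perfectly elastic collision result, not the EJS model's actual code:

```python
def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D perfectly elastic collision,
    from conservation of momentum and of kinetic energy."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Equal masses simply exchange velocities:
print(elastic_collision(1.0, 2.0, 1.0, 0.0))  # -> (0.0, 2.0)
```

Comparing measured cart velocities just before and after impact against these formulas is exactly the "conceptual significance" exercise the abstract highlights.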
Hudritsch, W.W.; Smith, P.D.
1977-11-01
The one-dimensional computer program PADLOC is designed to analyze steady-state and time-dependent plateout of fission products in an arbitrary network of pipes. The problem solved is one of mass transport of impurities in a fluid, including the effects of sources in the fluid and in the plateout surfaces, convection along the flow paths, decay, adsorption on surfaces (plateout), and desorption from surfaces. These phenomena are governed by a system of coupled, nonlinear partial differential equations. The solution is achieved by (a) linearizing the equations about an approximate solution, employing a Newton Raphson iteration technique, (b) employing a finite difference solution method with an implicit time integration, and (c) employing a substructuring technique to logically organize the systems of equations for an arbitrary flow network.
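Step (a) of the PADLOC solution, linearizing the equations about an approximate solution with a Newton-Raphson iteration, can be illustrated on a toy two-equation nonlinear system (a generic sketch; PADLOC's actual equations are the coupled nonlinear plateout transport equations):

```python
def newton_raphson(F, J, x0, tol=1e-12, max_iter=50):
    """Solve the 2x2 nonlinear system F(x, y) = (0, 0) by repeatedly
    linearizing about the current iterate and solving J * delta = -F."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        a, b, c, d = J(x, y)          # Jacobian rows: (a, b) and (c, d)
        det = a * d - b * c
        # Cramer's rule for the 2x2 linear solve J * (dx, dy) = (-f1, -f2).
        x += (-f1 * d + f2 * b) / det
        y += (-f2 * a + f1 * c) / det
    return x, y

# Toy system: x^2 + y^2 = 4 and x = y, whose positive root is x = y = sqrt(2).
F = lambda x, y: (x * x + y * y - 4.0, x - y)
J = lambda x, y: (2 * x, 2 * y, 1.0, -1.0)
root = newton_raphson(F, J, (1.0, 0.5))
```

In a code like PADLOC the same idea applies to the large sparse system produced by the finite-difference discretization, with the linear solve organized by substructuring rather than Cramer's rule.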
A computational study of the electronic properties of one-dimensional armchair phosphorene nanotubes
Yu, Sheng; Zhu, Hao; Eshun, Kwesi; Arab, Abbas; Badwan, Ahmad; Li, Qiliang [Department of Electrical and Computer Engineering, George Mason University, Fairfax, Virginia 22033 (United States)]
2015-10-28
We have performed a comprehensive first-principles computational study of the electronic properties of one-dimensional phosphorene nanotubes (PNTs), and of the strain effect on the mechanical and electrical properties of PNTs, including the elastic modulus, energy band structure, and carrier effective mass. The study has demonstrated that armchair PNTs have semiconducting properties along the axial direction and that the carrier mobility can be significantly improved by compressive strain. The hole mobility increases from 40.7 cm²/V·s to 197.0 cm²/V·s as the compressive strain increases to −5% at room temperature. The investigation of the size effect on armchair PNTs indicated that the conductance increases significantly with increasing diameter. Overall, this study indicated that PNTs have very attractive electronic properties for future applications in nanomaterials and devices.
Kim, Ha Youn; Park, Sung Tae; Bae, Won Kyoung; Goo, Dong Erk [Dept. of Radiology, Soonchunhyang University Hospital, Seoul (Korea, Republic of)]
2014-12-15
We studied the influence of proximal geometry on the results of computational fluid dynamics (CFD). We made five models with different proximal geometries from the three-dimensional angiography of a 63-year-old woman with an intracranial aneurysm. CFD results were analyzed as peak systolic velocity (PSV) at the inlet and outlet, as well as the flow velocity profile at the level proximal to the internal carotid artery (ICA) aneurysm. The modified model of the cavernous segment with proximal tubing showed a faster PSV at the outlet than at the inlet. The PSVs at the outlets of the other models were slower than those at the inlets. The flow velocity profiles immediately proximal to the ICA aneurysm showed similar patterns in all models, suggesting that proximal vessel geometry can affect CFD results.
Simulation of radiation effects on three-dimensional computer optical memories
Moscovitch, M.; Emfietzoglou, D.
1997-01-01
A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure.
Value of three-dimensional computed tomography in screening cerebral aneurysms
Yamaguchi, Tamaki; Sugiura, Yusuke; Suzuki, Atsushi; Yamagata, Yoshitaka [Hyogo Medical Coll. (Japan)]
1997-10-01
We performed three-dimensional computed tomography (3D-CT) in 6 patients with cerebral aneurysms. Prior cerebral angiography had shown a total of 17 aneurysms. 3D-CT alone detected 10 cerebral aneurysms (59%). It was possible to identify aneurysms larger than 10 mm even when located near the circle of Willis. It was difficult to identify aneurysms smaller than 7 mm regardless of their location. 3D-CT was of limited value in detecting cerebral aneurysms, particularly those located near the circle of Willis with its complex vascular network. As cases of oculomotor palsy may be caused by lesions other than cerebral aneurysm, we advocate that 3D-CT be performed after magnetic resonance imaging (MRI) when screening cases of suspected cerebral aneurysm. (author)
Computer systems for three-dimensional diagnostic imaging: an examination of the state of the art.
Stytz, M R; Frieder, O
1991-01-01
This survey reviews three-dimensional (3D) medical imaging machines and 3D medical imaging operations. The survey is designed to provide a snapshot overview of the present state of computer architectures for 3D medical imaging. The basic volume manipulation, object segmentation, and graphics operations required of a 3D medical imaging machine are described and sample algorithms are presented. The architecture and 3D imaging algorithms employed in 11 machines which render medical images are assessed. The performance of the machines is compared across several dimensions, including image resolution, elapsed time to form an image, imaging algorithms employed in the machine, and the degree of parallelism employed in the architecture. The innovation in each machine, whether architectural or algorithmic, is described in detail. General trends for future developments in this field are delineated and an extensive bibliography is provided.
The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education
Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki
2012-01-01
Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759
Bontá, Hernán; Galli, Federico G; Caride, Facundo; Carranza, Nelson
2012-01-01
The aim of this study is to present a predictable method for evaluating dimensional changes in the alveolar ridge through cone beam computed tomography (CT). Twenty subjects with an indication for single-rooted tooth extraction were selected for this preliminary report, which is part of a larger ongoing investigation. After extraction, two CT scans were performed: the first within 24 hours post-extraction (TC1) and the second 6 months later (TC2). A radiographic guide with a radiopaque element placed along the tooth axis was developed to locate the same plane of reference in two different CT scans. For each patient, backtrack analysis was performed in order to establish the reproducibility error of a predetermined point in space between two CT scans. Briefly, an anatomical landmark was selected and its coordinates relative to the radiopaque marker were recorded. One week later, the coordinates were followed backwards in the same CT scan to obtain the position where the reference point should be located. A similar process was carried out between two different CT scans taken 6 months apart. The distance between the anatomical reference and the obtained point of position was calculated to establish the accuracy of the method. Additionally, a novel method for evaluating dimensional changes of the alveolus after tooth extraction is presented. The backtrack analysis determined an average within-examiner discrepancy between both measurements from the same CT scan of 0.19 mm (SD ± 0.05). With the method presented herein, a reference point in one CT scan can be accurately backtracked and located in a second CT scan taken six months later. Taken together, these results open the possibility of calculating dimensional changes that occur in the alveolar ridge over time, such as post-extraction alveolar resorption, or the bone volume gained after different augmentation procedures.
Duan, Wenbo; Kirby, Ray; Mudge, Peter; Gan, Tat-Hean
2016-12-01
Ultrasonic guided waves are often used in the detection of defects in oil and gas pipelines. It is common for these pipelines to be buried underground and this may restrict the length of the pipe that can be successfully tested. This is because acoustic energy travelling along the pipe walls may radiate out into the surrounding medium. Accordingly, it is important to develop a better understanding of the way in which elastic waves propagate along the walls of buried pipes, and so in this article a numerical model is developed that is suitable for computing the eigenmodes for uncoated and coated buried pipes. This is achieved by combining a one dimensional eigensolution based on the semi-analytic finite element (SAFE) method, with a perfectly matched layer (PML) for the infinite medium surrounding the pipe. This article also explores an alternative exponential complex coordinate stretching function for the PML in order to improve solution convergence. It is shown for buried pipelines that accurate solutions may be obtained over the entire frequency range typically used in long range ultrasonic testing (LRUT) using a PML layer with a thickness equal to the pipe wall thickness. This delivers a fast and computationally efficient method and it is shown for pipes buried in sand or soil that relevant eigenmodes can be computed and sorted in less than one second using relatively modest computer hardware. The method is also used to find eigenmodes for a buried pipe coated with the viscoelastic material bitumen. It was recently observed in the literature that a viscoelastic coating may effectively isolate particular eigenmodes so that energy does not radiate from these modes into the surrounding [elastic] medium. A similar effect is also observed in this article and it is shown that this occurs even for a relatively thin layer of bitumen, and when the shear impedance of the coating material is larger than that of the surrounding medium.
Ozgul Mehmet
2012-08-01
Background: Multidetector computed tomography (MDCT) provides guidance for primary screening of the central airways. The aim of our study was to assess the contribution of MDCT two-dimensional reconstruction to the management of patients with tracheobronchial stenosis prior to the procedure and during a short follow-up period of 3 months after the endobronchial treatment. Methods: This is a retrospective study with data collected from an electronic database and from the medical records. Patients evaluated with MDCT who had undergone a stenting procedure were included. A Philips RSGDT 07605 model MDCT was used (slice thickness, 3 mm; overlap, 1.5 mm; matrix, 512x512; mAs, 90; kV, 120). The diameters of the airways 10 mm proximal and 10 mm distal to the obstruction were measured, and the stent diameter (D) was determined as the average of D upper and D lower. Results: Fifty-six patients, 14 (25%) women and 42 (75%) men, mean age 55.3 ± 13.2 years (range: 16-79 years), were assessed by MDCT and then treated with placement of an endobronchial stent. A computed tomography review was made with a 6-detector Philips RSGDT 07605 multidetector computed tomography device. Endobronchial therapy was provided for the patients with endoluminal lesions. Stents were placed into the area of stenosis in patients with external compression after dilatation and debulking procedures had been carried out. In one patient the migration of a stent was detected during the follow-up period by using MDCT. Conclusions: MDCT helps to define stent size, length and type in patients who are suitable for endobronchial stenting. This is a non-invasive, reliable method that helps decisions about optimal stent size and position, thus reducing complications.
Computational Aerodynamic Analysis of Three-Dimensional Ice Shapes on a NACA 23012 Airfoil
Jun, GaRam; Oliden, Daniel; Potapczuk, Mark G.; Tsao, Jen-Ching
2014-01-01
The present study identifies a process for performing computational fluid dynamics calculations of the flow over full three-dimensional (3D) representations of complex ice shapes deposited on aircraft surfaces. Rime and glaze icing geometries formed on a NACA 23012 airfoil were obtained during testing in the NASA Glenn Research Center's Icing Research Tunnel (IRT). The ice shape geometries were scanned as a cloud of data points using a 3D laser scanner. The data point clouds were meshed using Geomagic software to create highly accurate models of the ice surface. The surface data was imported into Pointwise grid generation software to create the CFD surface and volume grids. It was determined that generating grids in Pointwise for complex 3D icing geometries was possible using various techniques that depended on the ice shape. Computations of the flow fields over these ice shapes were performed using the NASA National Combustion Code (NCC). Results for a rime ice shape for angle of attack conditions ranging from 0 to 10 degrees and for freestream Mach numbers of 0.10 and 0.18 are presented. For validation of the computational results, comparisons were made to test results from rapid-prototype models of the selected ice accretion shapes, obtained from a separate study in a subsonic wind tunnel at the University of Illinois at Urbana-Champaign. The computational and experimental results were compared for values of pressure coefficient and lift. Initial results show fairly good agreement for rime ice accretion simulations across the range of conditions examined. The glaze ice results are promising but require some further examination.
Contrast-enhanced three-dimensional computed tomography of brain tumors
Kuroiwa, Toshihiko; Deguchi, Jun; Arai, Motohiro; Tanaka, Hideo; Ohta, Tomio; Narabayashi, Isamu [Osaka Medical Coll., Takatsuki (Japan)]
1997-12-01
To evaluate the usefulness of contrast-enhanced spiral (helical) scanning computed tomography (CT) in patients with various brain tumors, a nonionic contrast medium was injected intravenously in ten patients with meningioma, five with vestibular schwannoma, and five with pituitary adenoma. Images were taken by spiral scan at an X-ray beam width of 1 or 2 mm. The volume data obtained were combined at 0.5-1 mm intervals for the three-dimensional (3-D) image reconstruction, by the volume rendering method. Each image was separated by CT number into bone, blood vessel, contrast-enhanced tumor, and cerebral parenchyma. In some subjects, a pair of images was reconstructed to allow stereoscopic viewing at a parallax angle of 6 degrees. The three-dimensional relationship between tumors and other structures was easily understood, permitting pre-operative prediction of the operative field and also a view of the area after tumor excision. The present method surpassed conventional CT techniques in terms of clarity of the 3-D relationship, and surpassed MRI and MRA in terms of clarity of the relationship between the tumor and skull. These results confirm that this method appears to be applicable in routine clinical situations with minimal invasiveness, a high degree of safety, and short examination time. (author)
Computer-based training in two-dimensional echocardiography using an echocardiography simulator.
Weidenbach, Michael; Wild, Florentine; Scheer, Kathrin; Muth, Gerhard; Kreutter, Stefan; Grunst, Gernoth; Berlage, Thomas; Schneider, Peter
2005-04-01
Two-dimensional (2D) echocardiography is a user-dependent technique that poses some inherent problems to the beginner. The first problem for beginners is spatial orientation, especially the orientation of the scan plane in reference to the 3-dimensional (3D) geometry of the heart. The second problem for beginners is steering of the ultrasound probe. We have designed a simulator to teach these skills. On a computer screen a side-by-side presentation of a 3D virtual reality scene on the right side and a 2D echocardiographic view on the left side is given. The virtual scene consists of a 3D heart and an ultrasound probe with scan plane. The 2D echocardiographic image is calculated from 3D echocardiographic data sets that are registered with the heart model to achieve spatial and temporal congruency. The displayed 2D echocardiographic image is defined and controlled by the orientation of the virtual scan plane. To teach hand-eye coordination we equipped a dummy transducer with a 3D tracking system and placed it on a dummy torso. We have evaluated the usability of the simulator in an introductory course for final-year medical students. The simulator was graded realistic and easy to use. According to a subjective self-assessment by a standardized questionnaire the aforementioned skills were imparted effectively.
Lee, Seung Hun; Moon, Cheol Hyun; Im, Jeong Soo; Seo, Hwa Jeong [Graduate School of Public Health and Social Welfare, Gachon University of Medicine and Science, Incheon (Korea, Republic of)]
2010-06-15
This study aimed to evaluate the position of the mandibular foramen in mandibular prognathism patients using 3-dimensional CT images, in order to reduce the chance of anesthetic failure of the mandibular nerve and to prevent damage to the inferior alveolar nerve during orthognathic surgery. The control group consisted of 30 patients with class I occlusion. The experimental group consisted of 44 patients with class III malocclusion. Three-dimensional computed tomography was used to evaluate the position of the mandibular foramina. The distance between the mandibular plane and the mandibular foramen was 25.385 mm for class I and 23.628 mm for class III. The distance between the occlusal plane and the mandibular foramen was 1.478 mm for class I and 5.144 mm for class III. The distance between the posterior border plane of the mandibular ramus and the mandibular foramen showed no statistically significant difference, and neither did the distance between the sagittal plane of the mandible and the mandibular foramen. The results of this study could help clinicians apprehend more accurately the anatomical locations of the foramina on mandibles with various facial skeletal types, and thereby perform more accurate block anesthesia of the mandibular nerve and osteotomies with minimal nerve damage. In addition, this study could provide fundamental data for related research on the location of the mandibular foramina for other purposes.
Generalized multiscale finite element methods: Oversampling strategies
Efendiev, Yalchin R.
2014-01-01
In this paper, we propose oversampling strategies in the generalized multiscale finite element method (GMsFEM) framework. The GMsFEM, which has been recently introduced in Efendiev et al. (2013b) [Generalized Multiscale Finite Element Methods, J. Comput. Phys., vol. 251, pp. 116-135, 2013], allows solving multiscale parameter-dependent problems at a reduced computational cost by constructing a reduced-order representation of the solution on a coarse grid. The main idea of the method consists of (1) the construction of the snapshot space, (2) the construction of the offline space, and (3) the construction of the online space (the latter for parameter-dependent problems). In Efendiev et al. (2013b) [Generalized Multiscale Finite Element Methods, J. Comput. Phys., vol. 251, pp. 116-135, 2013], it was shown that the GMsFEM provides a flexible tool to solve multiscale problems with a complex input space by generating appropriate snapshot, offline, and online spaces. In this paper, we develop oversampling techniques to be used in this context (see Hou and Wu (1997) where oversampling is introduced for multiscale finite element methods). It is known (see Hou and Wu (1997)) that oversampling can improve the accuracy of multiscale methods. In particular, the oversampling technique uses larger regions (larger than the target coarse block) in constructing local basis functions. Our motivation stems from the analysis presented in this paper, which shows that when using oversampling techniques in the construction of the snapshot space and offline space, GMsFEM will converge independently of small scales and high contrast under certain assumptions. We consider the use of multiple eigenvalue problems to improve the convergence and discuss their relation to single spectral problems that use oversampled regions. The oversampling procedures proposed in this paper differ from those in Hou and Wu (1997). In particular, the oversampling domains are partially used in constructing local
Three-Dimensional Computed Tomography Analysis of the Posterior Tibial Slope in 100 Knees.
Ho, Jade Pei Yuik; Merican, Azhar M; Hashim, Muhammad Sufian; Abbas, Azlina A; Chan, Chee Ken; Mohamad, Jamal A
2017-10-01
The posterior tibial slope (PTS) is an important consideration in knee arthroplasty. However, there is still no consensus for the optimal slope. The objectives of this study were (1) to reliably determine the native PTS in this population using 3-dimensional computed tomography scans and (2) to determine the normal reference range for PTS in this population. One hundred computed tomography scans of disease-free knees were analyzed. A 3-dimensional reconstructed image of the tibia was generated and aligned to its anatomic axis in the coronal and sagittal planes. The tibia was then rotationally aligned to the tibial plateau (tibial centroid axis) and PTS was measured from best-fit planes on the surface of the proximal tibia and individually for the medial and lateral plateaus. This was then repeated with the tibia rotationally aligned to the ankle (transmalleolar axis). When rotationally aligned to the tibial plateau, the mean PTS, medial PTS, and lateral PTS were 11.2° ± 3.0 (range, 4.7°-17.7°), 11.3° ± 3.2 (range, 2.7°-19.7°), and 10.9° ± 3.7 (range, 3.5°-19.4°), respectively. When rotationally aligned to the ankle, the mean PTS, medial PTS, and lateral PTS were 11.4° ± 3.0 (range, 5.3°-19.3°), 13.9° ± 3.7 (range, 3.1°-24.4°), and 9.7° ± 3.6 (range, 0.8°-17.7°), respectively. The PTS in the normal Asian knee is on average 11°, with a reference range of 5°-17° (mean ± 2 standard deviations). This has implications for surgery and implant design. Copyright © 2017 Elsevier Inc. All rights reserved.
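The reference range quoted above follows the usual mean ± 2 SD convention, which can be sketched as follows (the sample values are invented for illustration, not the study's measurements):

```python
from math import sqrt

def reference_range(values):
    """Normal reference range as mean +/- 2 sample standard deviations."""
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))  # sample SD
    return mean - 2 * sd, mean + 2 * sd

# Hypothetical posterior-tibial-slope measurements in degrees.
slopes = [8.0, 10.0, 11.0, 12.0, 14.0, 12.2, 10.3, 11.5]
low, high = reference_range(slopes)
```

For normally distributed data this interval covers roughly 95% of the population, which is why it is the standard choice for a clinical reference range.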
Müller, Pavel; Hiller, Jochen; Cantatore, Angela
2012-01-01
Computed tomography entered the industrial world in the 1980s as a technique for nondestructive testing and has nowadays become a revolutionary tool for dimensional metrology, suitable for actual/nominal comparison and verification of geometrical and dimensional tolerances. This paper evaluates measurement results using different measuring strategies applied in different inspection software packages for volume and surface data analysis. The strategy influence is determined by calculating the measurement uncertainty. This investigation includes measurements of two industrial items, an aluminium pipe...
Stephen Pankavich
2015-02-01
Many mesoscopic N-atom systems derive their structural and dynamical properties from processes coupled across multiple scales in space and time. That is, they simultaneously deform or display collective behaviors, while experiencing atomic-scale vibrations and collisions. Due to the large number of atoms involved and the need to simulate over long time periods of biological interest, traditional computational tools, like molecular dynamics, are often infeasible for such systems. Hence, in the current review article, we present and discuss two recent multiscale methods, stemming from the N-atom formulation and an underlying scale separation, that can be used to study such systems in a friction-dominated regime: multiscale perturbation theory and multiscale factorization. These novel analytic foundations provide a self-consistent approach to yield accurate and feasible long-time simulations with atomic detail for a variety of multiscale phenomena, such as viral structural transitions and macromolecular self-assembly. As such, the accuracy and efficiency of the associated algorithms are demonstrated for a few representative biological systems, including satellite tobacco mosaic virus (STMV) and lactoferrin.
Computing n-dimensional volumes of complexes: Application to constructive entropy bounds
Beiu, V.; Makaruk, H.E.
1997-11-01
The constructive bounds on the needed number of bits (entropy) for solving a dichotomy (i.e., classification of a given data set into two distinct classes) can be represented by the quotient of two multidimensional solid volumes. Exact methods for the calculation of the volume of the solids lead to a tighter lower bound on the needed number of bits than the ones previously known. Establishing such bounds is very important for engineering applications, as they can improve certain constructive neural learning algorithms, while also reducing the area of future VLSI implementations of neural networks. The paper will present an effective method for the exact calculation of the volume of any n-dimensional complex. The method uses a divide-and-conquer approach by: (i) partitioning (i.e., slicing) a complex into simplices; and (ii) computing the volumes of these simplices. The slicing of any complex into a sum of simplices always exists, but it is not unique. This non-uniqueness gives us the freedom to choose the specific partitioning which is convenient for a particular case. It will be shown that this optimal choice is related to the symmetries of the complex, and can significantly reduce the computations involved.
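Step (ii), the volume of a single n-dimensional simplex, has a closed form: the absolute determinant of the edge vectors from one vertex, divided by n!. A minimal exact sketch (not the authors' implementation):

```python
from fractions import Fraction
from math import factorial

def det(M):
    """Determinant by Laplace expansion along the first row (exact for ints)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def simplex_volume(vertices):
    """Volume of an n-simplex: |det(v1 - v0, ..., vn - v0)| / n!."""
    v0 = vertices[0]
    n = len(vertices) - 1
    edges = [[v[j] - v0[j] for j in range(n)] for v in vertices[1:]]
    return Fraction(abs(det(edges)), factorial(n))

# Unit right tetrahedron (a 3-simplex). The volume of a complex is then
# the sum of simplex_volume over the simplices in the chosen slicing.
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(simplex_volume(tet))  # -> 1/6
```

Using exact rational arithmetic avoids rounding error when many simplex volumes are summed, which matters when the resulting volume quotient feeds an entropy bound.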
MARS: computing three-dimensional alignments for multiple ligands using pairwise similarities.
Klabunde, Thomas; Giegerich, Clemens; Evers, Andreas
2012-08-27
The three-dimensional (3D) superimposition of molecules of one biological target reflecting their relative bioactive orientation is key for several ligand-based drug design studies (e.g., QSAR studies, pharmacophore modeling). However, given the lack of sufficient ligand-protein complex structures, an experimental alignment is difficult or often impossible to obtain. Several computational 3D alignment tools have been developed by academic or commercial groups to address this challenge. Here, we present a new approach, MARS (Multiple Alignments by ROCS-based Similarity), that is based on the pairwise alignment of all molecules within the data set using the tool ROCS (Rapid Overlay of Chemical Structures). Each pairwise alignment is scored, and the results are captured in a score matrix. The ideal superimposition of the compounds in the set is then identified by analysing the score matrix, building up a superimposition of all molecules stepwise. The algorithm exploits similarities among all molecules in the data set to compute an optimal 3D alignment. The alignment tool presented here can be used for several applications, including pharmacophore model generation, 3D QSAR modeling, 3D clustering, identification of structural outliers, and addition of compounds to an already existing alignment. Case studies are shown, validating the 3D alignments for six different data sets.
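The matrix-analysis step can be illustrated with a toy greedy ordering: seed with the best-scoring pair, then repeatedly attach the remaining molecule most similar to any already-placed one. This is only a hedged sketch of the general idea; MARS's actual scoring and overlays rely on the commercial ROCS tool, and all names and values below are hypothetical:

```python
def greedy_alignment_order(names, score):
    # score[i][j]: symmetric pairwise 3D-overlay score (higher = better).
    n = len(names)
    # Seed with the best-scoring pair.
    best = max(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda p: score[p[0]][p[1]])
    placed = [best[0], best[1]]
    while len(placed) < n:
        # Attach the molecule with the highest score to any placed one.
        cand = max((i for i in range(n) if i not in placed),
                   key=lambda i: max(score[i][j] for j in placed))
        placed.append(cand)
    return [names[i] for i in placed]

names = ["A", "B", "C", "D"]
score = [
    [0.0, 0.9, 0.2, 0.4],
    [0.9, 0.0, 0.5, 0.3],
    [0.2, 0.5, 0.0, 0.8],
    [0.4, 0.3, 0.8, 0.0],
]
print(greedy_alignment_order(names, score))  # → ['A', 'B', 'C', 'D']
```

In a real workflow, each newly added molecule would inherit the 3D transformation of its best pairwise overlay onto an already-placed molecule.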
Three-dimensional integration of nanotechnologies for computing and data storage on a single chip
Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish
2017-07-01
The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.
Field computation for two-dimensional array transducers with limited diffraction array beams.
Lu, Jian-Yu; Cheng, Jiqi
2005-10-01
A method is developed for calculating fields produced with a two-dimensional (2D) array transducer. This method decomposes an arbitrary 2D aperture weighting function into a set of limited diffraction array beams. Using the analytical expressions of limited diffraction beams, arbitrary continuous wave (cw) or pulse wave (pw) fields of 2D arrays can be obtained with a simple superposition of these beams. In addition, this method can be simplified and applied to a 1D array transducer of a finite or infinite elevation height. For beams produced with axially symmetric aperture weighting functions, this method can be reduced to the Fourier-Bessel method studied previously where an annular array transducer can be used. The advantage of the method is that it is accurate and computationally efficient, especially in regions that are not far from the surface of the transducer (near field), where it is important for medical imaging. Both computer simulations and a synthetic array experiment are carried out to verify the method. Results (Bessel beam, focused Gaussian beam, X wave and asymmetric array beams) show that the method is accurate as compared to that using the Rayleigh-Sommerfeld diffraction formula and agrees well with the experiment.
Multiscale information modelling for heart morphogenesis
Abdulla, T.; Imms, R.; Schleich, J. M.; Summers, R.
2010-07-01
Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry, which is developing a suite of reference ontologies, and the NCBO BioPortal, which provides services to integrate biomedical resources as well as functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.
Towards multiscale modeling of influenza infection.
Murillo, Lisa N; Murillo, Michael S; Perelson, Alan S
2013-09-07
Aided by recent advances in computational power, algorithms, and higher fidelity data, increasingly detailed theoretical models of infection with influenza A virus are being developed. We review single scale models as they describe influenza infection from intracellular to global scales, and, in particular, we consider those models that capture details specific to influenza and can be used to link different scales. We discuss the few multiscale models of influenza infection that have been developed in this emerging field. In addition to discussing modeling approaches, we also survey biological data on influenza infection and transmission that is relevant for constructing influenza infection models. We envision that, in the future, multiscale models that capitalize on technical advances in experimental biology and high performance computing could be used to describe the large spatial scale epidemiology of influenza infection, evolution of the virus, and transmission between hosts more accurately.
Multiscale Simulations Using Particles
Walther, Jens Honore
We are developing particle methods as a general framework for large scale simulations of discrete and continuous systems in science and engineering. The specific application and research areas include: discrete element simulations of granular flow, smoothed particle hydrodynamics and particle vortex methods for problems in continuum fluid dynamics, dissipative particle dynamics for flow at the meso scale, and atomistic molecular dynamics simulations of nanofluidic systems. We employ multiscale techniques to bridge the atomistic and continuum scales to study fundamental problems in fluid dynamics. Recent work on the thermophoretic motion of water nanodroplets confined inside carbon nanotubes, and multiscale techniques for polar liquids, will be discussed in detail at the symposium.
Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.
Biggs, Matthew B; Papin, Jason A
2013-01-01
Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.
Analysis of the anatomy of the maxillary sinus septum using 3-dimensional computed tomography.
Park, Young-Bum; Jeon, Hwan-Su; Shim, June-Sung; Lee, Keun-Woo; Moon, Hong-Seok
2011-04-01
Maxillary posterior teeth exhibit a high incidence of periodontal bone and tooth loss. After tooth loss, the edentulous alveolar process of the posterior maxilla is often affected by resorption, which results in loss of vertical bone volume. Moreover, progressive sinus pneumatization leads to a decrease in the alveolar process from the cranial side. The sinus elevation and augmentation surgical technique opened a new way of anchoring endosseous implants despite discernible bone reduction. However, the surgical interventions require in-depth knowledge of maxillary sinus anatomy such as sinus septum and potential variations. The purpose of this study was to investigate the prevalence, location, height, morphology, and orientation of maxillary sinus septa by use of computed tomography (CT) and 3-dimensional imaging. Two hundred patients undergoing implant treatment at the Yonsei University College of Dentistry, Seoul, South Korea, were randomly selected for analysis of maxillary sinus septa. CT and DentaScan (GE Medical Systems, Milwaukee, WI)-reformatted data from 400 sinuses were analyzed with the Preview program (Infinitt, Seoul, South Korea). Three-dimensional images were rendered for measurement by use of the Accurex program (CyberMed, Seoul, South Korea). We found 111 septa in 400 maxillary sinuses (27.7%). This corresponded to 37% of the patients. Among total septa, 25 sinus septa (22.5%) were located in the anterior, 51 (45.9%) in the middle, and 35 (31.5%) in the posterior regions. The directional orientation analyses showed that 106 septa were buccopalatal, 4 were sagittal, and 1 was transverse type. The mean septal heights were 7.78 ± 2.99 and 7.89 ± 3.09 mm in the right and left sinuses, respectively. Three-dimensional CT image analyses may provide useful information that can avoid unnecessary complications during sinus augmentation procedures by facilitating adequate, timely identification of the anatomic structures inherent to the maxillary sinus
ZHANG Xiao-Xian; WEN Xiao-Yong; SUN Ye-Peng
2008-01-01
With the aid of the symbolic computation system Maple, many exact solutions of the (3+1)-dimensional KP equation are constructed by introducing an auxiliary equation and using its new Jacobi elliptic function solutions, where the new solutions are also constructed. When the modulus m → 1 and m → 0, these solutions reduce to the corresponding solitary wave solutions and trigonometric function solutions.
TP Clement
1999-06-24
RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code, MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in groundwater head distribution. The RT3D code was originally developed to support the contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants including benzene-toluene-xylene mixtures (BTEX), and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other types of user-specified reactive transport systems. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
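The class of equations RT3D solves can be illustrated, in one dimension, by an explicit upwind scheme for advection-dispersion with first-order decay. This is a didactic sketch only, not RT3D's actual solver (RT3D couples MT3D's advection/dispersion solvers with an implicit reaction solver); all parameter values are made up:

```python
def step(c, v, D, k, dx, dt):
    # One explicit step of 1D advection-dispersion with first-order
    # decay:  dC/dt = -v dC/dx + D d2C/dx2 - k C   (upwind, v > 0).
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        adv = -v * (c[i] - c[i - 1]) / dx
        dsp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2
        new[i] = c[i] + dt * (adv + dsp - k * c[i])
    return new  # boundary cells held fixed at zero

# Pulse of contaminant decaying and spreading as it moves downstream:
c = [0.0] * 50
c[10] = 1.0
for _ in range(100):
    c = step(c, v=0.5, D=0.01, k=0.05, dx=1.0, dt=0.5)
```

After 100 steps the plume has migrated downstream while total mass has decayed; the chosen time step satisfies the usual explicit stability limits (v·dt/dx = 0.25, D·dt/dx² = 0.005).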
Jiang, Zhongzheng; Zhao, Wenwen
2016-01-01
Non-equilibrium effects play a vital role in high-speed and rarefied gas flows, and the accurate simulation of these flow regimes is far beyond the capability of the near-local-equilibrium Navier-Stokes-Fourier equations. Eu proposed generalized hydrodynamic equations which are consistent with the laws of irreversible thermodynamics to solve this problem. Based on Eu's generalized hydrodynamic equations, a computational model, namely the nonlinear coupled constitutive relations (NCCR), was developed by R. S. Myong and applied successfully to one-dimensional shock wave structure and two-dimensional rarefied flows. In this paper, finite volume schemes, including the LU-SGS time advance scheme, MUSCL interpolation and the AUSMPW+ scheme, are firstly adopted to investigate the NCCR model's validity and potential in three-dimensional complex hypersonic rarefied gas flows. Moreover, in order to solve the computational stability problems in 3D complex flows, a modified solution is developed for the NCCR model. Finally, the modified solu...
Time-Domain Techniques for Computation and Reconstruction of One-Dimensional Profiles
M. Rahman
2005-01-01
This paper presents a time-domain technique to compute the electromagnetic fields and to reconstruct the permittivity profile within a one-dimensional medium of finite length. The medium is characterized by a permittivity as well as a conductivity profile which vary only with depth. The discussed scattering problem is thus one-dimensional. The modeling tool is divided into two different schemes, named the forward solver and the inverse solver. The task of the forward solver is to compute the internal fields of the specimen, which is performed by a Green's function approach. When a known electromagnetic wave is incident normally on the medium, the resulting electromagnetic field within the medium can be calculated by constructing a Green's operator. This operator maps the incident field on either side of the medium to the field at an arbitrary observation point. It is a matrix of integral operators with kernels satisfying known partial differential equations. The reflection and transmission behavior of the medium is also determined from the boundary values of the Green's operator. The inverse solver is responsible for solving an inverse scattering problem by reconstructing the permittivity profile of the medium. Though it is possible to use several algorithms to solve this problem, the invariant embedding method, also known as the layer-stripping method, has been implemented here due to the advantage that it requires only a finite time trace of reflection data. Here only one round trip of reflection data is used, where one round trip is defined as the time required by the pulse to propagate through the medium and back again. The inversion process begins by retrieving the reflection kernel from the reflected wave data by simply using a deconvolution technique. The rest of the task can easily be performed by applying a numerical approach to determine different profile parameters. Both the solvers have been found to have the
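The deconvolution step that retrieves the reflection kernel can be sketched for discrete causal signals: if the reflected trace is r = k * e with e[0] ≠ 0, the kernel k follows by forward substitution. A hedged illustration (the pulse and kernel values are made up, and real data would need regularisation against noise):

```python
def deconvolve(r, e):
    # Recover kernel k from r = k * e (discrete causal convolution),
    # assuming e[0] != 0:  r[n] = sum_{m=0..n} k[m] * e[n-m].
    k = []
    for n in range(len(r)):
        acc = sum(k[m] * e[n - m] for m in range(n) if n - m < len(e))
        k.append((r[n] - acc) / e[0])
    return k

e = [1.0, 0.5, 0.25]                # incident pulse (made-up values)
k_true = [0.2, 0.0, -0.1, 0.05]     # "true" reflection kernel
# Synthesise the reflected trace by forward convolution:
r = [sum(k_true[m] * e[n - m]
         for m in range(len(k_true)) if 0 <= n - m < len(e))
     for n in range(len(k_true))]
print(deconvolve(r, e))  # recovers k_true up to float rounding
```

The layer-stripping inversion then consumes this kernel sample by sample, peeling off one layer of the medium per time step of reflection data.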
Lifton, Joseph J; Malcolm, Andrew A; McBride, John W
2015-01-01
X-ray computed tomography (CT) is a radiographic scanning technique for visualising cross-sectional images of an object non-destructively. From these cross-sectional images it is possible to evaluate internal dimensional features of a workpiece which may otherwise be inaccessible to tactile and optical instruments. Beam hardening is a physical process that degrades the quality of CT images and has previously been suggested to influence dimensional measurements. Using a validated simulation tool, the influence of spectrum pre-filtration and beam hardening correction are evaluated for internal and external dimensional measurements. Beam hardening is shown to influence internal and external dimensions in opposition, and to have a greater influence on outer dimensions compared to inner dimensions. The results suggest the combination of spectrum pre-filtration and a local gradient-based surface determination method are able to greatly reduce the influence of beam hardening in X-ray CT for dimensional metrology.
Xia, J; Samman, N; Yeung, R W; Wang, D; Shen, S G; Ip, H H; Tideman, H
2000-08-01
The purpose of this paper is to report a new technique for three-dimensional facial soft-tissue-change prediction after simulated orthognathic surgical planning. A scheme for soft tissue deformation, "Computer-assisted three-dimensional virtual reality soft tissue planning and prediction for orthognathic surgery (CASP)", is presented. The surgical planning was based on three-dimensional reconstructed CT visualization. Soft tissue changes were predicted by two newly devised algorithms: Surface Normal-based Model Deformation Algorithm and Ray Projection-based Model Deformation Algorithm. A three-dimensional color facial texture-mapping technique was also used for generating the color photo-realistic facial model. As a final result, a predicted and simulated patient's color facial model can be visualized from arbitrary viewing points.
Stochastic multiscale modeling of polycrystalline materials
Wen, Bin
Mechanical properties of engineering materials are sensitive to the underlying random microstructure. Quantification of mechanical property variability induced by microstructure variation is essential for the prediction of extreme properties and microstructure-sensitive design of materials. Recent advances in high throughput characterization of polycrystalline microstructures have resulted in huge data sets of microstructural descriptors and image snapshots. To utilize these large scale experimental data for computing the resulting variability of macroscopic properties, appropriate mathematical representation of microstructures is needed. By exploring the space containing all admissible microstructures that are statistically similar to the available data, one can estimate the distribution/envelope of possible properties by employing efficient stochastic simulation methodologies along with robust physics-based deterministic simulators. The focus of this thesis is on the construction of low-dimensional representations of random microstructures and the development of efficient physics-based simulators for polycrystalline materials. By adopting appropriate stochastic methods, such as Monte Carlo and Adaptive Sparse Grid Collocation methods, the variability of microstructure-sensitive properties of polycrystalline materials is investigated. The primary outcomes of this thesis include: (1) Development of data-driven reduced-order representations of microstructure variations to construct the admissible space of random polycrystalline microstructures. (2) Development of accurate and efficient physics-based simulators for the estimation of material properties based on mesoscale microstructures. (3) Investigating property variability of polycrystalline materials using efficient stochastic simulation methods in combination with the above two developments. The uncertainty quantification framework developed in this work integrates information science and materials science, and
Novel Assessment of Renal Motion in Children as Measured via Four-Dimensional Computed Tomography
Pai Panandiker, Atmaram S., E-mail: atmaram.pai-panandiker@stjude.org [Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, TN (United States); Sharma, Shelly; Naik, Mihir H. [Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, TN (United States); Wu, Shengjie [Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, TN (United States); Hua, Chiaho; Beltran, Chris; Krasin, Matthew J.; Merchant, Thomas E. [Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, TN (United States)
2012-04-01
Objectives: Abdominal intensity-modulated radiation therapy and proton therapy require quantification of target and organ motion to optimize localization and treatment. Although addressed in adults, there is no available literature on this issue in pediatric patients. We assessed physiologic renal motion in pediatric patients. Methods and Materials: Twenty free-breathing pediatric patients at a median age of 8 years (range, 2-18 years) with intra-abdominal tumors underwent computed tomography simulation and four-dimensional computed tomography acquisition (slice thickness, 3 mm). Kidneys and diaphragms were contoured during eight phases of respiration to estimate center-of-mass motion. We quantified center of kidney mass mobility vectors in three dimensions: anteroposterior (AP), mediolateral (ML), and superoinferior (SI). Results: Kidney motion decreases linearly with decreasing age and height. The 95% confidence interval for the averaged minima and maxima of renal motion in children younger than 9 years was 5-9 mm in the ML direction, 4-11 mm in the AP direction, and 12-25 mm in the SI dimension for both kidneys. In children older than 9 years, the same confidence interval reveals a widening range of motion that was 5-16 mm in the ML direction, 6-17 mm in the AP direction, and 21-52 mm in the SI direction. Although not statistically significant, renal motion correlated with diaphragm motion in older patients. The correlation between diaphragm motion and body mass index was borderline (r = 0.52, p = 0.0816) in younger patients. Conclusions: Renal motion is age and height dependent. Measuring diaphragmatic motion alone does not reliably quantify pediatric renal motion. Renal motion in young children ranges from 5 to 25 mm in orientation-specific directions. The vectors of motion range from 5 to 52 mm in older children. These preliminary data represent novel analyses of pediatric intra-abdominal organ motion.
Henshaw, W; Schwendeman, D
2007-11-15
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
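The load-balancing idea behind the modified bin-packing partitioner can be illustrated with the classic longest-processing-time-first heuristic: assign each grid, heaviest first, to the currently least-loaded processor. A sketch under that assumption (the paper's actual algorithm additionally splits individual grids across chosen processor subsets):

```python
import heapq

def partition_work(loads, nproc):
    # Longest-processing-time-first: assign each grid (heaviest first)
    # to the processor with the smallest accumulated work.
    heap = [(0.0, p) for p in range(nproc)]
    heapq.heapify(heap)
    assign = {}
    for grid, work in sorted(enumerate(loads), key=lambda g: -g[1]):
        total, p = heapq.heappop(heap)
        assign[grid] = p
        heapq.heappush(heap, (total + work, p))
    return assign

# Six grids of unequal work shared over three processors:
loads = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0]
print(partition_work(loads, 3))  # balanced: each processor totals 11.0
```

For AMR, the loads would be re-estimated and the partition recomputed after each regridding step, since refinement grids appear and disappear as the solution evolves.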
Platon, Ludovic; Pejoski, David; Gautreau, Guillaume; Targat, Brice; Le Grand, Roger; Beignon, Anne-Sophie; Tchitchek, Nicolas
2017-09-14
Cytometry is an experimental technique used to measure molecules expressed by cells at single-cell resolution. Recently, several technological improvements have made it possible to greatly increase the number of cell markers that can be simultaneously measured. Many computational methods have been proposed to identify clusters of cells having similar phenotypes. Nevertheless, only a limited number of computational methods permit comparison of the phenotypes of the cell clusters identified by different clustering approaches. These phenotypic comparisons are necessary to choose the appropriate clustering methods and settings. Because of this lack of tools, comparisons of cell cluster phenotypes are often performed manually, a highly biased and time-consuming process. We designed CytoCompare, an R package that performs comparisons between the phenotypes of cell clusters with the purpose of identifying similar and different ones, based on the distribution of marker expressions. For each phenotype comparison of two cell clusters, CytoCompare provides a distance measure as well as a p-value asserting the statistical significance of the difference. CytoCompare can import clustering results from various algorithms including SPADE, viSNE/ACCENSE, and Citrus, currently the most widely used algorithms. Additionally, CytoCompare can generate parallel coordinates, parallel heatmaps, multidimensional scaling or circular graph representations to easily visualise cell cluster phenotypes and the comparison results. CytoCompare is a flexible analysis pipeline for comparing the phenotypes of cell clusters identified by automatic gating algorithms in high-dimensional cytometry data. This R package is ideal for benchmarking different clustering algorithms and associated parameters. CytoCompare is freely distributed under the GPL-3 license and is available at https://github.com/tchitchek-lab/CytoCompare. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
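A per-marker distance between two cell clusters, in the spirit of CytoCompare's distribution-based comparison, can be sketched with a two-sample Kolmogorov-Smirnov statistic. This Python sketch is illustrative only; CytoCompare is an R package and its actual distance and p-value computations differ, and the marker data below are invented:

```python
def ks_distance(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: maximum gap between
    # the empirical CDFs of two marker-expression samples.
    def cdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

def compare_clusters(c1, c2):
    # c1, c2: dicts mapping marker name -> expression values of the
    # cells in that cluster.  Returns per-marker distances and mean.
    per_marker = {m: ks_distance(c1[m], c2[m]) for m in c1}
    return per_marker, sum(per_marker.values()) / len(per_marker)

c1 = {"CD4": [1.0, 1.2, 0.9, 1.1], "CD8": [0.1, 0.2, 0.1, 0.3]}
c2 = {"CD4": [1.0, 1.1, 1.2, 0.8], "CD8": [2.0, 2.2, 1.9, 2.1]}
per_marker, overall = compare_clusters(c1, c2)
print(per_marker)  # CD8 separates the clusters far more than CD4
```

A distance of 1.0 for a marker means the two clusters' expression ranges do not overlap at all for that marker, which is exactly the kind of phenotypic difference such tools are meant to surface.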
Fish, Frank E; Beneski, John T; Ketten, Darlene R
2007-06-01
The flukes of cetaceans function in the hydrodynamic generation of forces for thrust, stability, and maneuverability. The three-dimensional geometry of flukes is associated with production of lift and drag. Data on fluke geometry were collected from 19 cetacean specimens representing eight odontocete genera (Delphinus, Globicephala, Grampus, Kogia, Lagenorhynchus, Phocoena, Stenella, Tursiops). Flukes were imaged as 1 mm thickness cross-sections using X-ray computer-assisted tomography. Fluke shapes were characterized quantitatively by dimensions of the chord, maximum thickness, and position of maximum thickness from the leading edge. Sections were symmetrical about the chordline and had a rounded leading edge and highly tapered trailing edge. The thickness ratio (maximum thickness/chord) among species increased from insertion on the tailstock to a maximum at 20% of span and then decreased steadily to the tip. Thickness ratio ranged from 0.139 to 0.232. These low values indicate reduced drag while moving at high speed. The position of maximum thickness from the leading edge remained constant over the fluke span at an average for all species of 0.285 chord. The displacement of the maximum thickness reduces the tendency of the flow to separate from the fluke surface, potentially affecting stall patterns. Similarly, the relatively large leading edge radius allows greater lift generation and delays stall. Computational analysis of fluke profiles at 50% of span showed that flukes were generally comparable to or better than engineered foils for lift generation. Tursiops had the highest lift coefficients, which were superior to those of engineered foils by 12-19%. Variation in the structure of cetacean flukes reflects different hydrodynamic characteristics that could influence swimming performance.
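The two shape descriptors used here, thickness ratio and chordwise position of maximum thickness, are straightforward to compute from digitised cross-section coordinates. A minimal sketch (the three-point section below is a made-up example chosen to reproduce values typical of the study, not actual measurement data):

```python
def section_geometry(upper, lower):
    # upper/lower: same-length lists of (x, z) surface points of a
    # symmetric fluke cross-section, leading edge at x = 0.
    chord = max(x for x, _ in upper)
    thick = [(x, zu - zl) for (x, zu), (_, zl) in zip(upper, lower)]
    x_max, t_max = max(thick, key=lambda p: p[1])
    # Thickness ratio and chordwise position of maximum thickness:
    return t_max / chord, x_max / chord

upper = [(0.0, 0.0), (2.85, 1.0), (10.0, 0.0)]
lower = [(0.0, 0.0), (2.85, -1.0), (10.0, 0.0)]
print(section_geometry(upper, lower))  # thickness ratio, position
```

With denser CT-derived point sets the same computation applies; only the sampling of the surface changes.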
Mooney, James J; Sarwani, Nabeel; Coleman, Melissa L; Fotos, Joseph S
2017-06-01
The use of three-dimensional (3D) printing allows for creation of custom models for clinical care, education, and simulation. Medical imaging, given the significant role it plays in both clinical diagnostics and procedures, remains an important area for such education and simulation. Unfortunately, the materials appropriate for use in simulation involving radiographic or ultrasound imaging remain poorly understood. Therefore, our study was intended to explore the characteristics of readily available 3D printing materials when visualized by computed tomography (CT) and ultrasound. Seven 3D printing materials were examined in standard shapes (cube, cylinder, triangular prism) with a selection of printing methods ("open," "whole," and "solid" forms). For CT imaging, these objects were suspended in a gelatin matrix molded to match a standard human CT phantom. For ultrasound imaging, the objects were placed in acrylic forms filled with a gelatin matrix. All images were examined using OsiriX software. Computed tomography imaging revealed marked variation in materials' Hounsfield units as well as patterning and artifact. The Hounsfield unit variations revealed a number of materials suitable for simulating various human tissues. Ultrasound imaging showed echogenicity in all materials, with some variability in shadowing and posterior wall visualization. We were able to demonstrate the potential utility of 3D printing in the creation of CT and ultrasound simulation models. The similar appearance of materials via ultrasound supports their broad utility for select tissue types, whereas the more variable appearance via CT suggests greater potential for simulating differing tissues but requiring multiple printer technologies to do so.
Tyobeka, Bismark Mzubanzi
A coupled neutron transport thermal-hydraulics code system with both diffusion and transport theory capabilities is presented. At the heart of the coupled code is a powerful neutronics solver, based on a neutron transport theory approach, powered by the time-dependent extension of the well-known DORT code, DORT-TD. DORT-TD uses a fully implicit time integration scheme and is coupled via a general interface to the thermal-hydraulics code THERMIX-DIREKT, an HTR-specific two-dimensional core thermal-hydraulics code. Feedback is accounted for by interpolating multigroup cross sections from pre-generated libraries which are structured for user-specified discrete sets of thermal-hydraulic parameters, e.g. fuel and moderator temperatures. The coupled code system is applied to two HTGR designs, the PBMR 400 MW and the PBMR 268 MW. Steady-state and several design basis transients are modeled in an effort to assess the adequacy of using neutron diffusion theory as opposed to the more accurate but computationally expensive neutron transport theory. It turns out that there are small but significant differences between the results of the two theories. It is concluded that diffusion theory can be used with a high degree of confidence in the PBMR as long as more than two energy groups are used, and that results must be checked against a lower-order transport solution, especially for safety analysis purposes. The end product of this thesis is a high-fidelity, state-of-the-art computer code system with multiple capabilities to analyze all PBMR safety-related transients in an accurate and efficient manner.
Jeon, Kug Jin; Park, Hyok; Lee, Hee Cheol; Kim, Kee Deog; Park, Chang Seo [Yonsei University College of Medicine, Seoul (Korea, Republic of)
2003-09-15
The purpose of this study was to report the intra-observer and inter-observer reproducibility of cephalometric measurements using three-dimensional (3D) computed tomography (CT), and the degree of difference in the measurements. CT images of 16 adult patients with normal class I occlusion were transferred to a personal computer and reconstructed into 3D images using V-Works 3.5{sup TM} (Cybermed Inc., Seoul, Korea). With the internal program of V-Works 3.5{sup TM}, 12 landmarks on regular cephalograms were transformed into 21 analytic categories and measured by 2 observers; in addition, one of the observers repeated the measurements. Intra-observer differences were assessed using a paired t-test, and inter-observer differences using a two-sample t-test. There were significant intra-observer differences (p<0.05) in four of the categories (ANS-Me, ANS-PNS, Cdl-Go (Lt), GoL-GoR), but with the exception of Cdl-Go (Lt), ZmL-ZmR, and Zyo-Zyo, the average differences were within 2 mm of each other. The inter-observer comparisons also showed significant differences in the ZmL-ZmR and Zyo-Zyo categories (p<0.05). With the exception of the Cdl-Me (Rt), ZmL-ZmR, and Zyo-Zyo categories, the average differences between the two observers were within 2 mm, but the ZmL-ZmR and Zyo-Zyo values differed greatly, by 8.10 and 19.8 mm respectively. In general, 3D CT images showed greater accuracy and reproducibility than regular cephalograms in orthodontic measurement, with differences of less than 2 mm except in suture areas such as Zm and Zyo; therefore 3D CT images can be useful in cephalometric measurement and treatment planning.
Midtgaard, Ole-Morten
1997-12-31
This thesis considers the feasibility of optimizing electrical machines by calculation, without the need to build expensive prototypes. It deals with the construction and assessment of new, hierarchical, hexahedral edge elements for three-dimensional computations of eddy currents with the electric vector potential formulation. The new elements, five in all, gave up to second-order approximations for both the magnetic field and the current density. Theoretical arguments showed these elements to be more economical for a given polynomial order of the approximated fields than the serendipity family of nodal elements. Further, it was pointed out how the support of a source field computed using edge elements could be made very small, provided that a proper spanning tree was used in the edge element mesh. This was exploited for the voltage forcing technique, where source fields were used as basis functions, with unknown total currents in voltage-forced conductors as degrees of freedom. The practical assessment of the edge elements showed that accuracy improved with increasing polynomial order, for both local and global quantities. The most economical element was, however, one giving only complete first-order approximations for both fields. Further, the edge elements turned out to be better than the nodal elements in practice as well. For the voltage forcing technique, source-field basis functions with small support resulted in a large reduction of the CPU time for solving the main equation system, compared to source fields with large support. The new elements can be used in a p-type adaptive scheme, and they should also be applicable to other tangentially continuous field problems. 67 refs., 34 figs., 10 tabs.
Weissheimer, A; Menezes, L M; Koerich, L; Pham, J; Cevidanes, L H S
2015-09-01
The aim of this study was to validate a method for fast three-dimensional (3D) superimposition of cone beam computed tomography (CBCT) in growing patients and adults (surgical cases). The sample consisted of CBCT scans of 18 patients. For 10 patients, as the gold standard, the spatial position of the pretreatment CBCT was reoriented, saved as a reoriented volume, and then superimposed on the original image. For eight patients, four non-growing and four growing, the pre- and post-treatment scans were superimposed. Fast voxel-based superimposition was performed, with registration at the anterior cranial base. This superimposition process took 10-15 s. The fit of the cranial base superimposition was verified by qualitative visualization of the semi-transparent axial, sagittal, and coronal cross-sectional slices of all corresponding anatomical structures. Virtual 3D surface models of the skull were generated via threshold segmentation, and superimposition errors in the reoriented models and the results of treatment for the treated cases were evaluated by 3D surface distances on colour-coded maps. The superimposition error of the spatial reorientation and for growing and non-growing patients was <0.5 mm, which is acceptable and clinically insignificant. The voxel-based superimposition method evaluated was reproducible in different clinical conditions, rapid, and applicable for research and clinical practice. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Probst, Gabriel; Boeckmans, Bart; Dewulf, Wim; Kruth, Jean-Pierre
2016-05-01
X-ray computed tomography (CT) is steadily gaining ground in the manufacturing industry for dimensional metrology and quality control purposes. Its main advantage is its non-invasive and non-destructive character. Currently, CT is the only measurement technique that allows full 3D visualization of both inner and outer features of an object through a contactless probing system. Using hundreds of radiographs acquired while rotating the object, a 3D representation is generated and dimensions can be verified. In this research, this non-contact technique was used for the inspection of assembled components: a dental cast model with 8 implants, connected by a screw-retained titanium bar. The retained bar includes a mating interface connection that should ensure a perfect fit, without residual stresses, when the connection is fixed with screws. CT was used to inspect the mating interfaces between these two components. Gaps at the connections can lead to bacterial growth and potential inconvenience for the patient, who would have to face a new surgery to replace his or her prosthesis. With the aid of CT, flaws in the design or manufacturing process that could lead to gaps at the connections could be assessed.
Three-dimensional segmentation of the tumor in computed tomographic images of neuroblastoma.
Deglint, Hanford J; Rangayyan, Rangaraj M; Ayres, Fábio J; Boag, Graham S; Zuffo, Marcelo K
2007-09-01
Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, and normal tissue are often intermixed. Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to therapy and in the planning of the delayed surgery for resection of the tumor. We propose methods to achieve 3-dimensional segmentation of the neuroblastic tumor. In our scheme, some of the normal structures expected in abdominal CT images are delineated and removed from further consideration; the remaining parts of the image volume are then examined for tumor mass. Mathematical morphology, fuzzy connectivity, and other image processing tools are deployed for this purpose. Expert knowledge provided by a radiologist in the form of the expected structures and their shapes, HU values, and radiological characteristics are incorporated into the segmentation algorithm. In this preliminary study, the methods were tested with 10 CT exams of four cases from the Alberta Children's Hospital. False-negative error rates of less than 12% were obtained in eight of 10 exams; however, seven of the exams had false-positive error rates of more than 20% with respect to manual segmentation of the tumor by a radiologist.
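The core of the pipeline described above (an HU window for candidate tumor tissue, removal of delineated normal structures, and morphological cleanup) can be sketched in a few lines. This is a pure-NumPy illustration under assumed HU thresholds, not the authors' fuzzy-connectivity algorithm; the function names and the default HU window are hypothetical.

```python
import numpy as np

def segment_tumor_candidates(ct_hu, organ_mask, hu_low=20.0, hu_high=60.0):
    """Binary mask of voxels inside an assumed soft-tissue HU window,
    excluding voxels already assigned to delineated normal structures."""
    in_window = (ct_hu >= hu_low) & (ct_hu <= hu_high)
    return in_window & ~organ_mask

def erode_once(mask):
    """One 3D binary erosion with a 6-connected structuring element
    (pure NumPy; np.roll wraps at the borders, so pad real volumes)."""
    m = mask.copy()
    for axis in range(3):
        m &= np.roll(mask, 1, axis) & np.roll(mask, -1, axis)
    return m
```

A full implementation would follow this with connected-component analysis and the expert-knowledge rules (expected shapes and radiological characteristics) described in the abstract.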
The brain morphology of Homo Liujiang cranium fossil by three-dimensional computed tomography
WU XiuJie; LIU Wu; DONG Wei; QUE JieMin; WANG YanFang
2008-01-01
The Liujiang cranium is the most complete and well-preserved late Pleistocene human fossil ever unearthed in south China. Because the endocranial cavity is filled with hard stone matrix, earlier studies focused only on the exterior morphology of the specimen using traditional methods. In order to derive more information for the phyletic evaluation of the Liujiang cranium, high-resolution industrial computed tomography (CT) was used to scan the fossil, and the three-dimensional (3D) brain image was reconstructed. Compared with the endocasts of the hominin fossils (Hexian, Zhoukoudian, KNM-WT 15000, Sm 3, Kabwe, Brunn 3, Predmost) and modern Chinese, most morphological features of the Liujiang brain are in common with modern humans, including a round brain shape, bulged and wide frontal lobes, an enlarged brain height, a full orbital margin and long parietal lobes. A few differences exist between Liujiang and the modern Chinese in our sample, including a strong posterior projection of the occipital lobes and a reduced cerebellar lobe. The measurement of the virtual endocast shows that the endocranial capacity of Liujiang is 1567 cc, which is in the range of Late Homo sapiens and well beyond the mean of modern humans. The brain morphology of Liujiang is assigned to Late Homo sapiens.
Three Dimensional Digital Sieving of Asphalt Mixture Based on X-ray Computed Tomography
Chichun Hu
2017-07-01
In order to perform three-dimensional digital sieving based on X-ray computed tomography images, the definition of digital sieve size (DSS) was proposed: the minimum length of the minimum bounding squares of all possible orthographic projections of an aggregate. The corresponding program was developed to reconstruct aggregate structure and to obtain DSS. Laboratory experiments consisting of epoxy-filled aggregate specimens were conducted to investigate the difference between mechanical sieve analysis and the digital sieving technique. The results suggested that the concave surfaces of aggregates are a possible reason for the disparity between DSS and mechanical sieve size. A comparison between DSS and equivalent diameter was also performed. Moreover, the digital sieving technique was adopted to evaluate the gradation of stone mastic asphalt mixtures. Among gradation curves based on calibrated DSS, uncalibrated DSS, and equivalent diameter, the curve based on calibrated DSS was the closest to the laboratory gradation curve.
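The DSS definition above can be made concrete for a voxelized aggregate. The true DSS minimizes over all possible orthographic projection directions; the sketch below is a simplified, axis-aligned approximation (only the three coordinate-axis projections are considered), and both function names are hypothetical.

```python
import numpy as np

def bounding_square_side(proj):
    """Side length (in voxels) of the minimum axis-aligned bounding
    square of a 2D binary projection."""
    ys, xs = np.nonzero(proj)
    return max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)

def digital_sieve_size(voxels):
    """Axis-aligned approximation of DSS: the smallest bounding-square
    side over the three orthographic projections of a binary voxel
    aggregate. A faithful DSS would also sample rotated projections."""
    return min(bounding_square_side(voxels.any(axis=a)) for a in range(3))
```

For a 2 x 3 x 5 voxel box, the projection along the longest axis is 2 x 3, so the approximate DSS is 3, matching the intuition that the particle passes a square opening of side 3.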
Tanabe, Hiroki; Ito, Takahiro; Inaba, Yuhei; Ando, Katsuyoshi; Nomura, Yoshiki; Ueno, Nobuhiro; Kashima, Shin; Moriichi, Kentaro; Fujiya, Mikihiro; Okumura, Toshikatsu
2017-01-01
Endoscopic retrograde ileography (ERIG) was developed at our institute and is applied clinically for the diagnosis and assessment of Crohn's disease activity. We have further improved the technique using three-dimensional computed tomography enteroclysis (3D-CTE) and conducted a retrospective study to determine the feasibility and diagnostic value of endoscopic retrograde 3D-CTE (ER 3D-CTE) in Crohn's disease patients in a state of remission. Thirteen Crohn's disease patients were included in this pilot study. CTE was performed after the infusion of air or CO2 through the balloon tube following conventional colonoscopy. The primary endpoint of this study was to assess the safety of the method. Secondarily, the specific findings of Crohn's disease and the length of the visualized small intestine were assessed. The procedures were completed without any adverse events. Gas passed through the small intestine and enterographic images were obtained in 10 out of 13 cases; in the remaining patients, insertion of the balloon tube into the terminal ileum failed. Various features specific to Crohn's disease were visualized using ER 3D-CTE. A cobblestone appearance or hammock-like malformation was specific and effective for diagnosing Crohn's disease, and the features of anastomoses after surgical operations were also well depicted; the technique may therefore be useful after surgery. In this study, ER 3D-CTE was performed safely in Crohn's disease patients and may be used for the diagnosis and follow-up of this disease.
Computed tomography of Crohn's disease: The role of three dimensional technique.
Raman, Siva P; Horton, Karen M; Fishman, Elliot K
2013-05-28
Crohn's disease, a transmural inflammatory bowel disease, remains a difficult entity to diagnose clinically. Over the last decade, multidetector computed tomography (CT) has become the method of choice for non-invasive evaluation of the small bowel, and has proved to be of significant value in the diagnosis of Crohn's disease. Advancements in CT enterography protocol design, three dimensional (3-D) post-processing software, and CT scanner technology have allowed increasing accuracy in diagnosis, and the acquisition of studies at a much lower radiation dose. The cases in this review will illustrate that the use of 3-D technique, proper enterography protocol design, and a detailed understanding of the different manifestations of Crohn's disease are all critical in properly diagnosing the full range of possible complications in Crohn's patients. In particular, CT enterography has proven to be effective in identifying involvement of the small and large bowel (including active inflammation, stigmata of chronic inflammation, and Crohn's-related bowel neoplasia) by Crohn's disease, as well as the extra-enteric manifestations of the disease, including fistulae, sinus tracts, abscesses, and urologic/hepatobiliary/osseous complications. Moreover, the proper use of 3-D technique (including volume rendering and maximum intensity projection) as a routine component of enterography interpretation can play a vital role in improving diagnostic accuracy.
Misirlioglu, Melda; Adisen, Mehmet Zahit; Yardimci, Selmi [Dept. of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kirikkale University, Kirikkale (Turkey); Nalcaci, Rana [Dept. of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara (Turkey)
2013-09-15
Tonsilloliths are calcifications found in the crypts of the palatine tonsils and can be detected on routine panoramic examinations. This study was performed to highlight the benefits of cone-beam computed tomography (CBCT) in the diagnosis of tonsilloliths appearing bilaterally on panoramic radiographs. The sample group consisted of 7 patients who had bilateral radiopaque lesions in the area of the ascending ramus on panoramic radiographs. CBCT images for every patient were obtained from both sides of the jaw to determine the exact locations of the lesions and to rule out other calcifications. The calcifications were evaluated on the CBCT images using Ez3D2009 software. Additionally, the obtained images in DICOM format were transferred to ITK-SNAP 2.4.0 software for semiautomatic segmentation. Segmentation was performed using contrast differences between the soft tissues and calcifications on grayscale images, and the volumes in mm{sup 3} of the segmented three-dimensional models were obtained. CBCT scans revealed that what appeared on panoramic radiographs as bilateral images were in fact unilateral lesions in 2 cases. The total volume of the calcifications ranged from 7.92 to 302.5 mm{sup 3}. The patients with bilaterally multiple and large calcifications were found to be symptomatic. These cases provide evidence that tonsilloliths should be considered in the differential diagnosis of radiopaque masses involving the mandibular ramus, and they highlight the need for a CBCT scan to differentiate pseudo- or ghost images from true bilateral pathologies.
Ono, Ichiro (Fukushima Medical Coll. (Japan)); Ohura, Takehiko; Kawashima, Kunihiro (and others)
1992-02-01
The classification of hemifacial microsomia (HFM) and the grading of its severity have been attempted by Pruzansky, David et al., and by the authors. However, with regard to the deformity of craniofacial bones in bilateral craniofacial microsomia, further investigation is still required. The authors report the results of three-dimensional computed tomography (3D-CT) conducted in patients with a bilateral auricular anomaly and/or facial asymmetry. Further, to determine the site and severity of deformities in bilateral craniofacial microsomia more objectively, 3D measurements of various points were carried out and analyzed by means of a wire frame model called a 'skeletogram'. Through the above methods, and despite the difficulty that no normal facial side exists in bilateral craniofacial microsomia, the authors found that clinical cases of bilateral craniofacial microsomia include a wide variety of pathological conditions. Of interest, in many cases of bilateral craniofacial microsomia, details of the craniofacial bones on the left and the right side often differ. In addition, the craniofacial deformities of one craniofacial microsomia patient resembled those of the Treacher Collins syndrome, which appears to indicate some pathological relationship between craniofacial microsomia and this syndrome. (author).
Isaacs, Kristin K; Schlesinger, R B; Martonen, Ted B
2006-01-01
Simulation of the dynamics and disposition of inhaled particles within human lungs is an invaluable tool in both the development of inhaled pharmacologic drugs and the risk assessment of environmental particulate matter (PM). The goal of the present focused study was to assess the utility of three-dimensional computational fluid dynamics (CFD) models in studying the local deposition patterns of PM in respiratory airways. CFD models were validated using data from published experimental studies in human lung casts. The ability of CFD to appropriately simulate trends in deposition patterns due to changing ventilatory conditions was specifically addressed. CFD simulations of airflow and particle motion were performed in a model of the trachea and main bronchi using Fluent Inc.'s FIDAP CFD software. Particle diameters of 8 microm were considered for input flow rates of 15 and 60 L/min. CFD was able to reproduce the observed spatial heterogeneities of deposition within the modeled bifurcations, and correctly predicted the "hot-spots" of particle deposition on carinal ridges. The CFD methods also predicted observed differences in deposition for high versus low flow rates. CFD models may provide an efficient means of studying the complex effects of airway geometry, particle characteristics, and ventilatory parameters on particle deposition and therefore aid in the design of human subject experiments.
Meng Liu; Cheng-yuan Wu; Yu-guang Liu; Hong-wei Wang; Fan-gang Meng
2005-01-01
Objective: To evaluate the effectiveness of three-dimensional computed tomography (3D-CT) guided radiofrequency trigeminal rhizotomy (RF-TR) in the treatment of idiopathic trigeminal neuralgia (ITN). Methods: From 1999 to 2001, 18 patients with ITN were treated with percutaneous controlled RF-TR. Intraoperative 3D-CT scanning was performed to guide the trajectory of the puncture. After correction of the needle tip position according to the CT scans and stimulation effects, 2 to 5 lesions were made for a duration of 60-90 seconds at a temperature of 60°C to 75°C depending on the pain distribution and the age of the patient. Results: The needles were located in the foramen ovale. Pain was alleviated immediately, with no serious complications in any patient. The patients were followed up for an average of 31.5 months (range 24-41 months). Acute pain relief was experienced by 17 patients after the procedure, an initial success rate of 94.4%. Early (<6 months) pain recurrence was observed in 2 patients (11.1%), whereas late (>6 months) recurrence was reported in 3 patients (16.7%). Thirteen patients had complete pain control, with no need for medication thereafter. Five cases experienced partial pain relief, but required medication at a lower dose than in the preoperative period. Conclusion: 3D-CT localization of the foramen ovale can raise the success rate of puncture, enhance safety, and reduce the incidence of complications.
Tanabe, Yuki; Kido, Teruhito; Kurata, Akira; Sawada, Shun; Suekuni, Hiroshi; Kido, Tomoyuki; Yokoi, Takahiro; Miyagawa, Masao; Mochizuki, Teruhito [Ehime University Graduate School of Medicine, Department of Radiology, Toon City, Ehime (Japan); Uetani, Teruyoshi; Inoue, Katsuji [Ehime University Graduate School of Medicine, Department of Cardiology, Pulmonology, Hypertension and Nephrology, Toon City, Ehime (Japan)
2017-04-15
To evaluate the feasibility of three-dimensional (3D) maximum principal strain (MP-strain) derived from cardiac computed tomography (CT) for detecting myocardial infarction (MI). Forty-three patients who underwent cardiac CT and magnetic resonance imaging (MRI) were retrospectively selected. Using the voxel tracking of motion coherence algorithm, the peak CT MP-strain was measured using the 16-segment model. Based on the transmural extent of late gadolinium enhancement (LGE) and the distance from the MI, all segments were classified into four groups (infarcted, border, adjacent, and remote segments); infarcted and border segments were defined as MI with LGE positive. Diagnostic performance of MP-strain for detecting MI was compared with per cent systolic wall thickening (%SWT) assessed by MRI using receiver-operating characteristic curve analysis at a segment level. Of 672 segments (excluding 16 segments influenced by artefacts), 193 were diagnosed as MI. Sensitivity and specificity of peak MP-strain to identify MI were 81 % [95 % confidence interval (95 % CI): 74-88 %] and 86 % (81-92 %), compared with %SWT: 76 % (60-95 %) and 68 % (48-84 %), respectively. The area under the curve of peak MP-strain was superior to %SWT [0.90 (0.87-0.93) vs. 0.80 (0.76-0.83), p < 0.05]. CT MP-strain has the potential to provide incremental value to coronary CT angiography for detecting MI. (orig.)
Visualization of torn anterior cruciate ligament using 3-dimensional computed tomography
Hiroaki Uozumi
2013-07-01
Recently, a remnant-preserving anterior cruciate ligament (ACL) reconstruction technique has been developed. However, the pre-operative condition of the remnant ACL is occasionally difficult to evaluate by magnetic resonance imaging. The purpose of this study was to evaluate the accuracy of pre-operative visualization of the remnant ACL using three-dimensional computed tomography (3D-CT). The remnant ACL in 25 patients was examined by 3D-CT before ACL reconstruction surgery. Findings on 3D-CT images and arthroscopy were compared. The 3D-CT images were classified into 4 groups: Group A, remnant fibers attached to the posterior cruciate ligament (PCL); Group B, those located between the PCL and the lateral wall; Group C, those attached to the lateral wall; and Group D, no identifiable remnant fibers on the tibial side. These groups were made up of 4, 3, 9 and 9 patients, respectively. Findings on 3D-CT images were identical to those during arthroscopy in 20 of 25 cases (80%). The remnant ACL can thus be accurately evaluated using 3D-CT in 80% of cases of torn ACL. This novel method is a useful technique for pre-operative assessment of the remnant ACL.
M Chokkalingam
2011-01-01
Aim: The purpose of this study was to evaluate the adequacy of three obturation techniques, namely lateral condensation, EQ Fil (backfill) obturation and thermafil (core carrier) obturation, using three-dimensional (3D) helical computed tomography (CT) by the volume rendering method. Materials and Methods: Thirty freshly extracted teeth were randomly divided into three groups of 10 teeth each. Biomechanical preparation was done in all the teeth using rotary instruments. All three sets of teeth were placed in a helical CT slice scanner and were imaged before obturation. The three sets were then obturated by the following methods: Group I: lateral condensation, Group II: EQ Fil (backfill) and Group III: thermafil (core carrier) obturation. The volumes of the pulp chamber and of the gutta-percha after obturation were calculated using the volume rendering technique, and the adequacy of the obturation techniques was calculated. Statistical Analysis Used: One-way ANOVA and multiple-range Tukey test by the Tukey-HSD procedure. Results: The mean change with lateral condensation (0.005±0.002) was significantly higher than that with thermafil obturation (0.002±0.001) [P<0.05]. Conclusions: The conventional lateral condensation technique showed the greatest inadequacy of obturation and the thermafil obturation technique the least, as determined from the calculated and reconstructed volumes of the specimens.
Reproducibility of imaging skull anatomic landmarks utilizing three-dimensional computed tomography
Sugawara, Yasushi; Harii, Kiyonori (Tokyo Univ. (Japan). Faculty of Medicine); Hirabayashi, Shinichi
1994-05-01
The study investigated the reproducibility of locating specific anatomic landmarks, utilizing computed tomography (CT), for the purpose of assigning accurate coordinates on the skull. Three-dimensional (3-D) CT data, obtained by scanning a dry adult skull, were processed using a multi-planar reconstruction (MPR) system. Each landmark was identified five times by the same technician, and the average distances between points identifying the same landmark were calculated. The 15 landmarks studied were the infra-orbital foramina, the external auditory meatus, the foramina rotundum, the foramina ovale, the optic canals, anterior clinoid processes, anterior nasal spine, crista galli, and the sella turcica. Three additional artificial markers placed in occlusal dental splints were also examined. The clinoid processes were identified with the highest degree of accuracy. The crista galli and optic canals were also located with reproducible results. The standard deviation calculated from the five attempts to locate the artificial markers was smaller than that calculated from attempts to identify any of the landmarks. This implies that coordinates on the craniofacial bones should be defined using artificial markers rather than bony landmarks. Artificial markers placed in occlusal dental splints can easily be applied clinically. Complicated facial bone contours should be analyzed mathematically. In a clinical setting, these points were found to be reproducible among the 15 bony landmarks on the skull. (N.K.).
Gastelum, Alfonso; Mata, Lucely; Brito-de-la-Fuente, Edmundo; Delmas, Patrice; Vicente, William; Salinas-Vázquez, Martín; Ascanio, Gabriel; Marquez, Jorge
2016-03-01
We aimed to provide realistic three-dimensional (3D) models to be used in numerical simulations of peristaltic flow in patients exhibiting difficulty in swallowing, also known as dysphagia. To this end, a 3D model of the upper gastrointestinal tract was built from the color cryosection images of the Visible Human Project dataset. Regional color heterogeneities were corrected by centering local histograms of the image difference between slices. A voxel-based model was generated by stacking contours from the color images. A triangle mesh was built, smoothed and simplified. Visualization tools were developed for browsing the model at different stages and for virtual endoscopy navigation. As a result, a computer model of the esophagus and the stomach was obtained, mainly for modeling swallowing disorders. A central-axis curve was also obtained for virtual navigation and to replicate conditions relevant to swallowing disorders modeling. We show renderings of the model and discuss its use for simulating swallowing as a function of bolus rheological properties. The information obtained from simulation studies with our model could be useful for physicians in selecting the correct nutritional emulsions for patients with dysphagia.
Computer model of two-dimensional solute transport and dispersion in ground water
Konikow, Leonard F.; Bredehoeft, J.D.
1978-01-01
This report presents a model that simulates solute transport in flowing ground water. The model is both general and flexible in that it can be applied to a wide range of problem types. It is applicable to one- or two-dimensional problems involving steady-state or transient flow. The model computes changes in concentration over time caused by the processes of convective transport, hydrodynamic dispersion, and mixing (or dilution) from fluid sources. The model assumes that the solute is non-reactive and that gradients of fluid density, viscosity, and temperature do not affect the velocity distribution. However, the aquifer may be heterogeneous and (or) anisotropic. The model couples the ground-water flow equation with the solute-transport equation. The digital computer program uses an alternating-direction implicit procedure to solve a finite-difference approximation to the ground-water flow equation, and it uses the method of characteristics to solve the solute-transport equation. The latter uses a particle-tracking procedure to represent convective transport and a two-step explicit procedure to solve a finite-difference equation that describes the effects of hydrodynamic dispersion, fluid sources and sinks, and divergence of velocity. This explicit procedure has several stability criteria, but the consequent time-step limitations are automatically determined by the program. The report includes a listing of the computer program, which is written in FORTRAN IV and contains about 2,000 lines. The model is based on a rectangular, block-centered, finite-difference grid. It allows the specification of any number of injection or withdrawal wells and of spatially varying diffuse recharge or discharge, saturated thickness, transmissivity, boundary conditions, and initial heads and concentrations. The program also permits the designation of up to five nodes as observation points, for which a summary table of head and concentration versus time is printed at the end of the
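The explicit-step stability limits mentioned above can be illustrated with a much simpler scheme than the report's ADI/method-of-characteristics combination. The sketch below is a hypothetical one-dimensional explicit advection-dispersion step (upwind convection, central dispersion) with the two classic time-step limits, the Courant condition and the explicit-diffusion condition, analogous to the constraints the program determines automatically; it is not the report's FORTRAN IV algorithm.

```python
import numpy as np

def transport_step(c, v, D, dx, dt):
    """One explicit step of dc/dt = -v dc/dx + D d2c/dx2 (v >= 0),
    with upwind advection, central dispersion, and fixed end values.
    Hypothetical helper, not the report's solution procedure."""
    cn = c.copy()
    adv = -v * (c[1:-1] - c[:-2]) / dx                    # upwind convective term
    disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2   # central dispersion term
    cn[1:-1] = c[1:-1] + dt * (adv + disp)
    return cn

def stable_dt(v, D, dx):
    """Largest time step satisfying both the Courant condition
    (v dt / dx <= 1) and the explicit-diffusion condition
    (D dt / dx^2 <= 1/2)."""
    return min(dx / v, dx**2 / (2.0 * D))
```

With a stable time step, an interior concentration pulse both spreads (dispersion) and drifts downstream (convection) while total mass is conserved away from the boundaries.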
Yin, L.; Stark, D. J.; Albright, B. J.
2016-10-01
Laser-ion acceleration via relativistic induced transparency provides an effective means to accelerate ions to tens of MeV/nucleon over distances of tens of μm. These ion sources may enable a host of applications, from fast ignition and x-ray sources to medical treatments. Understanding whether two-dimensional (2D) PIC simulations can capture the relevant 3D physics is important to the development of a predictive capability for short-pulse laser-ion acceleration and for economical design studies for applications of these accelerators. In this work, PIC simulations are performed in 3D and in 2D where the direction of the laser polarization is in the simulation plane (2D-P) and out-of-plane (2D-S). Our studies indicate modeling sensitivity to dimensionality and laser polarization. Differences arise in energy partition, electron heating, ion peak energy, and ion spectral shape. 2D-P simulations are found to over-predict electron heating and ion peak energy. The origin of these differences and the extent to which 2D simulations may capture the key acceleration dynamics will be discussed. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Los Alamos National Laboratory Directed Research and Development Program.
Bauer, Georg; Gamnitzer, Peter [Institute for Computational Mechanics, Technische Universität München, Boltzmannstr. 15, 85747 Garching (Germany); Gravemeier, Volker, E-mail: vgravem@lnm.mw.tum.de [Institute for Computational Mechanics, Technische Universität München, Boltzmannstr. 15, 85747 Garching (Germany); Emmy Noether Research Group “Computational Multiscale Methods for Turbulent Combustion”, Technische Universität München, Boltzmannstr. 15, 85747 Garching (Germany); Wall, Wolfgang A. [Institute for Computational Mechanics, Technische Universität München, Boltzmannstr. 15, 85747 Garching (Germany)
2013-10-15
Highlights: •We present a computational method for coupled multi-ion transport in turbulent flow. •The underlying formulation is a variational multiscale finite element method. •It is combined with the isogeometric concept for electrochemical systems. •Coupled multi-ion transport in fully turbulent Taylor–Couette flow is simulated. •This example is an important model problem for rotating cylinder electrodes. -- Abstract: Electrochemical processes, such as electroplating of large items in galvanic baths, are often coupled to turbulent flow. In this study, we propose an isogeometric residual-based variational multiscale finite element method for multi-ion transport in dilute electrolyte solutions under turbulent flow conditions. In other words, this means that the concepts of isogeometric discretization and variational multiscale methods are successfully combined for developing a method capable of simulating the challenging problem of coupled multi-ion transport in turbulent flow. We present a comprehensive three-dimensional computational method taking into account, among others, coupled convection–diffusion-migration equations subject to an electroneutrality constraint in combination with phenomenological electrode-kinetics modeling. The electrochemical subproblem is one-way coupled to turbulent incompressible flow via convection. Ionic mass transfer in turbulent Taylor–Couette flow is investigated, representing an important model problem for rotating-cylinder-electrode configurations. Multi-ion transport as considered here is an example for mass transport at high Schmidt number (Sc=1389). An isogeometric discretization is especially advantageous for the present problem, since (i) curved boundaries can be represented exactly, and (ii) it has been proven to provide very accurate solutions for flow quantities when being applied in combination with residual-based variational multiscale modeling. We demonstrate that the method is robust and provides
J. Doornberg; A. Lindenhovius; P. Kloen; C.N. van Dijk; D. Zurakowski; D. Ring
2006-01-01
Background: Complex fractures of the distal part of the humerus can be difficult to characterize on plain radiographs and two-dimensional computed tomography scans. We tested the hypothesis that three-dimensional reconstructions of computed tomography scans improve the reliability and accuracy of fr
Truong, T. K.; Liu, K. Y.; Reed, I. S.
1983-01-01
It is pointed out that the two-dimensional cyclic convolution is a useful tool for many two-dimensional digital signal processing applications. Two important applications are related to spaceborne high-resolution synthetic aperture radar (SAR) processing and image processing. Nussbaumer and Quandalle (1978) showed that a radix-2 polynomial transform analogous to the conventional radix-2 FFT algorithm can be used to compute a two-dimensional cyclic convolution. On the basis of results reported by Arambepola and Rayner (1979), a radix-2 polynomial transform can be defined to compute a multidimensional cyclic convolution. Truong et al. (1981) used the considered ideas together with the Chinese Remainder Theorem to further reduce the complexity of the radix-2 fast polynomial transform (FPT). Reed et al. (1981) demonstrated that such a new FPT algorithm is significantly faster than the FFT algorithm for computing a two-dimensional convolution. In the present investigation, a parallel-pipeline architecture is considered for implementing the FPT developed by Truong et al.
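As a point of reference, the 2D cyclic convolution that the FPT accelerates can be stated compactly through the convolution theorem. The sketch below is an illustrative FFT-based stand-in, not the polynomial-transform algorithm from the record; the function name is an assumption.

```python
import numpy as np

def cyclic_conv2d(x, h):
    # 2D cyclic (circular) convolution via the convolution theorem:
    # the elementwise product of 2D DFTs. The FPT computes the same
    # result with fewer multiplications for suitable sizes.
    X = np.fft.fft2(x)
    H = np.fft.fft2(h)
    return np.real(np.fft.ifft2(X * H))
```

For real inputs the result equals the direct double sum y[m,n] = sum over (i,j) of x[i,j] * h[(m-i) mod M, (n-j) mod N], which is what both the FFT and FPT routes evaluate.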
A spectral multiscale hybridizable discontinuous Galerkin method for second order elliptic problems
Efendiev, Yalchin R.
2015-08-01
We design a multiscale model reduction framework within the hybridizable discontinuous Galerkin finite element method. Our approach uses local snapshot spaces and local spectral decomposition following the concept of Generalized Multiscale Finite Element Methods. We propose several multiscale finite element spaces on the coarse edges that provide a reduced dimensional approximation for numerical traces within the HDG framework. We provide a general framework for systematic construction of multiscale trace spaces. Using local snapshots, we avoid high dimensional representation of trace spaces and use some local features of the solution space in constructing a low dimensional trace space. We investigate the solvability and numerically study the performance of the proposed method on a representative number of numerical examples.
Francisco A. Arenhart
2016-11-01
This paper presents a comparison of surface-based and image-based quality metrics for dimensional X-ray computed tomography (CT) data. The chosen metrics are used to characterize two key aspects in acquiring signals with CT systems: the loss of information (blurring) and the adding of unwanted information (noise). A set of structured experiments was designed to test the response of the metrics to different influencing factors. It is demonstrated that, under certain circumstances, the results of both types of metrics become conflicting, emphasizing the importance of using surface information for evaluating the quality of dimensional CT data. Specific findings using both types of metrics are also discussed.
Multiscale macromolecular simulation: role of evolving ensembles.
Singharoy, A; Joshi, H; Ortoleva, P J
2012-10-22
Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs) characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant-OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin time step is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed at earlier steps (the history) of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusions. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented into our existing force-field based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers.
Dong, Yuefu; Mou, Zhifang; Huang, Zhenyu; Hu, Guanghong; Dong, Yinghai; Xu, Qingrong
2013-10-01
Three-dimensional reconstruction of human body from a living subject can be considered as the first step toward promoting virtual human project as a tool in clinical applications. This study proposes a detailed protocol for building subject-specific three-dimensional model of knee joint from a living subject. The computed tomography and magnetic resonance imaging image data of knee joint were used to reconstruct knee structures, including bones, skin, muscles, cartilages, menisci, and ligaments. They were fused to assemble the complete three-dimensional knee joint. The procedure was repeated three times with respect to three different methods of reference landmarks. The accuracy of image fusion in accordance with different landmarks was evaluated and compared with each other. The complete three-dimensional knee joint, which included 21 knee structures, was accurately developed. The choice of external or anatomical landmarks was not crucial to improve image fusion accuracy for three-dimensional reconstruction. Further work needs to be done to explore the value of the reconstructed three-dimensional knee joint for its biomechanics and kinematics.
Multiscale Modeling in the Clinic: Drug Design and Development.
Clancy, Colleen E; An, Gary; Cannon, William R; Liu, Yaling; May, Elebeoba E; Ortoleva, Peter; Popel, Aleksander S; Sluka, James P; Su, Jing; Vicini, Paolo; Zhou, Xiaobo; Eckmann, David M
2016-09-01
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multiscale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multiscale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multiscale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical and computational techniques employed for multiscale modeling approaches used in pharmacometric and systems pharmacology models in drug development and present several examples illustrating the current state-of-the-art models for (1) excitable systems and applications in cardiac disease; (2) stem cell driven complex biosystems; (3) nanoparticle delivery, with applications to angiogenesis and cancer therapy; (4) host-pathogen interactions and their use in metabolic disorders, inflammation and sepsis; and (5) computer-aided design of nanomedical systems. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multiscale models.
Multiscale Phenomenology of the Cosmic Web
Aragon-Calvo, Miguel A; Jones, Bernard J T
2010-01-01
We analyze the structure and connectivity of the distinct morphologies that define the Cosmic Web. With the help of our Multiscale Morphology Filter (MMF), we dissect the matter distribution of a cosmological $\Lambda$CDM N-body computer simulation into clusters, filaments and walls. The MMF is ideally suited to address both the anisotropic morphological character of filaments and sheets, as well as the multiscale nature of the hierarchically evolved cosmic matter distribution. The results of our study may be summarized as follows: i).- While all morphologies occupy a roughly well defined range in density, this alone is not sufficient to differentiate between them given their overlap. Environment defined only in terms of density fails to incorporate the intrinsic dynamics of each morphology. This plays an important role in both linear and nonlinear interactions between haloes. ii).- Most of the mass in the Universe is concentrated in filaments, narrowly followed by clusters. In terms of volume, clusters only r...
Multiscale study for stochastic characterization of shale samples
Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad; Piri, Mohammad
2016-03-01
Characterization of shale reservoirs, which are typically of low permeability, is very difficult because of the presence of multiscale structures. While three-dimensional (3D) imaging can be an ultimate solution for revealing important complexities of such reservoirs, acquiring such images is costly and time consuming. On the other hand, high-quality 2D images, which are widely available, also reveal useful information about shales' pore connectivity and size. Most of the current modeling methods that are based on 2D images use limited and insufficient extracted information. One remedy to the shortcoming is direct use of qualitative images, a concept that we introduce in this paper. We demonstrate that higher-order statistics (as opposed to the traditional two-point statistics, such as variograms) are necessary for developing an accurate model of shales, and describe an efficient method for using 2D images that is capable of utilizing qualitative and physical information within an image and generating stochastic realizations of shales. We then further refine the model by describing and utilizing several techniques, including an iterative framework, for removing some possible artifacts and better pattern reproduction. Next, we introduce a new histogram-matching algorithm that accounts for concealed nanostructures in shale samples. We also present two new multiresolution and multiscale approaches for dealing with distinct pore structures that are common in shale reservoirs. In the multiresolution method, the original high-quality image is upscaled in a pyramid-like manner in order to achieve more accurate global and long-range structures. The multiscale approach integrates two images, each containing diverse pore networks - the nano- and microscale pores - using a high-resolution image representing small-scale pores and, at the same time, reconstructing large pores using a low-quality image. Eventually, the results are integrated to generate a 3D model. The methods
Three dimensional computer simulation for NO{sub x} emission in Oestrand recovery boiler
Tao Lixin [Royal Inst. of Tech., Stockholm (Sweden). Dept. of Metallurgy
2000-05-01
This report presents the results achieved in a research project (no. 663021) financed by NUTEK and AAFORSK. The objective of this project is to develop and validate a proper NO{sub x} model for black liquor recovery boilers. The project has been carried out through a close co-operation between the division of Heat and Furnace Technology at KTH, the combustion chemistry research group at Aabo Akademi University in Finland and AaF Energikonsult Stockholm AB. As a result from this project, a NO{sub x} model is developed as a new component in the general framework of the recovery boiler model: STAR-RBM. STAR-RBM is a fundamental three-dimensional computer model for the simulation of the flow, heat transfer, combustion and NO{sub x} emission in a black liquor recovery boiler. It is constructed around a general-purpose Computational Fluid Dynamics (CFD) package: STAR-CD. In this report, a three-dimensional numerical simulation for NO{sub x} emission in Oestrand recovery boiler is described and discussed. The NO{sub x} model developed in this project considers the NO formation from fuel-NO and thermal-NO mechanisms. The fuel-NO mechanism is proposed by Aabo Akademi University. It is assumed that the fuel nitrogen in black liquor is released via either devolatilization or char combustion. It has been found by laboratory studies that approximately 70% of the fuel nitrogen is released during devolatilization, mainly as NH{sub 3} and N{sub 2}. The overall gas-phase reactions for fuel-NO chemistry are based on those of Mitchell and Tarbell. It has been found in this work that the kinetic rates of the fuel-NO reactions are generally higher than the rate of turbulence mixing. Thus, the eddy dissipation concept proposed by Magnussen and Hjertager is applied to calculate the rate of fuel-NO formation. The thermal-NO mechanism is based on an extended Zeldovich mechanism. Invoking a steady-state approximation for N-atom and assuming that the O-atom concentration may be calculated from
Vlachopoulos, Lazaros; Dünner, Celestine; Gass, Tobias; Graf, Matthias; Goksel, Orcun; Gerber, Christian; Székely, Gábor; Fürnstahl, Philipp
2016-02-01
In the presence of severe osteoarthritis, osteonecrosis, or proximal humeral fracture, the contralateral humerus may serve as a template for the 3-dimensional (3D) preoperative planning of reconstructive surgery. The purpose of this study was to develop algorithms for performing 3D measurements of the humeral anatomy and further to assess side-to-side (bilateral) differences in humeral head retrotorsion, humeral head inclination, humeral length, and humeral head radius and height. The 3D models of 140 paired humeri (70 cadavers) were extracted from computed tomographic data. Geometric characteristics quantifying the humeral anatomy in 3D were determined in a semiautomatic fashion using the developed computer algorithms. The results between the sides were compared for evaluating bilateral differences. The mean bilateral difference of the humeral retrotorsion angle was 6.7° (standard deviation [SD], 5.7°; range, -15.1° to 24.0°; P = .063); the mean side difference of the humeral head inclination angle was 2.3° (SD, 1.8°; range, -5.1° to 8.4°; P = .12). The side difference in humeral length (mean, 2.9 mm; SD, 2.5 mm; range, -8.7 mm to 10.1 mm; P = .04) was significant. The mean side difference in the head sphere radius was 0.5 mm (SD, 0.6 mm; range, -3.2 mm to 2.2 mm; P = .76), and the mean side difference in humeral head height was 0.8 mm (SD, 0.6 mm; range, -2.4 mm to 2.4 mm; P = .44). The contralateral anatomy may serve as a reliable reconstruction template for humeral length, humeral head radius, and humeral head height if it is analyzed with 3D algorithms. In contrast, determining humeral head retrotorsion and humeral head inclination from the contralateral anatomy may be more prone to error. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Maidment, Susannah C R; Bates, Karl T; Falkingham, Peter L; VanBuren, Collin; Arbour, Victoria; Barrett, Paul M
2014-08-01
Ornithischian dinosaurs were primitively bipedal with forelimbs modified for grasping, but quadrupedalism evolved in the clade on at least three occasions independently. Outside of Ornithischia, quadrupedality from bipedal ancestors has only evolved on two other occasions, making this one of the rarest locomotory transitions in tetrapod evolutionary history. The osteological and myological changes associated with these transitions have only recently been documented, and the biomechanical consequences of these changes remain to be examined. Here, we review previous approaches to understanding locomotion in extinct animals, which can be broadly split into form-function approaches using analogy based on extant animals, limb-bone scaling, and computational approaches. We then carry out the first systematic attempt to quantify changes in locomotor muscle function in bipedal and quadrupedal ornithischian dinosaurs. Using three-dimensional computational modelling of the major pelvic locomotor muscle moment arms, we examine similarities and differences among individual taxa, between quadrupedal and bipedal taxa, and among taxa representing the three major ornithischian lineages (Thyreophora, Ornithopoda, Marginocephalia). Our results suggest that the ceratopsid Chasmosaurus and the ornithopod Hypsilophodon have relatively low moment arms for most muscles and most functions, perhaps suggesting poor locomotor performance in these taxa. Quadrupeds have higher abductor moment arms than bipeds, which we suggest is due to the overall wider bodies of the quadrupeds modelled. A peak in extensor moment arms at more extended hip angles and lower medial rotator moment arms in quadrupeds than in bipeds may be due to a more columnar hindlimb and loss of medial rotation as a form of lateral limb support in quadrupeds. We are not able to identify trends in moment arm evolution across Ornithischia as a whole, suggesting that the bipedal ancestry of ornithischians did not constrain the
Cui, Guoqiang; Jew, Brian; Hong, Julian C; Johnston, Eric W; Loo, Billy W; Maxim, Peter G
2012-11-08
The aim of this study is to develop an automated method to objectively compare motion artifacts in two four-dimensional computed tomography (4D CT) image sets, and identify the one that would appear to human observers with fewer or smaller artifacts. Our proposed method is based on the difference of the normalized correlation coefficients between edge slices at couch transitions, which we hypothesize may be a suitable metric to identify motion artifacts. We evaluated our method using ten pairs of 4D CT image sets that showed subtle differences in artifacts between images in a pair, which were identifiable by human observers. One set of 4D CT images was sorted using breathing traces in which our clinically implemented 4D CT sorting software miscalculated the respiratory phase, which expectedly led to artifacts in the images. The other set of images consisted of the same images; however, these were sorted using the same breathing traces but with corrected phases. Next we calculated the normalized correlation coefficients between edge slices at all couch transitions for all respiratory phases in both image sets to evaluate for motion artifacts. For nine image set pairs, our method identified the 4D CT sets sorted using the breathing traces with the corrected respiratory phase to result in images with fewer or smaller artifacts, whereas for one image pair, no difference was noted. Two observers independently assessed the accuracy of our method. Both observers identified 9 image sets that were sorted using the breathing traces with corrected respiratory phase as having fewer or smaller artifacts. In summary, using the 4D CT data of ten pairs of 4D CT image sets, we have demonstrated proof of principle that our method is able to replicate the results of two human observers in identifying the image set with fewer or smaller artifacts.
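The metric underlying the method in this record, the normalized correlation coefficient between edge slices at a couch transition, can be sketched as follows. This is an illustrative implementation, not the authors' code, and the function name is an assumption.

```python
import numpy as np

def ncc(a, b):
    # Normalized correlation coefficient between two edge slices.
    # Values near 1 indicate smooth anatomy across a couch transition;
    # lower values suggest a motion artifact.
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Comparing the per-transition coefficients between two candidate 4D CT sortings, as the study does, then amounts to differencing these values and favoring the sorting with the higher coefficients.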
Extensor tendon rupture and three-dimensional computed tomography imaging of the rheumatoid wrist
Abe, Asami; Ishikawa, Hajime; Murasawa, Akira; Nakazono, Kiyoshi [Niigata Rheumatic Center, Department of Rheumatology, Shibata, Niigata (Japan)
2010-04-15
Extensor tendon rupture on the dorsum of the wrist is commonly seen in patients with rheumatoid arthritis (RA). The diagnosis of tendon rupture is usually straightforward, but it is sometimes difficult in the hand with complex deformity. The purposes of this study were to investigate the reliability of three-dimensional computed tomography (3DCT) imaging of extensor tendons in the rheumatoid wrist and in the normal wrist and to clarify the validity of its clinical application to the diagnosis of tendon rupture in the rheumatoid wrist. Preoperative 3DCT images of 48 wrists of 45 patients with RA and 3DCT images of 38 wrists of 38 healthy volunteers were reviewed retrospectively by six orthopaedic surgeons who were unaware of all other study data. Extensor tendon rupture was verified by operation on 20 rheumatoid wrists. Regarding interobserver and intraobserver reliabilities of 3DCT imaging of the extensor tendons, agreement with respect to tendon rupture in this study group was high, and Cohen's kappa ({kappa}) coefficient was variable, depending on the individual tendon. Positive predictive value (PPV) of tendon rupture in the extensor digiti minimi (EDM), extensor digitorum communis (EDC) V and IV and extensor pollicis longus (EPL) tendons was more than 60%, but those for the other extensor tendons were less than 50%. Negative predictive value (NPV) was more than 96% in all extensor tendons, in both rheumatoid and normal wrists. Extensor tendons in normal and rheumatoid wrists were well depicted by 3DCT imaging. In the rheumatoid wrists, extensors of the ring and little fingers and the thumb were depicted more accurately than those to the other fingers. 3DCT imaging was clinically applicable to wrists for which it was difficult to diagnose by physical examination a definite cause for the loss of extension of the fingers. (orig.)
Gjertsen, Oe.; Schellhorn, T.; Nakstad, P.H. (Dept. of Neuroradiology, Division of Medical Services, Ullevål Univ. Hospital, Univ. of Oslo, Oslo (Norway))
2008-11-15
Background: Osteoporotic sacral insufficiency fractures are usually spontaneous or caused by discrete traumas. The fluoroscopic anatomy of the sacrum can be difficult to understand, and this is why sacroplasty is considered more challenging than ordinary vertebroplasties. Purpose: To demonstrate the planning of the procedure and the effectiveness of treatment with sacroplasty by means of three-dimensional computed tomography (3D CT) by combining multiplanar reconstructions (MPR) and volume-rendering technique (VRT). Material and Methods: Five elderly, osteoporotic patients with intense pelvic and hip pain underwent weeks of inconclusive clinical and radiological diagnostic efforts. Correct diagnosis was finally attained with magnetic resonance imaging (MRI) and CT. Plain radiographs rarely show fractures, and MR or CT examinations are necessary to demonstrate longitudinal fractures. The procedures were performed with digital biplane equipment using preoperative 3D CT planning procedures. Polymethyl methacrylate (PMMA) was injected to fill the fracture sites. Results: The fractures were successfully treated with sacroplasty using PMMA. A new technique, which involves placing the needles along the long axis of the sacrum, was optimized to the individual patients' fractures and sacral anatomy by meticulous planning on a workstation with 3D CT data sets. It was technically successful in all five cases. Four of the five patients had sustained pain relief. Conclusion: Sacral insufficiency fractures are not uncommon and should be considered in the elderly population with low back pain. Sacroplasty using the optimized 'long-axis technique' gave almost immediate pain relief for all five patients in our study material. No complications were observed.
Three-dimensional computer modeling of birds proepicardium on different stages of embryogenesis
Pototskaya O.Yu.
2009-01-01
As the proepicardium is the source of many cell populations of the mature heart, including the cellular components of the coronary vessels, its investigation has recently attracted attention. It is known that there are some differences between the avian and the mammalian proepicardium; one of them lies in the way in which they contact the myocardium: the mammalian proepicardium produces vesicles, which contact the atrioventricular groove, while in birds there are no vesicles and whole protrusions of the proepicardium attach to the heart. However, in recent years it has become evident that the rat proepicardium produces no vesicles during any stage of its existence, and that birds possess a proepicardium-like structure producing vesicles that attach to the heart. Thus, the goal of our research was to characterize changes in the shape of the proepicardium during avian embryogenesis. We used Cobb 500 chick embryos as material; on the basis of images of serial sections of the proepicardium, and with the help of the Photoshop CS2, Amira for Microscopy 5.0 and 3ds Max 8.0 computer programs, we built three-dimensional models of the proepicardium at stages 15, 16, 17, 18 and 21 of development according to Hamburger and Hamilton (1951). The most important changes in proepicardial shape were observed from stage 14 to stage 20 of development. During this period the proepicardium appeared on the right horn of the sinus venosus, enlarged, and formed several crests that contacted the atrioventricular groove. Coalescence of these crests led to formation of the dorsal mesocardium. From stage 21 to stage 27 of development there were no significant changes in proepicardial shape; the area of its contact with the sinus venosus grew downwards, while the area of contact with the heart enlarged. No vesicles and no "finger-like protrusions" were observed at any stage of avian development.
Anna E Scott
Understanding the three-dimensional (3-D) micro-architecture of lung tissue can provide insights into the pathology of lung disease. Micro computed tomography (µCT) has previously been used to elucidate lung 3D histology and morphometry in fixed samples that have been stained with contrast agents or air inflated and dried. However, non-destructive microstructural 3D imaging of formalin-fixed paraffin-embedded (FFPE) tissues would facilitate retrospective analysis of extensive archives of FFPE lung samples with linked clinical data. FFPE human lung tissue samples (n = 4) were scanned using a Nikon metrology µCT scanner. Semi-automatic techniques were used to segment the 3D structure of airways and blood vessels. Airspace size (mean linear intercept, Lm) was measured on µCT images and on matched histological sections from the same FFPE samples imaged by light microscopy to validate µCT imaging. The µCT imaging protocol provided contrast between tissue and paraffin in FFPE samples (15 mm x 7 mm). Resolution (voxel size 6.7 µm) in the reconstructed images was sufficient for semi-automatic image segmentation of airways and blood vessels as well as quantitative airspace analysis. The scans were also used to scout for regions of interest, enabling time-efficient preparation of conventional histological sections. The Lm measurements from µCT images were not significantly different from those from matched histological sections. We demonstrated how non-destructive imaging of routinely prepared FFPE samples by laboratory µCT can be used to visualize and assess the 3D morphology of the lung, including by morphometric analysis.
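The mean linear intercept used in the record above to validate µCT against histology can be sketched for a binary airspace mask. This simplified version (the function name and the row-wise test-line scheme are assumptions) averages the lengths of uninterrupted airspace runs along horizontal test lines.

```python
import numpy as np

def mean_linear_intercept(air_mask, pixel_size_um):
    # Mean linear intercept (Lm): average length of uninterrupted
    # airspace runs along horizontal test lines across a binary mask
    # (True = airspace), scaled to physical units.
    run_lengths = []
    for row in air_mask:
        # pad with tissue so runs touching the edge are closed
        padded = np.concatenate(([0], row.astype(int), [0]))
        d = np.diff(padded)
        starts = np.where(d == 1)[0]   # tissue -> air transitions
        ends = np.where(d == -1)[0]    # air -> tissue transitions
        run_lengths.extend(ends - starts)
    return float(np.mean(run_lengths) * pixel_size_um) if run_lengths else 0.0
```

With the 6.7 µm voxel size quoted in the record, `pixel_size_um=6.7` converts run lengths in voxels to micrometres.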
Cevidanes, Lucia H. S.; Bailey, L'Tanya J.; Tucker, Scott F.; Styner, Martin A.; Mol, Andre; Phillips, Ceib L.; Proffit, William R.; Turvey, Timothy
2013-01-01
Introduction The purpose of this study was to assess alterations in the 3-dimensional (3D) position of the mandibular rami and condyles in patients receiving either maxillary advancement and mandibular setback or maxillary surgery only. Methods High-resolution cone-beam computed tomography scans were taken of 21 patients before and after orthognathic surgery. Ten patients with various malocclusions underwent maxillary surgery only, and 11 Class III patients received maxillary advancement and mandibular setback. Presurgery and postsurgery 3D models were registered on the surface of the cranial base. A new tool was used for graphical overlay and 3D display with color maps to visually assess the locations and to quantify positional changes in the posterior border of the mandibular rami and condyles between superimposed models. Results The average displacements in condylar position were small—0.77 mm (SD, 0.12 mm) and 0.70 mm (SD, 0.08 mm)—for 2-jaw and 1-jaw surgeries, respectively (not significant, P >.05). All 2-jaw surgery patients had backward rotational displacements of the mandibular rami (mean, 1.98 mm; SD, 1.03 mm), with a maximum surface distance change of ≥2 mm in 8 of 11 subjects. For the 1-jaw surgery, all subjects had small backward rotational displacements of the mandibular rami (mean, 0.78 mm; SD, 0.25 mm), with only 1 subject having a maximum surface distance change ≥2 mm. The difference in mean backward rotational displacement was statistically significant (P <.01). Conclusions The visualization of 3D model superimposition clearly identified the location, magnitude, and direction of mandibular displacement. The 3D imaging allowed quantification of vertical, transverse, and anteroposterior ramus displacement that accompanied mandibular, but not maxillary only, surgery. PMID:17208105
Development of a percentile based three-dimensional model of the buttocks in computer system
Wang, Lijing; He, Xueli; Li, Hongpeng
2016-05-01
There are diverse products related to the human buttocks that need to be designed, manufactured, and evaluated with a 3D buttock model. The 3D buttock models used in the present research field are only simple approximations of the human buttocks, so a percentile-based 3D buttock model is highly desirable for the ergonomic design and evaluation of these products. So far, there has been no research on a percentile sizing system for 3D buttock models. The purpose of this paper is therefore to develop a new method for building a three-dimensional percentile buttock model in a computer system. After scanning the 3D shape of the buttocks, the cloud of 3D points is imported into reverse engineering software (Geomagic) for reconstruction of the buttock surface model. Five characteristic dimensions of the buttock are measured through mark-points after the models are imported into the engineering software CATIA. A series of space points is obtained by intersecting cutting slices with the 3D buttock surface model, and then ordered according to the sequence numbers of the horizontal and vertical slices. The 1st, 5th, 50th, 95th, and 99th percentile values of the five dimensions and the spatial coordinates of the space points are computed and used to reconstruct percentile buttock models. This research proposes a method for establishing a percentile sizing system for the 3D buttock model, based on the percentile values of the ischial tuberosity diameter, the distances from the margin to the ischial tuberosity, and the space coordinates of the characteristic points, so that the Nth percentile 3D buttock model and models of particular buttock types can be established. The proposed method also serves as useful guidance for building 3D percentile models of other parts of the human body with characteristic points.
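The percentile computation at the core of such a sizing system is straightforward; a sketch with hypothetical measurements (the variable name and sample values are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample of one buttock dimension in millimetres; the paper
# computes the 1st/5th/50th/95th/99th percentiles of five such dimensions
# to drive reconstruction of the Nth-percentile model.
ischial_tuberosity_diameter = rng.normal(120.0, 10.0, size=200)

percentiles = np.percentile(ischial_tuberosity_diameter, [1, 5, 50, 95, 99])
```

Each percentile value, together with the correspondingly scaled slice coordinates, would then parameterize one reconstructed percentile surface model.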
Transfer matrix computation of critical polynomials for two-dimensional Potts models
Lykke Jacobsen, Jesper; Scullard, Christian R.
2013-02-01
In our previous work [1] we have shown that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial P_B(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and the manner in which B is tiled to construct the lattice. The real roots v = e^K - 1 of P_B(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, P_B(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of P_B(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8²), kagome, and (3, 12²) lattices for bases of up to respectively 96, 162, and 243 edges, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures v_c obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain v_c(4, 8²) = 3.742 489 (4), v_c(kagome) = 1.876 459 7 (2), and v_c(3, 12²) = 5.033 078 49 (4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.
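Since the abstract defines the temperature variable through v = e^K - 1, the reported critical points translate directly into critical couplings K_c = ln(1 + v_c). A quick check of the q = 3 values quoted above:

```python
import math

# Critical points v_c reported in the abstract for the q = 3 Potts model.
v_c = {"(4, 8^2)": 3.742489, "kagome": 1.8764597, "(3, 12^2)": 5.03307849}

# v = e^K - 1  =>  K_c = ln(1 + v_c)
K_c = {lattice: math.log(1.0 + v) for lattice, v in v_c.items()}

for lattice, k in K_c.items():
    print(f"{lattice}: K_c = {k:.6f}")
```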
Multiscale simulation of microbe structure and dynamics.
Joshi, Harshad; Singharoy, Abhishek; Sereda, Yuriy V; Cheluvaraja, Srinath C; Ortoleva, Peter J
2011-10-01
A multiscale mathematical and computational approach is developed that captures the hierarchical organization of a microbe. It is found that a natural perspective for understanding a microbe is in terms of a hierarchy of variables at various levels of resolution. This hierarchy starts with the N-atom description and terminates with order parameters characterizing a whole microbe. This conceptual framework is used to guide the analysis of the Liouville equation for the probability density of the positions and momenta of the N atoms constituting the microbe and its environment. Using multiscale mathematical techniques, we derive equations for the co-evolution of the order parameters and the probability density of the N-atom state. This approach yields a rigorous way to transfer information between variables on different space-time scales. It elucidates the interplay between equilibrium and far-from-equilibrium processes underlying microbial behavior. It also provides a framework for using coarse-grained nanocharacterization data to guide microbial simulation. It enables a methodical search for free-energy minimizing structures, many of which are typically supported by the set of macromolecules and membranes constituting a given microbe. This suite of capabilities provides a natural framework for arriving at a fundamental understanding of microbial behavior, the analysis of nanocharacterization data, and the computer-aided design of nanostructures for biotechnical and medical purposes. Selected features of the methodology are demonstrated using our multiscale bionanosystem simulator, DeductiveMultiscaleSimulator. Systems used to demonstrate the approach are structural transitions in the cowpea chlorotic mottle virus, RNA of satellite tobacco mosaic virus, virus-like particles related to human papillomavirus, and the iron-binding protein lactoferrin.
Duesbury, R T; O'Neil, H F
1996-06-01
The purpose of this study was to determine the effect of practice in manipulating two- and three-dimensional (2-D and 3-D) wireframe images on a learner's ability to visualize 3-D objects. Practice, with or without rotation, consisted of visualizing 2-D and 3-D objects generated by personal computer (PC)-based computer-assisted design software. Results indicated that participants in the rotation treatment group performed significantly better than those in either the nonrotation or control group on measures of spatial ability and 3-D visualization ability. Both treatment groups performed significantly better than the control group on measures of metacognition, effort, and worry. These results support the conclusion that spatial ability can be improved through practice that allows the learner to see the relationship between the 2-D and 3-D features of objects.
Turkheimer, Federico E; Leech, Robert; Expert, Paul; Lord, Louis-David; Vernon, Anthony C
2015-08-01
A variety of anatomical and physiological evidence suggests that the brain performs computations using motifs that are repeated across species, brain areas, and modalities. The computational architecture of cortex, for example, is very similar from one area to another, and the types, arrangements, and connections of cortical neurons are highly stereotyped. This supports the idea that each cortical area conducts calculations using similarly structured neuronal modules: what we term canonical computational motifs. In addition, the remarkable self-similarity of brain observables at the micro-, meso-, and macro-scales further suggests that these motifs are repeated at increasing spatial and temporal scales, supporting brain activity from primary motor and sensory processing to higher-level behaviour and cognition. Here, we briefly review the biological bases of canonical brain circuits and the role of inhibitory interneurons in these computational elements. We then elucidate how canonical computational motifs can be repeated across spatial and temporal scales to build a multiplexing information system able to encode and transmit information of increasing complexity. We point to the similarities between the patterns of activation observed in primary sensory cortices by use of electrophysiology and those observed in large-scale networks measured with fMRI. We then employ the canonical model of brain function to unify seemingly disparate evidence on the pathophysiology of schizophrenia in a single explanatory framework. We hypothesise that such a framework may also be extended to cover multiple brain disorders which are grounded in dysfunction of GABA interneurons and/or these computational motifs.
Multiscale Simulations Using Particles
Walther, Jens Honore
We are developing particle methods as a general framework for large-scale simulations of discrete and continuous systems in science and engineering. The specific application and research areas include: discrete element simulations of granular flow, smoothed particle hydrodynamics and particle vortex methods for problems in continuum fluid dynamics, dissipative particle dynamics for flow at the meso scale, and atomistic molecular dynamics simulations of nanofluidic systems. We employ multiscale techniques to bridge the atomistic and continuum scales to study fundamental problems in fluid...
Palmer, Grant
1989-01-01
This study presents a three-dimensional explicit, finite-difference, shock-capturing numerical algorithm applied to viscous hypersonic flows in thermochemical nonequilibrium. The algorithm employs a two-temperature physical model. Equations governing the finite-rate chemical reactions are fully coupled to the gas dynamic equations using a novel coupling technique. The new coupling method maintains stability in the explicit, finite-rate formulation while allowing relatively large global time steps. The code uses flux-vector splitting. Comparisons with experimental data and other numerical computations verify the accuracy of the present method. The code is used to compute the three-dimensional flowfield over the Aeroassist Flight Experiment (AFE) vehicle at one of its trajectory points.
Jyothi, D.; Murty, T.V.R.; Sarma, V.V.; Rao, D.P.
of Marine Sciences Vol. 29, June 2000, pp. 185-187. Short Communication: Computation of diffusion coefficients for waters of Gauthami Godavari estuary using one-dimensional advection-diffusion model. D Jyothi, T V Ramana Murty, V V Sarma & D P Rao, National... (... - Jan.); Y2(x) = 8.55283 x + 17.5469 (Jan. - April). These equations would be more useful for obtaining diffusion coefficients at any point along the channel axis, which in turn helps to compute the concentration of pollutant along the axis of the estuary. Thus...
Luzzatto, Stefano; Takahasi, Hiroki
2006-07-01
We formulate and prove a Jakobson-Benedicks-Carleson-type theorem on the occurrence of non-uniform hyperbolicity (stochastic dynamics) in families of one-dimensional maps, based on computable starting conditions and providing explicit, computable, lower bounds for the measure of the set of selected parameters. As a first application of our results we show that the measure of the set of parameters corresponding to maps in the quadratic family f_a(x) = x² - a which have an absolutely continuous invariant probability measure is at least 10^-5000.
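For concreteness, the quadratic family studied here is trivial to iterate; a tiny sketch (the paper's parameter-selection machinery is of course far beyond this):

```python
def quadratic_orbit(a, x0=0.0, n=10):
    """Iterate the quadratic family f_a(x) = x**2 - a, returning the
    orbit [x0, f_a(x0), f_a(f_a(x0)), ...] of length n + 1."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(orbit[-1] ** 2 - a)
    return orbit
```

At the boundary parameter a = 2, for instance, the critical orbit starting at x0 = 0 is eventually fixed: 0, -2, 2, 2, 2, ...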
Mhabary, Ziv; Levi, Ofer; Small, Eran; Stern, Adrian
2016-07-01
This paper presents an efficient method for computing a stack of images digitally focused at various lengths from a four-dimensional light field (LF). The main contribution of this work is a fast and algebraically exact method that does not require interpolation in the frequency or spatial domains as alternative methods do. The proposed imaging operator combines the two-dimensional (2-D) fast Fourier transform with the 2-D fractional Fourier transform and has computational complexity of O(N log N), where N is the number of pixels in the LF tesseract of dimension N = n_x × n_y × n_u × n_v. The whole method consists of unitary vector-based operations; therefore, parallel implementation is easy and can contribute additional speed-up. While current state-of-the-art methods suffer from an inherent tradeoff between reconstruction quality and computational complexity, the proposed method benefits from both low computational complexity and high reconstruction quality. We also offer a solution for refocusing at distances that are not included in the reconstructed image stack. For such a case, we provide a modified version of our method, which is also algebraically exact and has lower computational complexity than other exact methods.
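As context, the conventional spatial-domain baseline that exact LF refocusing methods improve upon can be sketched as a shift-and-add over sub-aperture images. This is illustrative only (hypothetical array layout lf[u, v, y, x] with integer shifts; the paper's FFT/fractional-FFT operator is not reproduced here):

```python
import numpy as np

def refocus_shift_and_add(lf, alpha):
    """Naive refocusing of a 4-D light field lf[u, v, y, x]: shift each
    sub-aperture image proportionally to its (u, v) offset from the
    centre, then average. alpha controls the refocus depth."""
    nu, nv, ny, nx = lf.shape
    out = np.zeros((ny, nx))
    for u in range(nu):
        for v in range(nv):
            dy = int(round(alpha * (u - nu // 2)))
            dx = int(round(alpha * (v - nv // 2)))
            # Circular shift keeps the sketch simple; a real implementation
            # would pad or crop instead of wrapping around.
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (nu * nv)
```

The integer rounding here is exactly the kind of interpolation error the algebraically exact frequency-domain operator avoids.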
Vaquerizo, Beatriz; Theriault-Lauzier, Pascal; Piazza, Nicolo
2015-12-01
Mitral regurgitation is the most prevalent valvular heart disease worldwide. Despite the widespread availability of curative surgical intervention, a considerable proportion of patients with severe mitral regurgitation are not referred for treatment, largely due to the presence of left ventricular dysfunction, advanced age, and comorbid illnesses. Transcatheter mitral valve replacement is a promising therapeutic alternative to traditional surgical valve replacement. The complex anatomical and pathophysiological nature of the mitral valvular complex, however, presents significant challenges to the successful design and implementation of novel transcatheter mitral replacement devices. Patient-specific 3-dimensional computer-based models enable accurate assessment of the mitral valve anatomy and preprocedural simulations for transcatheter therapies. Such information may help refine the design features of novel transcatheter mitral devices and enhance procedural planning. Herein, we describe a novel medical image-based processing tool that facilitates accurate, noninvasive assessment of the mitral valvular complex, by creating precise three-dimensional heart models. The 3-dimensional computer reconstructions are then converted to a physical model using 3-dimensional printing technology, thereby enabling patient-specific assessment of the interaction between device and patient. It may provide new opportunities for a better understanding of the mitral anatomy-pathophysiology-device interaction, which is of critical importance for the advancement of transcatheter mitral valve replacement.
Wieringa, Fokko P; Bouma, Henri; Eendebak, Pieter T; van Basten, Jean-Paul A; Beerlage, Harrie P; Smits, Geert A H J; Bos, Jelte E
2014-04-01
In comparison to open surgery, endoscopic surgery offers impaired depth perception and narrower field-of-view. To improve depth perception, the Da Vinci robot offers three-dimensional (3-D) video on the console for the surgeon but not for assistants, although both must collaborate. We improved the shared perception of the whole surgical team by connecting live 3-D monitors to all three available Da Vinci generations, probed user experience after two years by questionnaire, and compared time measurements of a predefined complex interaction task performed with a 3-D monitor versus two-dimensional. Additionally, we investigated whether the complex mental task of reconstructing a 3-D overview from an endoscopic video can be performed by a computer and shared among users. During the study, 925 robot-assisted laparoscopic procedures were performed in three hospitals, including prostatectomies, cystectomies, and nephrectomies. Thirty-one users participated in our questionnaire. Eighty-four percent preferred 3-D monitors and 100% reported spatial-perception improvement. All participating urologists indicated quicker performance of tasks requiring delicate collaboration (e.g., clip placement) when assistants used 3-D monitors. Eighteen users participated in a timing experiment during a delicate cooperation task in vitro. Teamwork was significantly (40%) faster with the 3-D monitor. Computer-generated 3-D reconstructions from recordings offered very wide interactive panoramas with educational value, although the present embodiment is vulnerable to movement artifacts.
MULTI2D - a computer code for two-dimensional radiation hydrodynamics
Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.
2009-06-01
required. Nature of problem: In inertial confinement fusion and related experiments with lasers and particle beams, energy transport by thermal radiation becomes important. Under these conditions, the radiation field strongly interacts with the hydrodynamic motion through emission and absorption processes. Solution method: The equations of radiation transfer coupled with Lagrangian hydrodynamics, heat diffusion, and beam tracing (laser or ions) are solved in two-dimensional axially symmetric geometry (R-Z coordinates) using a fractional-step scheme. Radiation transfer is solved with angular resolution. Matter properties are either interpolated from tables (equations of state and opacities) or computed by user routines (conductivities and beam attenuation). Restrictions: The code has been designed for typical conditions prevailing in inertial confinement fusion (ns time scale, matter states close to local thermodynamic equilibrium, negligible radiation pressure, …). Although a wider range of situations can be treated, extrapolations to regions beyond this design range need special care. Unusual features: A special computer language, called r94, is used at the top levels of the code. These parts have to be converted to standard C by a translation program (supplied as part of the package). Due to the complexity of the code (hydro-code, grid generation, user interface, graphic post-processor, translator program, installation scripts), extensive manuals are supplied as part of the package. Running time: 567 seconds for the example supplied.
Multiscale Analysis of Information Dynamics for Linear Multivariate Processes
Faes, Luca; Stramaglia, Sebastiano; Nollo, Giandomenico
2016-01-01
In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale infor...
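The rescaling step described above (averaging a VAR process over non-overlapping windows of length τ, which is what introduces the MA component) can be sketched as follows. The VAR(1) coefficients are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1) process x_t = A x_{t-1} + e_t with
# illustrative coefficients.
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])
n, tau = 1000, 4
x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = A @ x[t - 1] + rng.standard_normal(2)

# Multiscale rescaling at time scale tau: average non-overlapping windows.
# This filtering-plus-downsampling is the operation that turns the VAR
# process into a VARMA process in the framework described above.
x_tau = x[: n - n % tau].reshape(-1, tau, 2).mean(axis=1)
```

Information storage and transfer at scale τ would then be computed from a state-space representation fitted to x_tau rather than from x directly.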
Coherent multiscale image processing using dual-tree quaternion wavelets.
Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G
2008-07-01
The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
Ramamurti, Ravi; Sandberg, William C; Löhner, Rainald; Walker, Jeffrey A; Westneat, Mark W
2002-10-01
Many fishes that swim with the paired pectoral fins use fin-stroke parameters that produce thrust force from lift in a mechanism of underwater flight. These locomotor mechanisms are of interest to behavioral biologists, biomechanics researchers and engineers. In the present study, we performed the first three-dimensional unsteady computations of fish swimming with oscillating and deforming fins. The objective of these computations was to investigate the fluid dynamics of force production associated with the flapping aquatic flight of the bird wrasse Gomphosus varius. For this computational work, we used the geometry of the wrasse and its pectoral fin, and previously measured fin kinematics, as the starting points for computational investigation of three-dimensional (3-D) unsteady fluid dynamics. We performed a 3-D steady computation and a complete set of 3-D quasisteady computations for a range of pectoral fin positions and surface velocities. An unstructured, grid-based, unsteady Navier-Stokes solver with automatic adaptive remeshing was then used to compute the unsteady flow about the wrasse through several complete cycles of pectoral fin oscillation. The shape deformation of the pectoral fin throughout the oscillation was taken from the experimental kinematics. The pressure distribution on the body of the bird wrasse and its pectoral fins was computed and integrated to give body and fin forces which were decomposed into lift and thrust. The velocity field variation on the surface of the wrasse body, on the pectoral fins and in the near-wake was computed throughout the swimming cycle. We compared our computational results for the steady, quasi-steady and unsteady cases with the experimental data on axial and vertical acceleration obtained from the pectoral fin kinematics experiments. These comparisons show that steady state computations are incapable of describing the fluid dynamics of flapping fins. Quasi-steady state computations, with correct incorporation of
Park, Young Seok; Kim, Sung Tae; Oh, Seung Hee; Park, Hee Jung; Lee, Sophia; Kim, Taeil; Lee, Young Kyu; Heo, Min Suk [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)
2014-06-15
This study evaluated the efficacy of alveolar ridge preservation methods with and without primary wound closure and the relationship between histometric and micro-computed tomographic (CT) data. Porcine hydroxyapatite with polytetrafluoroethylene membrane was implanted into a canine extraction socket. The density of the total mineralized tissue, remaining hydroxyapatite, and new bone was analyzed by histometry and micro-CT. The statistical association between these methods was evaluated. Histometry and micro-CT showed that the group which underwent alveolar preservation without primary wound closure had significantly higher new bone density than the group with primary wound closure (P<0.05). However, there was no significant association between the data from histometry and micro-CT analysis. These results suggest that alveolar ridge preservation without primary wound closure enhanced new bone formation more effectively than that with primary wound closure. Further investigation is needed with respect to the comparison of histometry and micro-CT analysis.
Renaud, Earl W.; Tan, Choon S.
1991-01-01
The three dimensional viscous flow through a planar turbine cascade is numerically simulated by direct solution of the incompressible Navier-Stokes equations. Flow dependence in the spanwise direction is represented by direct expansion in Chebyshev polynomials, while the discretization on planes parallel to the endwalls is accomplished using the spectral element method. Elemental mapping from the physical to the computational space uses an algebraic mapping technique. A fractional time stepping method that consists of an explicit nonlinear convective step, an implicit pressure correction step, and an implicit viscous step is used to advance the Navier-Stokes equations forward in time. Results computed at moderate Reynolds numbers show a three dimensional endwall flow separation, a midspan separation of the blade suction surface boundary layer, and other three-dimensional features such as the presence of a saddle point flow in the endwall region. In addition, the computed skin friction lines are shown to be orthogonal to the surface vorticity lines, demonstrating the accuracy achievable in the present method.
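The spanwise Chebyshev representation mentioned above can be illustrated with NumPy's Chebyshev utilities. This uses a toy polynomial profile, not the solver's actual flow expansion:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a toy spanwise profile at Chebyshev-Gauss-Lobatto-style points in
# [-1, 1] and recover it from a low-order Chebyshev expansion.
z = np.cos(np.linspace(0.0, np.pi, 17))   # collocation points
f = 2.0 * z**3 - z                        # polynomial profile: fit is exact

coeffs = C.chebfit(z, f, deg=3)           # expansion coefficients c_k for T_k
recon = C.chebval(z, coeffs)              # evaluate the expansion back on z
```

Since 2z³ - z = 0.5·T₁(z) + 0.5·T₃(z), the fitted coefficients come out as (0, 0.5, 0, 0.5) to machine precision.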
Kevrekidis, Ioannis G. [Princeton Univ., NJ (United States)
2017-02-01
The work explored the linking of modern developing machine learning techniques (manifold learning and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.
杨上供; 甘亮勤; 熊飞兵
2014-01-01
A new technology for three-dimensional visualization of medical computer tomograms using computer-generated holography is proposed. First, three-dimensional information fusion of a computer tomogram sequence is studied: the two-dimensional information of the sequence is fused into three-dimensional information, which is recorded by computer-generated holography, yielding different computer-generated holograms (CGHs) for different numbers of tomograms. Then, taking into account the structural features of a liquid crystal spatial light modulator (LC-SLM), the spatial frequency of the holographic system, the angle between object and reference beams, the sampling interval of the CGHs, and the reconstruction region and viewing angle of the reconstructed image are analyzed and discussed, and the relevant parameters are set so that the computer-generated holography system matches the liquid-crystal display system. Finally, a three-dimensional optoelectronic reconstruction and real-time display system for computer tomograms is built: controlled in real time by a computer, different CGHs are output to the LC-SLM in sequence, and the three-dimensional reconstructed images of the tomograms, changing with the holograms, are dynamically displayed on a fog screen, realizing three-dimensional visualization of the tomogram sequence. Theoretical analysis and experimental results are given.
Conformal-Based Surface Morphing and Multi-Scale Representation
Ka Chun Lam
2014-05-01
This paper presents two algorithms, based on conformal geometry, for the multi-scale representation of geometric shapes and for surface morphing. A multi-scale surface representation aims to describe a 3D shape at different levels of geometric detail, which allows analyzing or editing surfaces at the global or local scales effectively. Surface morphing refers to the process of interpolating between two geometric shapes, which has been widely applied to estimate or analyze deformations in computer graphics, computer vision, and medical imaging. In this work, we propose two geometric models for surface morphing and multi-scale representation of 3D surfaces. The basic idea is to represent a 3D surface by its mean curvature function H and conformal factor function λ, which uniquely determine the geometry of the surface according to Riemann surface theory. Once we have the (λ, H) parameterization of the surface, post-processing of the surface can be done directly on the conformal parameter domain. In particular, the problem of multi-scale representation of shapes can be reduced to signal filtering on the λ and H parameters. On the other hand, the surface morphing problem can be transformed into an interpolation process between two sets of (λ, H) parameters. We test the proposed algorithms on 3D human face data and MRI-derived brain surfaces. Experimental results show that our proposed methods can effectively obtain multi-scale surface representations and give natural surface morphing results.
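The reduction of multi-scale representation to signal filtering on the (λ, H) parameters can be illustrated with a simple moving-average low-pass filter, a stand-in for whatever filter bank an actual implementation would use:

```python
import numpy as np

def lowpass(signal, width):
    """Moving-average low-pass filter applied to a sampled conformal
    factor λ or mean curvature H; larger width = coarser scale."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# Hypothetical 1-D sampling of λ along a curve on the surface: a smooth
# component plus fine-scale noise. Filtering at increasing widths yields
# a multi-scale family: coarse geometry survives, fine detail is removed.
rng = np.random.default_rng(0)
lam = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
lam_coarse = lowpass(lam, 15)
```

Reconstructing the surface from the filtered (λ, H) pair would then give the coarse-scale shape at that level of the hierarchy.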
Hierarchical discriminant manifold learning for dimensionality reduction and image classification
Chen, Weihai; Zhao, Changchen; Ding, Kai; Wu, Xingming; Chen, Peter C. Y.
2015-09-01
In the field of image classification, there has been a trend that, in order to deliver reliable classification performance, feature extraction models become increasingly complicated, leading to high-dimensional image representations. This, in turn, demands greater computational resources for image classification. Thus, it is desirable to apply dimensionality reduction (DR) methods, both to relieve the computational burden and to improve classification accuracy. However, traditional DR methods are not compatible with modern feature extraction methods. A framework that combines manifold-learning-based DR and feature extraction in a deeper way for image classification is proposed. A multiscale cell representation is extracted from the spatial pyramid to satisfy the locality constraints of a manifold learning method. A spectral weighted mean filtering is proposed to eliminate noise in the feature space. A hierarchical discriminant manifold learning method is proposed that incorporates both category-label and image-scale information to guide the DR process. Finally, the image representation is generated by concatenating dimensionality-reduced cell representations from the same image. Extensive experiments are conducted to test the proposed algorithm on both scene and object recognition datasets in comparison with several well-established and state-of-the-art methods with respect to classification precision and computational time. The results verify the effectiveness of incorporating manifold learning in the feature extraction procedure and imply that the multiscale cell representations may be distributed on a manifold.
Tian, Bing, E-mail: bing.tian@hotmail.com; Xu, Bing, E-mail: aishanli0102@126.com; Lu, Jianping, E-mail: tianbing2003@163.com; Liu, Qi, E-mail: liuqimd@126.com; Wang, Li, E-mail: wangli_changhai@163.com; Wang, Minjie, E-mail: cjr.wangminjie@vip.163.com
2015-06-15
Highlights: • 4D CTA showed excellent agreement with DSA with regard to identification of feeding arteries and drainage veins. • The most important finding was the value of 4D CTA in determining the impact of DAVF treatment with transarterial embolization. • 4D CTA provides images similar to those obtained with DSA both before and after treatment. - Abstract: Purpose: This study aimed to evaluate the usefulness of four-dimensional CTA before and after embolization treatment with ONYX-18 in eleven patients with cranial dural arteriovenous fistulas, and to compare the results with those of the reference standard, DSA. Patients and Methods: Eleven patients with cranial dural arteriovenous fistulas detected on DSA underwent transarterial embolization with ONYX-18. Four-dimensional CTA was performed an average of 2 days before and 4 days after DSA. Four-dimensional CTA and DSA images were reviewed by two neuroradiologists for identification of feeding arteries and drainage veins and for determining treatment effects. Interobserver and intermodality agreement between four-dimensional CTA and DSA were assessed. Results: Forty-two feeding arteries were identified for 14 fistulas in the 11 patients. Of these, 36 (85.71%) were detected on four-dimensional CTA. After transarterial embolization, the fistula in one patient was only partially occluded, while the fistulas in the remaining 10 patients were completely occluded. The interobserver agreement for four-dimensional CTA and the intermodality agreement between four-dimensional CTA and DSA were excellent (κ = 1) for shunt location, identification of drainage veins, and fistula occlusion after treatment. Conclusion: Four-dimensional CTA images are highly accurate when compared with DSA images both before and after transarterial embolization treatment. Four-dimensional CTA can be used for diagnosis as well as follow-up of cranial dural arteriovenous fistulas in clinical settings.
Modeling Pancreatic Tumor Motion Using 4-Dimensional Computed Tomography and Surrogate Markers
Huguet, Florence [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Department of Radiation Oncology, Hôpitaux Universitaires Paris Est, Hôpital Tenon, University Paris VI, Paris (France); Yorke, Ellen D.; Davidson, Margaret [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Zhang, Zhigang [Department of Biostatistics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Jackson, Andrew; Mageras, Gig S. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Wu, Abraham J. [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Goodman, Karyn A., E-mail: GoodmanK@mskcc.org [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York (United States)
2015-03-01
Purpose: To assess intrafractional positional variations of pancreatic tumors using 4-dimensional computed tomography (4D-CT), their impact on gross tumor volume (GTV) coverage, the reliability of biliary stents, fiducial seeds, and the real-time position management (RPM) external marker as tumor surrogates for setup of respiratory gated treatment, and to build a correlative model of tumor motion. Methods and Materials: We analyzed the respiration-correlated 4D-CT images acquired during simulation of 36 patients with either a biliary stent (n=16) or implanted fiducials (n=20) who were treated with RPM respiratory gated intensity modulated radiation therapy for locally advanced pancreatic cancer. Respiratory displacement relative to end-exhalation was measured for the GTV, the biliary stent or fiducial seeds, and the RPM marker. The results were compared between the full respiratory cycle and the gating interval. A linear mixed model was used to assess the correlation of GTV motion with the potential surrogate markers. Results: The average ± SD GTV excursions were 0.3 ± 0.2 cm in the left-right direction, 0.6 ± 0.3 cm in the anterior-posterior direction, and 1.3 ± 0.7 cm in the superior-inferior direction. Gating around end-exhalation reduced GTV motion by 46% to 60%. D95% was at least the prescribed 56 Gy in 76% of patients. GTV displacement was associated with the RPM marker, the biliary stent, and the fiducial seeds. The correlation was stronger with the fiducial seeds and the biliary stent. Conclusions: Respiratory gating reduced the margin necessary for radiation therapy for pancreatic tumors. GTV motion was well correlated with biliary stent or fiducial seed displacements, validating their use as surrogates for daily assessment of GTV position during treatment. A patient-specific internal target volume based on 4D-CT is recommended for both gated and nongated treatment; otherwise, our model can be used to predict the degree of GTV motion.
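The surrogate idea, predicting GTV displacement from a marker's displacement, can be illustrated with a toy linear fit. The synthetic displacements below are invented for illustration and merely mimic the kind of marker-to-GTV relationship the study's mixed model quantifies:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical superior-inferior displacements (cm): a surrogate marker
# (e.g. a fiducial seed) and the GTV it tracks, with small residual noise.
marker = rng.uniform(0.0, 1.3, size=50)
gtv = 0.95 * marker + rng.normal(0.0, 0.05, size=50)

# Ordinary least squares gtv ~ a*marker + b stands in for the
# linear mixed model used in the abstract.
a, b = np.polyfit(marker, gtv, 1)
r = np.corrcoef(marker, gtv)[0, 1]
```

A slope near 1 and a high correlation coefficient are what would justify using the marker as a daily surrogate for GTV position.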
Multiscale GasKinetics/Particle (MGP) Simulation for Rocket Plume/Lunar Dust Interactions Project
National Aeronautics and Space Administration — A Multiscale GasKinetic/Particle (MGP) computational method is proposed to simulate the plume-crater-interaction/dust-impingement (PCIDI) problem. The MGP method...
An efficient numerical integral in three-dimensional electromagnetic field computations
Whetten, Frank L.; Liu, Kefeng; Balanis, Constantine A.
1990-01-01
An improved algorithm for efficiently computing a sinusoid and an exponential integral commonly encountered in method-of-moments solutions is presented. The new algorithm has been tested for accuracy and computer execution time against both numerical integration and other existing numerical algorithms, and has outperformed them. Typical execution time comparisons on several computers are given.
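Integrals of a sinusoid multiplied by an exponential, of the kind the algorithm targets, have closed-form antiderivatives against which brute-force quadrature can be checked. A sketch with an illustrative integrand and limits (this is the reference comparison, not the paper's improved algorithm):

```python
import math

def analytic(a, b, x0, x1):
    """Closed form of the integral of e^{bx} sin(ax) from x0 to x1."""
    F = lambda x: math.exp(b * x) * (b * math.sin(a * x) - a * math.cos(a * x)) / (a * a + b * b)
    return F(x1) - F(x0)

def trapezoid(a, b, x0, x1, n=100000):
    """Brute-force numerical reference (the slow path a closed form avoids)."""
    h = (x1 - x0) / n
    s = 0.5 * (math.exp(b * x0) * math.sin(a * x0) + math.exp(b * x1) * math.sin(a * x1))
    for i in range(1, n):
        x = x0 + i * h
        s += math.exp(b * x) * math.sin(a * x)
    return s * h

exact = analytic(3.0, -1.0, 0.0, 1.0)
approx = trapezoid(3.0, -1.0, 0.0, 1.0)
```

The execution-time advantage the abstract reports comes precisely from replacing quadrature loops like the one above with evaluations of the closed form.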
COMPUTER SIMULATION OF 3-DIMENSIONAL DYNAMIC ASSEMBLY PROCESS OF MECHANICAL ROTATIONAL BODY
1998-01-01
Focusing on the study of the components of mechanical rotational bodies, the data structure and algorithm of component model generation are discussed. Some problems in the assembly process of 3-dimensional graphs of components are studied in great detail.
Matas Richard
2012-04-01
The article deals with a comparison of drag and lift coefficients for simple two-dimensional objects, which are often discussed in fluid mechanics textbooks. The commercial CFD software ANSYS/FLUENT 13 was used to compute the flow fields around the objects and to determine the drag and lift coefficients. The flow fields of the two-dimensional objects were computed for velocities up to 160 km/h and a Reynolds number Re = 420 000. The main purpose was to verify the suggested computational domain and model settings for further, more complex object geometries. The more complex profiles are used to stabilize asymmetrical ('z'-shaped) pantographs of high-speed trains. The trains are used in two-way traffic, where the pantographs have to operate with the same characteristics in both directions. Results of the CFD computations show oscillation of the drag and lift coefficients over time. The results are compared with theoretical and experimental data and discussed. Some examples are presented in the paper.
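The quantities compared in the study follow from the standard definitions Re = vL/ν and C_d = 2F_d/(ρv²A). A small sketch; only v = 160 km/h and Re = 420 000 come from the abstract, while the chord length, air properties, and drag force are assumed for illustration:

```python
# Dimensionless coefficients used to compare bluff-body flow fields.
def reynolds(v, L, nu):
    """Re = v L / nu, with kinematic viscosity nu."""
    return v * L / nu

def drag_coefficient(F_d, rho, v, A):
    """C_d = 2 F_d / (rho v^2 A)."""
    return 2.0 * F_d / (rho * v * v * A)

v = 160 / 3.6               # 160 km/h converted to m/s
nu = 1.5e-5                 # kinematic viscosity of air, m^2/s (assumed)
L = 420000 * nu / v         # chord length that reproduces Re = 420 000
re = reynolds(v, L, nu)
cd = drag_coefficient(100.0, 1.2, v, 0.2)   # hypothetical 100 N drag force
```

The reference length that reproduces the paper's Reynolds number at 160 km/h comes out to roughly 0.14 m, a plausible pantograph-profile scale.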
Multiscale Signal Analysis and Modeling
Zayed, Ahmed
2013-01-01
Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: Discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimations, Bayesian Shrinkage Strategies, and algorithms for data adaptive statistics Introduces new sampling algorithms for multidimensional signal processing Provides comprehensive coverage of wavelets with presentations on waveform design and modeling, wavelet analysis of ECG signals and wavelet filters Reviews features extraction and classification algorithms for multiscale signal and image proce...
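As a minimal example of the multiscale decompositions the book covers, one level of the Haar wavelet transform splits a signal into coarse averages and details and is exactly invertible:

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform (even-length input)."""
    s = 1 / math.sqrt(2)
    coarse = [s * (a + b) for a, b in zip(signal[::2], signal[1::2])]
    detail = [s * (a - b) for a, b in zip(signal[::2], signal[1::2])]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exact inverse of haar_step."""
    s = 1 / math.sqrt(2)
    out = []
    for c, d in zip(coarse, detail):
        out += [s * (c + d), s * (c - d)]
    return out

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 8.0, 0.0]
coarse, detail = haar_step(x)
x_rec = haar_inverse(coarse, detail)
```

Repeating `haar_step` on the coarse part yields the multiscale pyramid on which wavelet filtering, compression, and feature extraction operate.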
Three-dimensional imaging in periodontal diagnosis – Utilization of cone beam computed tomography
Mohan, Ranjana; Singh, Archana; Gundappa, Mohan
2011-01-01
In the field of periodontology and implantology, assessment of the condition of teeth and surrounding alveolar bone depends largely on two-dimensional imaging modalities such as conventional and digital radiography. Though these modalities are very useful and have less radiation exposure, they still cannot determine a three-dimensional (3D) architecture of osseous defects. Hence, an imaging modality which would give an undistorted 3D vision of a tooth and surrounding structures is essential t...
Multiscale simulation of DC corona discharge and ozone generation from nanostructures
Wang, Pengxiang
For the simulation of corona discharges from nanostructures, a one-dimensional (1-D) multiscale model is used due to the prohibitive computational expense associated with two-dimensional (2-D) modeling. Near the nanoscale discharge electrode surface, a kinetic model based on PIC-MCC is used due to a relatively large Knudsen number in this region. Far away from the nanoscale discharge electrode, a continuum model is used since the Knudsen number is very small there. The multiscale modeling results are compared with experimental data. The quantitative agreement in positive discharges and qualitative agreement in negative discharges validate the modeling approach. The mechanism sustaining the discharge process from nanostructures is revealed and is found to differ from that of discharges from micro- or macro-sized electrodes. Finally, the corona plasma model is combined with a plasma chemistry model and a transport model to predict ozone production from the nanoscale corona. The dependence of ozone production on the applied potential and air velocity is studied. The electric field distribution in a 2-D multiscale domain (from nanoscale to microscale) is predicted by solving the Poisson equation using a finite difference scheme. The discretized linear equations are solved using a multigrid method under the framework of PETSc on a parallel supercomputer. Although the Poisson solver is able to resolve the multiscale field, the prohibitively long computation time limits the use of a 2-D solver in the current PIC-MCC scheme.
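The field solve described above reduces to a discretized Poisson problem. A 1-D finite-difference sketch on a toy problem with a known solution, with plain Jacobi iteration standing in for the multigrid/PETSc solver used in the thesis:

```python
import numpy as np

# Model field solve: -u'' = f on (0,1) with u(0) = u(1) = 0 and
# f = pi^2 sin(pi x), whose exact solution is u = sin(pi x).
n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(50000):
    # Jacobi update of interior points: the right-hand side is evaluated
    # before assignment, so this is a true Jacobi sweep.
    u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
max_err = np.abs(u - np.sin(np.pi * x)).max()
```

The slow convergence of Jacobi on fine grids is exactly what multigrid accelerates, and the many-orders-of-magnitude scale separation in the nanostructure problem is why the 2-D solve remains expensive even then.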
Concurrent multiscale modeling of amorphous materials
Tan, Vincent
2013-03-01
An approach to multiscale modeling of amorphous materials is presented whereby atomistic-scale domains coexist with continuum-like domains. The atomistic domains faithfully predict severe deformation, while the continuum domains allow the computation to scale up the size of the model without incurring the excessive computational costs associated with fully atomistic models and without introducing spurious forces across the boundary of atomistic and continuum-like domains. The material domain is first constructed as a tessellation of Amorphous Cells (ACs). For regions of small deformation, the number of degrees of freedom is then reduced by computing the displacements of only the vertices of the ACs instead of the atoms within. This is achieved by determining, a priori, the atomistic displacements within such Pseudo Amorphous Cells associated with orthogonal deformation modes of the cell. Simulations of nanoscale polymer tribology using full molecular mechanics computation and our multiscale approach give almost identical predictions of indentation force and of the strain contours of the polymer. We further demonstrate the capability of performing adaptive simulations during which domains that were discretized into cells revert to full atomistic domains when their strain attains a predetermined threshold. The authors would like to acknowledge the financial support given to this study by the Agency for Science, Technology and Research (A*STAR), Singapore (SERC Grant No. 092 137 0013).
Xavier, M. P.; do Nascimento, T. M.; dos Santos, R. W.; Lobosco, M.
2014-03-01
The development of computational systems that mimic the physiological response of organs or even the entire body is a complex task. One of the issues that make this task extremely complex is the huge amount of computational resources needed to execute the simulations. For this reason, the use of parallel computing is mandatory. In this work, we focus on the simulation of the temporal and spatial behaviour of some human innate immune system cells and molecules in a small three-dimensional section of a tissue. To perform this simulation, we use multiple Graphics Processing Units (GPUs) in a shared-memory environment. Despite the high initialization and communication costs imposed by the use of GPUs, the techniques used to implement the HIS simulator have proven very effective for this purpose.
Dolean Victorita
2014-07-01
Multiphase, compositional porous media flow models lead to the solution of highly heterogeneous systems of Partial Differential Equations (PDEs). We focus on overlapping Schwarz-type methods on parallel computers and on multiscale methods. We present a coarse space [Nataf F., Xiang H., Dolean V., Spillane N. (2011) SIAM J. Sci. Comput. 33, 4, 1623-1642] that is robust even in the presence of such heterogeneities. The two-level domain decomposition approach is compared to multiscale methods.
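The overlapping Schwarz idea can be sketched in one dimension: alternately solve on two overlapping subdomains, each taking its boundary value from the other's latest solution, until the pieces agree. A toy constant-coefficient Poisson problem, not the paper's heterogeneous multiphase system:

```python
import numpy as np

def solve_dirichlet(f, h, left, right):
    """Direct tridiagonal solve of -u'' = f with given boundary values."""
    m = len(f)
    A = (np.diag(np.full(m, 2.0))
         + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1))
    rhs = h * h * f
    rhs[0] += left
    rhs[-1] += right
    return np.linalg.solve(A, rhs)

# -u'' = 1 on (0,1), u(0) = u(1) = 0; exact solution u(x) = x(1-x)/2.
n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u = np.zeros(n + 1)
i1, i2 = 60, 40   # subdomain 1 is (0, x[i1]); subdomain 2 is (x[i2], 1)
for _ in range(30):
    # Alternating Schwarz sweep: each solve uses the other's trace data.
    u[1:i1] = solve_dirichlet(np.ones(i1 - 1), h, 0.0, u[i1])
    u[i2 + 1:n] = solve_dirichlet(np.ones(n - i2 - 1), h, u[i2], 0.0)
err = np.abs(u - x * (1 - x) / 2).max()
```

The geometric convergence rate of this iteration degrades with many subdomains and strong heterogeneity, which is what motivates the robust two-level coarse space of the cited paper.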
Multi-Scale Pattern Recognition for Image Classification and Segmentation
Li, Y.
2013-01-01
Scale is an important parameter of images. Different objects or image structures (e.g. edges and corners) can appear at different scales and each is meaningful only over a limited range of scales. Multi-scale analysis has been widely used in image processing and computer vision, serving as the basi
Lin, Hsiu-Hsia; Lo, Lun-Jou
2015-04-01
By incorporating three-dimensional (3D) imaging and computer-aided design and manufacturing techniques, 3D computer-assisted technology has been applied widely to provide accurate guidance for assessment and treatment planning in clinical practice. This technology has recently been used in orthognathic surgery to improve surgical planning and outcome. The modality will gradually become popular. This study reviewed the literature concerning the use of computer-assisted techniques in orthognathic surgery including surgical planning, simulation, intraoperative translation of the virtual surgery, and postoperative evaluation. A Medline, PubMed, ProQuest, and ScienceDirect search was performed to find relevant articles with regard to 3D computer-assisted orthognathic surgery in the past 10 years. A total of 460 articles were revealed, out of which 174 were publications addressed the topic of this study. The purpose of this article is to present an overview of the state-of-art methods for 3D computer-assisted technology in orthognathic surgery. From the review we can conclude that the use of computer-assisted technique in orthognathic surgery provides the benefit of optimal functional and aesthetic results, patient satisfaction, precise translation of the treatment plan, and facilitating intraoperative manipulation.
Chien, T.H.; Domanus, H.M.; Sha, W.T.
1993-02-01
The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.
Chien, T.H.; Domanus, H.M.; Sha, W.T.
1993-02-01
The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional conservation of mass, momentum, and energy equations on the tube side, and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications.
Thang, Ho Viet; Rubeš, Miroslav; Bludský, Ota; Nachtigall, Petr
2014-09-04
The adsorption and catalytic properties of the three-dimensional zeolite UTL were investigated computationally along with the properties of its two-dimensional analogue IPC-1P, which can be obtained from UTL by removal of D4R units. Adsorption properties and Lewis acidity of extra-framework Li(+) sites were investigated for both the two- and three-dimensional forms of UTL using carbon monoxide as a probe molecule. The CO adsorption enthalpies, calculated with various dispersion-corrected DFT methods, including DFT/CC, DFT-D2, and vdW-DF2, and the CO stretching frequencies obtained with the νCO/rCO correlation method are compared for corresponding Li(+) sites in the 3D and 2D UTL zeolite. For the majority of framework Al positions the Li(+) cation is preferably located in one of the channel wall sites, and such sites remain unchanged upon the 3D → 2D UTL transformation; consequently, the adsorption enthalpies become only slightly smaller in 2D UTL (less than 3 kJ mol(-1)) due to the missing part of the dispersion interactions, and νCO is also only up to 5 cm(-1) smaller in 2D UTL due to the missing repulsion with framework oxygen atoms from the opposite side of the zeolite channel (effect from the top). However, when Li(+) is located in the intersection site in 3D UTL (about 20% probability), its coordination with the framework is significantly increased in 2D UTL, and that is accompanied by a significant decrease of both νCO (about 20 cm(-1)) and the adsorption enthalpy (about 20 kJ mol(-1)). Because the intersection sites in 3D UTL are the most active adsorption and catalytic Lewis sites, the results reported herein suggest that the 3D → 2D transformation of UTL zeolite is connected with a partial decrease of zeolite activity in processes driven by Lewis acid sites.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
Plechac, Petr [Univ. of Delaware, Newark, DE (United States). Dept. of Mathematical Sciences
2016-03-01
The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.
Multiscale analysis and nonlinear dynamics from genes to the brain
Schuster, Heinz Georg
2013-01-01
Since modeling multiscale phenomena in systems biology and neuroscience is a highly interdisciplinary task, the editor of the book invited experts in bio-engineering, chemistry, cardiology, neuroscience, computer science, and applied mathematics, to provide their perspectives. Each chapter is a window into the current state of the art in the areas of research discussed and the book is intended for advanced researchers interested in recent developments in these fields. While multiscale analysis is the major integrating theme of the book, its subtitle does not call for bridging the scales from g
Jiang, Lijian
2010-08-01
In this paper, we discuss a numerical multiscale approach for solving wave equations with heterogeneous coefficients. Our interest comes from geophysics applications and we assume that there is no scale separation with respect to spatial variables. To obtain the solution of these multiscale problems on a coarse grid, we compute global fields such that the solution smoothly depends on these fields. We present a Galerkin multiscale finite element method using the global information and provide a convergence analysis when applied to solve the wave equations. We investigate the relation between the smoothness of the global fields and convergence rates of the global Galerkin multiscale finite element method for the wave equations. Numerical examples demonstrate that the use of global information renders better accuracy for wave equations with heterogeneous coefficients than the local multiscale finite element method. © 2010 IMACS.
The cohomological reduction method for computing n-dimensional cocyclic matrices
Álvarez, Víctor; Frau, María-Dolores; Real, Pedro
2012-01-01
Provided that a cohomological model for G is known, we describe a method for constructing a basis for n-cocycles over G, from which the whole set of n-dimensional cocyclic matrices over G may be straightforwardly calculated. Focusing on the case n=2 (which is of special interest, e.g. for looking for cocyclic Hadamard matrices), our method provides a basis for 2-cocycles in such a way that representative 2-cocycles are calculated all at once, so that there is no need to distinguish between inflation and transgression 2-cocycles (as has traditionally been the case until now). When n>2, this method provides a uniform way of looking for higher-dimensional cocyclic Hadamard matrices for the first time. We illustrate the method with some examples, for n=2,3. In particular, we give some examples of improper 3-dimensional cocyclic Hadamard matrices.
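For n=2 the objects involved are easy to exhibit concretely: any bilinear map on an abelian group G is automatically a 2-cocycle, and over G = Z2 × Z2 the standard bilinear form yields the order-4 Sylvester Hadamard matrix as its cocyclic matrix. An illustrative check of both facts, not the paper's basis construction:

```python
import itertools
import numpy as np

G = list(itertools.product((0, 1), repeat=2))   # Z2 x Z2

def add(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def psi(g, h):
    """Bilinear 2-cocycle psi(g, h) = (-1)^(g1*h1 + g2*h2)."""
    return (-1) ** (g[0] * h[0] + g[1] * h[1])

# 2-cocycle identity: psi(g,h) psi(g+h,k) = psi(g,h+k) psi(h,k).
cocycle_ok = all(
    psi(g, h) * psi(add(g, h), k) == psi(g, add(h, k)) * psi(h, k)
    for g in G for h in G for k in G
)

# The cocyclic matrix M[g, h] = psi(g, h) is Hadamard: M M^T = 4 I.
M = np.array([[psi(g, h) for h in G] for g in G])
is_hadamard = np.array_equal(M @ M.T, 4 * np.eye(4, dtype=int))
```

The search problem the paper addresses is precisely to enumerate such ψ systematically from a basis of cocycles rather than by ad hoc constructions like this one.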
Entropic Approach to Multiscale Clustering Analysis
Antonio Insolia
2012-05-01
Recently, a novel method has been introduced to estimate the statistical significance of clustering in the direction distribution of objects. The method involves a multiscale procedure, based on the Kullback–Leibler divergence and the Gumbel statistics of extreme values, providing high discrimination power even in the presence of strong isotropic background contamination. It is shown that the method is: (i) semi-analytical, drastically reducing computation time; (ii) very sensitive to small-, medium- and large-scale clustering; (iii) not biased against the null hypothesis. Applications to the physics of ultra-high energy cosmic rays, as a cosmological probe, are presented and discussed.
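The discrepancy measure at the heart of the procedure is the Kullback–Leibler divergence of a binned direction distribution from the isotropic one; it vanishes only under isotropy and grows with clustering. A minimal sketch with invented bin fractions:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Arrival directions binned into 8 sectors; a uniform q models isotropy.
q = [1 / 8] * 8
isotropic = [1 / 8] * 8
clustered = [0.5, 0.3, 0.05, 0.05, 0.025, 0.025, 0.025, 0.025]

d_iso = kl_divergence(isotropic, q)   # zero: no clustering signal
d_clu = kl_divergence(clustered, q)   # positive: directions pile up
```

The method's multiscale aspect comes from repeating such comparisons over a range of angular bin sizes and calibrating the extremes with Gumbel statistics.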
Ali Dashti
This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs, and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15 k dimensionality into the realm of practical possibility.
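Stripped of the GPU and cluster parallelism, the brute-force exact k-NNG construction is an all-pairs distance computation followed by a per-row selection of the k smallest entries. A NumPy sketch at toy scale:

```python
import numpy as np

def knn_graph(points, k):
    """Brute-force exact k-NNG: for each point, the indices of its
    k nearest neighbours (self excluded) by Euclidean distance."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 15))          # 100 points in 15 dimensions
graph = knn_graph(pts, 5)
```

The O(n²d) distance matrix is exactly what becomes prohibitive at twenty million points, and what the paper's multi-level GPU tiling makes tractable.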
Fernandez-Prada, Sara; Delgado-Sanchez, Elsa; De Santiago, Javier; Zapardiel, Ignacio
2015-01-01
In endometrial cancer, the histopathological analysis of the lymphatic nodes is essential to establish a correct prognosis and tailored adjuvant treatment. It is well known that patients with early-stage endometrial cancer have a low incidence of nodal disease. In this group, systematic lymphadenectomy is not recommended. To improve the detection rate of sentinel nodes in clinical practice, new techniques are emerging, such as real-time 3-dimensional single-photon emission computed tomographic (SPECT) imaging. We report our experience using this innovative technique for intraoperative detection of sentinel nodes in endometrial cancer. Real-time 3-dimensional SPECT sentinel node biopsy seems to be feasible and accurate in endometrial cancer, although further studies are needed to establish its precision and predictive values compared with the current deferred SPECT techniques and blue dye techniques.
WEN Xiao-Yong; MENG Xiang-Hua
2013-01-01
In this paper, the (2+1)-dimensional generalization of the shallow water wave equation, which may be used to describe the propagation of ocean waves, is analytically investigated. With the aid of symbolic computation, we prove that the (2+1)-dimensional generalization of the shallow water wave equation possesses the Painlevé property under a certain condition, and its Lax pair is constructed by applying the singular manifold method. Based on the obtained Lax representation, the Darboux transformation (DT) is constructed. The first iterated solution, second iterated solution and a special N-soliton solution with an arbitrary function are derived with the resulting DT. Relevant properties are graphically illustrated, which might be helpful to understanding the propagation processes for ocean waves in shallow water.
Hase, Kazunori; Yokoi, Takashi
In the present study, a computer simulation technique to autonomously generate running motion from walking was developed using a three-dimensional entire-body neuro-musculo-skeletal model. When maximizing locomotive speed was employed as the evaluative criterion, the initial walking pattern could not transition to a valid running motion. When minimizing the period of foot-ground contact was added to this evaluative criterion, the simulation model autonomously produced appropriate three-dimensional running. Changes in the neuronal system showed that the fatigue coefficient of the neural oscillators decreased as locomotion patterns transitioned from walking to running. Then, when the running speed increased, the amplitude of the non-specific stimulus from the higher center increased. These two changes indicate that an improvement in the responsiveness of the neuronal system is important for the transition from walking to running, and that the comprehensive activation level of the neuronal system is essential in the process of increasing running speed.
Ling, C; Connor, K A; Demers, D R; Radke, R J; Schoch, P M
2007-11-01
A magnetic field mapping technique via heavy ion beam trajectory imaging is being developed on the Madison Symmetric Torus reversed field pinch. This paper describes the computational tools created to model camera images of the light emitted from a simulated ion beam, reconstruct a three-dimensional trajectory, and estimate the accuracy of the reconstruction. First, a computer model is used to create images of the torus interior from any candidate camera location. It is used to explore the visual field of the camera and thus to guide camera parameters and placement. Second, it is shown that a three-dimensional ion beam trajectory can be recovered from a pair of perspectively projected trajectory images. The reconstruction considers effects due to finite beam size, nonuniform beam current density, and image background noise. Third, it is demonstrated that the trajectory reconstructed from camera images can help compute magnetic field profiles, and might be used as an additional constraint to an equilibrium reconstruction code, such as MSTFit.
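The second step, recovering a 3-D point from a pair of perspectively projected images, is classically done by linear (DLT) triangulation: each view contributes two homogeneous equations, and the point is the null vector of the stacked system. A sketch with assumed pinhole camera matrices, not the actual MST camera geometry:

```python
import numpy as np

def project(P, X):
    """Pinhole projection of a 3-D point X by a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation from two views of one point."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector = homogeneous 3-D point
    return X[:3] / X[3]

# Two cameras: identity pose, and a second camera offset 1 unit in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

Repeating this for every imaged point along the glowing beam yields the 3-D trajectory; the paper's error analysis then accounts for finite beam size and image noise on top of this idealized step.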
Hassan, B.; van der Stelt, P.; Sanderink, G.
2009-01-01
The aims of this study were to assess the accuracy of linear measurements on three-dimensional (3D) surface-rendered images generated from cone beam computed tomography (CBCT) in comparison with two-dimensional (2D) slices and 2D lateral and postero-anterior (PA) cephalometric projections, and to in
MULTISCALE PHENOMENA IN MATERIALS
A. BISHOP
2000-09-01
This project developed and supported a technology base in nonequilibrium phenomena underpinning fundamental issues in condensed matter and materials science, and applied this technology to selected problems. In this way the increasingly sophisticated synthesis and characterization available for classes of complex electronic and structural materials provided a testbed for nonlinear science, while nonlinear and nonequilibrium techniques helped advance our understanding of the scientific principles underlying the control of material microstructure and its evolution, which are fundamental to macroscopic functionalities. The project focused on overlapping areas of emerging thrusts and programs in the Los Alamos materials community for which nonlinear and nonequilibrium approaches will have decisive roles and where productive teamwork among elements of modeling, simulation, synthesis, characterization and applications could be anticipated, particularly multiscale and nonequilibrium phenomena, and complex matter in and between the fields of soft, hard and biomimetic materials. Principal topics were: (i) Complex organic and inorganic electronic materials, including hard, soft and biomimetic materials, self-assembly processes and photophysics; (ii) Microstructure and evolution in multiscale and hierarchical materials, including dynamic fracture and friction, dislocation and large-scale deformation, metastability, and inhomogeneity; and (iii) Equilibrium and nonequilibrium phases and phase transformations, emphasizing competing interactions, frustration, landscapes, glassy and stochastic dynamics, and energy focusing.
Generalized multiscale finite element method. Symmetric interior penalty coupling
Efendiev, Yalchin R.
2013-12-01
Motivated by applications to numerical simulations of flows in highly heterogeneous porous media, we develop multiscale finite element methods for second order elliptic equations. We discuss a multiscale model reduction technique in the framework of the discontinuous Galerkin finite element method. We propose two different finite element spaces on the coarse mesh. The first space is based on a local eigenvalue problem that uses an interior weighted L2-norm and a boundary weighted L2-norm for computing the "mass" matrix. The second choice is based on generation of a snapshot space and subsequent selection of a subspace of a reduced dimension. The approximation with these multiscale spaces is based on the discontinuous Galerkin finite element method framework. We investigate the stability and derive error estimates for the methods and further experimentally study their performance on a representative number of numerical examples. © 2013 Elsevier Inc.
Multi-scale modelling and simulation in systems biology.
Dada, Joseph O; Mendes, Pedro
2011-02-01
The aim of systems biology is to describe and understand biology at a global scale, where biological functions are recognised as the result of complex mechanisms that happen at several scales, from the molecular to the ecosystem. Modelling and simulation are computational tools that are invaluable for describing, predicting and understanding these mechanisms in a quantitative and integrative way. Therefore the study of biological functions is greatly aided by multi-scale methods that enable the coupling and simulation of models spanning several spatial and temporal scales. Various methods have been developed for solving multi-scale problems in many scientific disciplines, and are applicable to continuum-based modelling techniques, in which the relationship between system properties is expressed with continuous mathematical equations, or to discrete modelling techniques that are based on individual units to model heterogeneous microscopic elements such as individuals or cells. In this review, we survey these multi-scale methods and explore their application in systems biology.
Multiscale finite-element method for linear elastic geomechanics
Castelletto, Nicola; Hajibeygi, Hadi; Tchelepi, Hamdi A.
2017-02-01
The demand for accurate and efficient simulation of geomechanical effects is widely increasing in the geoscience community. High resolution characterizations of the mechanical properties of subsurface formations are essential for improving modeling predictions. Such detailed descriptions impose severe computational challenges and motivate the development of multiscale solution strategies. We propose a multiscale solution framework for the geomechanical equilibrium problem of heterogeneous porous media based on the finite-element method. After imposing a coarse-scale grid on the given fine-scale problem, the coarse-scale basis functions are obtained by solving local equilibrium problems within coarse elements. These basis functions form the restriction and prolongation operators used to obtain the coarse-scale system for the displacement-vector. Then, a two-stage preconditioner that couples the multiscale system with a smoother is derived for the iterative solution of the fine-scale linear system. Various numerical experiments are presented to demonstrate accuracy and robustness of the method.
Multiscale Modeling in the Clinic: Drug Design and Development
Clancy, Colleen E.; An, Gary; Cannon, William R.; Liu, Yaling; May, Elebeoba E.; Ortoleva, Peter; Popel, Aleksander S.; Sluka, James P.; Su, Jing; Vicini, Paolo; Zhou, Xiaobo; Eckmann, David M.
2016-02-17
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
MULTISCALE MATHEMATICS FOR BIOMASS CONVERSION TO RENEWABLE HYDROGEN
Vlachos, Dionisios; Plechac, Petr; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
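The kinetic Monte Carlo machinery at the core of goal (i) can be illustrated with a minimal rejection-free KMC loop. This is a generic sketch, not the project's coarse-grained code: a 1D lattice with adsorption/desorption, and the rates `k_ads`, `k_des` are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rejection-free kinetic Monte Carlo on a 1D lattice: adsorption on empty
# sites, desorption on occupied ones (illustrative rates, not from the report).
n_sites, k_ads, k_des = 100, 1.0, 0.5
occupied = np.zeros(n_sites, dtype=bool)
t = 0.0

for _ in range(5000):
    # Enumerate the rate of every possible event on the current lattice.
    rates = np.where(occupied, k_des, k_ads)
    total = rates.sum()
    # Advance time by an exponentially distributed increment (total rate).
    t += rng.exponential(1.0 / total)
    # Pick one event with probability proportional to its rate and execute it.
    site = rng.choice(n_sites, p=rates / total)
    occupied[site] = not occupied[site]

coverage = occupied.mean()  # fluctuates around k_ads / (k_ads + k_des)
```

Each iteration samples both the waiting time and the next event exactly from the master equation, which is why KMC can resolve microscopic kinetics without a fixed time step; the coarse-graining in goal (i) groups lattice sites to make `rates` tractable for large systems.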
Thompson, D.; Mogili, P.; Chalasani, S.; Addy, H.; Choo, Y.
2004-01-01
Steady-state solutions of the Reynolds-averaged Navier-Stokes (RANS) equations were computed using the Cobalt flow solver for a constant-section, rectangular wing based on an extruded two-dimensional glaze ice shape. The one-equation Spalart-Allmaras turbulence model was used. The results were compared with data obtained from a recent wind tunnel test. Computed results indicate that the steady RANS solutions do not accurately capture the recirculating region downstream of the ice accretion, even after a mesh refinement. The resulting predicted reattachment is farther downstream than indicated by the experimental data. Additionally, the solutions computed on a relatively coarse baseline mesh showed detailed flow characteristics that differed from both the refined-mesh results and the experimental data. Steady RANS solutions were also computed to investigate the effects of spanwise variation in the ice shape. The spanwise variation was obtained via a blending function that merged the ice shape with the clean wing using a sinusoidal spanwise variation. For these configurations, the results predicted for the extruded shape provided conservative estimates for the performance degradation of the wing. Additionally, the spanwise variation in the ice shape and the resulting differences in the flow fields did not significantly change the location of the primary reattachment.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
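The two averaging operations at the heart of DMA can be sketched on a toy 1D field. This is not the duct-flow solver from the abstract: the "DNS step" is a hand-made fluctuating signal, and the grid sizes are illustrative assumptions.

```python
import numpy as np

# DMA sketch: running time average during the fine-scale run, then a
# volume average onto a coarser grid (toy 1D field, not the duct-flow DNS).
n_fine, ratio = 64, 4
x = np.linspace(0.0, 2 * np.pi, n_fine, endpoint=False)

time_avg = np.zeros(n_fine)
for step in range(1, 101):
    # Stand-in for one DNS step: a field with a mean part and a fluctuation.
    u = np.sin(x) + 0.1 * np.sin(8 * x + 0.3 * step)
    # Running time average: <u>_n = <u>_{n-1} + (u - <u>_{n-1}) / n
    time_avg += (u - time_avg) / step

# Volume average onto the coarser grid: mean over each block of `ratio` cells.
coarse = time_avg.reshape(-1, ratio).mean(axis=1)
```

In the full method, the correlations generated by these averaging operations (the difference between the average of a product and the product of averages) are computed from the resolved field and added as source terms on the coarser mesh, which is the scale-coupling step that replaces a turbulence model.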
Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces
Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel
2016-01-01
The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have ...
Semidefinite characterization and computation of zero-dimensional real radical ideals
Lasserre, J.B.; Laurent, Monique; Rostalski, P.
2008-01-01
For an ideal I ⊆ ℝ[x] given by a set of generators, a new semidefinite characterization of its real radical I(V_ℝ(I)) is presented, provided it is zero-dimensional (even if I is not). Moreover, we propose an algorithm using numerical linear algebra and semidefinite optimization
Jonathan P. Dandois; Erle C. Ellis
2013-01-01
High spatial resolution three-dimensional (3D) measurements of vegetation by remote sensing are advancing ecological research and environmental management. However, substantial economic and logistical costs limit this application, especially for observing phenological dynamics in ecosystem structure and spectral traits. Here we demonstrate a new aerial remote sensing...
Band Gap Computation of Two Dimensional Photonic Crystal for High Index Contrast Grating Application
Gagandeep Kaur
2014-05-01
Two-dimensional photonic crystals (PhCs) are a convenient type of PhC in which the dielectric is periodic in two directions. The study of photonic structures by simulation is extremely important: at optical frequencies the optical density within a two-dimensional PhC varies periodically, strongly affecting the propagation of light waves at those frequencies. A typical linearization method that solves the resulting nonlinear eigenvalue problems has been used to obtain the photonic band structures. The two methods most widely used for band-gap calculation of PhCs are the plane-wave expansion (PWE) method and the finite-difference time-domain (FDTD) method. Because the FDTD method is more direct and can be extended more easily to simulate the field distribution inside the photonic structure, it was used here in place of the PWE method for the two-dimensional PhC calculations. In the simulation of the two-dimensional band structures, the silicon material has a 0.543 nm lattice constant and a refractive index of 1.46.
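The FDTD update scheme this abstract relies on reduces, in one dimension, to a pair of leapfrog curl updates. The following is a generic 1D Yee-scheme sketch in normalized units, not the 2D band-structure code of the paper; grid size, Courant number and source parameters are illustrative.

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in normalized units: E and H live on
# staggered grids and are advanced alternately by discrete curl updates.
n, steps = 200, 300
ez = np.zeros(n)   # electric field
hy = np.zeros(n)   # magnetic field
c = 0.5            # Courant number (< 1 for stability in 1D)

for t in range(steps):
    hy[:-1] += c * (ez[1:] - ez[:-1])            # update H from curl of E
    ez[1:] += c * (hy[1:] - hy[:-1])             # update E from curl of H
    ez[n // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source
```

For band-structure work the same time stepping is run in 2D with Bloch-periodic boundaries, and the eigenfrequencies are read off from peaks in the spectrum of the recorded fields, which is why FDTD also yields the field distribution "for free".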
Wang, P.; Li, P.
1998-01-01
A high-resolution numerical study of three-dimensional, time-dependent, thermal convective flows on parallel systems is reported. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.
Xu, F.; Niu, J.; Chi, Z.; Xie, W.
2013-05-01
Analyzing and evaluating the reliability of multi-scale representations of spatial data has become an important issue in digital cartography and GIS. Settlement places are the main content of maps; studying the uncertainty of their multi-scale representation is therefore an important part of studying the uncertainty of multi-scale representation of spatial data in general. In this paper, the uncertainty of multi-scale representation of street-block settlements is comprehensively analyzed and systematically studied. We hold that map generalization is the essential cause of this uncertainty. First, the essence and types of uncertainty in the multi-scale representation of street-block settlements are explored, and these uncertainties are divided into four classes and seven subclasses. Second, among them, this paper mainly studies the uncertainty of symbolic street-block representation, and establishes the evaluation content, evaluation indexes and computing methods for the uncertainty of street-block and street-network generalization and of building generalization. The results can be used to evaluate the quality of scale-transfer methods and the uncertainty of multi-scale street-block settlement products.
Higashino, Takuya; Kawashima, Masatou; Mannoji, Hiromichi
2005-03-01
An 89-year-old man and a 60-year-old man presented with superficial temporal artery (STA) pseudoaneurysms which developed secondary to trauma. Conventional cerebral angiography and three-dimensional computed tomography (3D CT) angiography clearly demonstrated the STA pseudoaneurysms. The patients underwent surgical excision of the aneurysms based on the conventional cerebral angiography findings in one patient and the 3D CT angiography findings in the other patient. 3D CT angiography is an excellent noninvasive diagnostic method for detecting extracranial aneurysms such as STA pseudoaneurysms, and especially for showing the relationship between the aneurysm and surrounding structures, including the calvarium.
Buhmann, C.; Kretschmann, H.J.
1998-09-01
We present a three-dimensional (3D) anatomical computer-graphics model of the corticospinal system acquired from equidistant serial anatomical slices of six intracranially-fixed human brains. This model is part of a neuroanatomical reference system (NeuRef) which enables 3D visualization of the brain and shows the relationship of its components such as anatomical structures, functional fibre tracts and arteries. Sections through the models can be matched with corresponding CT or MR images. This allows the probable localisation of corticospinal fibres on CT or MRI.
Goh, Yin Peng; Lau, Kenneth K
2012-02-01
As described in this case report, the use of the 320-multidetector computed tomography scanner (Aquilion One, Toshiba Medical Systems, Japan) to produce continuous 3-dimensional images in real time, over a distance of 16 cm in the z-axis, aided in the diagnosis of a patient's restricted elbow joint. This state-of-the-art scanner allows fast and noninvasive dynamic-kinematic functional evaluation of the elbow joint in vivo. It will also be applicable to kinematic studies of other joints.