Essential numerical computer methods
Johnson, Michael L
2010-01-01
The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed; what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...
Numerical computations with GPUs
Kindratenko, Volodymyr
2014-01-01
This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including solution of linear equations and FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to
Computing the Alexander Polynomial Numerically
DEFF Research Database (Denmark)
Hansen, Mikael Sonne
2006-01-01
Explains how to construct the Alexander Matrix and how this can be used to compute the Alexander polynomial numerically.
How Well Do Computer-Generated Faces Tap Face Expertise?
Directory of Open Access Journals (Sweden)
Kate Crookes
The use of computer-generated (CG) stimuli in face processing research is proliferating due to the ease with which faces can be generated, standardised and manipulated. However, there has been surprisingly little research into whether CG faces are processed in the same way as photographs of real faces. The present study assessed how well CG faces tap face identity expertise by investigating whether two indicators of face expertise are reduced for CG faces when compared to face photographs. These indicators were accuracy for identification of own-race faces and the other-race effect (ORE), the well-established finding that own-race faces are recognised more accurately than other-race faces. In Experiment 1, Caucasian and Asian participants completed a recognition memory task for own- and other-race real and CG faces. Overall accuracy for own-race faces was dramatically reduced for CG compared to real faces, and the ORE was significantly and substantially attenuated for CG faces. Experiment 2 investigated perceptual discrimination for own- and other-race real and CG faces with Caucasian and Asian participants. Here again, accuracy for own-race faces was significantly reduced for CG compared to real faces. However, the ORE was not affected by format. Together these results signal that CG faces of the type tested here do not fully tap face expertise. Technological advancement may, in the future, produce CG faces that are equivalent to real photographs. Until then, caution is advised when interpreting results obtained using CG faces.
Numerical Analysis of Multiscale Computations
Engquist, Björn; Tsai, Yen-Hsi R
2012-01-01
This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.
Numerical computation of MHD equilibria
International Nuclear Information System (INIS)
Atanasiu, C.V.
1982-10-01
A numerical code for a two-dimensional MHD equilibrium computation has been carried out. The code solves the Grad-Shafranov equation in its integral form, for both formulations: the free-boundary problem and the fixed boundary one. Examples of the application of the code to tokamak design are given. (author)
Numerical methods in matrix computations
Björck, Åke
2015-01-01
Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.
Numerical and symbolic scientific computing
Langer, Ulrich
2011-01-01
The book presents the state of the art and results and also includes articles pointing to future developments. Most of the articles center around the theme of linear partial differential equations. Major aspects are fast solvers in elastoplasticity, symbolic analysis for boundary problems, symbolic treatment of operators, computer algebra, and finite element methods, a symbolic approach to finite difference schemes, cylindrical algebraic decomposition and local Fourier analysis, and white noise analysis for stochastic partial differential equations. Further numerical-symbolic topics range from
Numerical Computation of Detonation Stability
Kabanov, Dmitry
2018-06-03
Detonation is a supersonic mode of combustion that is modeled by a system of conservation laws of compressible fluid mechanics coupled with the equations describing thermodynamic and chemical properties of the fluid. Mathematically, these governing equations admit steady-state travelling-wave solutions consisting of a leading shock wave followed by a reaction zone. However, such solutions are often unstable to perturbations and rarely observed in laboratory experiments. The goal of this work is to study the stability of travelling-wave solutions of detonation models by the following novel approach. We linearize the governing equations about a base travelling-wave solution and solve the resultant linearized problem using high-order numerical methods. The results of these computations are postprocessed using dynamic mode decomposition to extract growth rates and frequencies of the perturbations and predict stability of travelling-wave solutions to infinitesimal perturbations. We apply this approach to two models based on the reactive Euler equations for perfect gases. For the first model with a one-step reaction mechanism, we find agreement of our results with the results of normal-mode analysis. For the second model with a two-step mechanism, we find that both types of admissible travelling-wave solutions exhibit the same stability spectra. Then we investigate Fickett's detonation analogue coupled with a particular reaction-rate expression. In addition to the linear stability analysis of this model, we demonstrate that it exhibits rich nonlinear dynamics with multiple bifurcations and chaotic behavior.
Numerical optimization with computational errors
Zaslavski, Alexander J
2016-01-01
This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...
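The flavor of such results can be illustrated with a toy projected-gradient computation: when every gradient evaluation carries an error bounded by a small constant δ, the iterates still land in an O(δ) neighborhood of the true constrained minimizer. The quadratic objective, box constraint, step size and error bound below are illustrative choices, not taken from the book.

```python
import numpy as np

# Minimize f(x) = ||x - c||^2 / 2 over the box [0,1]^n with projected
# gradient steps whose gradient evaluations carry errors of size <= delta.
rng = np.random.default_rng(1)
n, delta, step = 5, 1e-3, 0.5
c = np.array([0.3, 0.7, 1.5, -0.2, 0.5])   # unconstrained minimizer
x = np.zeros(n)
for _ in range(200):
    grad = x - c                                    # exact gradient of f
    noisy = grad + delta * rng.uniform(-1, 1, n)    # bounded computational error
    x = np.clip(x - step * noisy, 0.0, 1.0)         # projection onto the box

x_star = np.clip(c, 0.0, 1.0)   # exact constrained minimizer (projection of c)
err = np.linalg.norm(x - x_star)
```

With error-free gradients the iteration converges to `x_star` exactly; with δ-bounded errors the final `err` stays on the order of δ, which is the qualitative message of the convergence results.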
Numerical computer methods part D
Johnson, Michael L
2004-01-01
The aim of this volume is to brief researchers of the importance of data analysis in enzymology, and of the modern methods that have developed concomitantly with computer hardware. It is also to validate researchers' computer programs with real and synthetic data to ascertain that the results produced are what they expected. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.
Adapted all-numerical correlator for face recognition applications
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
2013-03-01
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits of correlation methods (detection, localization, and identification of a target object within a scene) while exploiting the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapting the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The resulting saturation effect degrades the decision performance of the correlator when filters contain up to nine references. Further, an optimization based on a segmented composite filter is proposed. Based on this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while minimizing the size of the learning database.
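The numerical Fourier-plane correlation at the heart of such a correlator can be sketched in a few lines. The 32×32 random "reference face" and the classical matched filter below are illustrative stand-ins for the composite filters studied in the paper; a phase-only variant would additionally normalize the filter spectrum to unit modulus.

```python
import numpy as np

# Numerical analogue of a VanderLugt correlator: multiply the scene spectrum
# by the conjugate reference spectrum (matched filter), inverse-transform,
# and look for a correlation peak at the target's location.
rng = np.random.default_rng(3)
ref = rng.normal(size=(32, 32))            # stand-in "reference face"
shift = (5, 9)
scene = np.roll(ref, shift, axis=(0, 1))   # target present in the scene, shifted

filt = np.conj(np.fft.fft2(ref))           # matched filter in the Fourier plane
corr = np.fft.ifft2(np.fft.fft2(scene) * filt).real
peak = np.unravel_index(np.argmax(corr), corr.shape)   # detected target location
```

The correlation peak sits at the target's shift, which is what gives correlation methods their simultaneous detection and localization property.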
Parallel computing: numerics, applications, and trends
National Research Council Canada - National Science Library
Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter
2009-01-01
... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...
A History of Computer Numerical Control.
Haggen, Gilbert L.
Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…
The Use of Computer-Mediated Communication To Enhance Subsequent Face-to-Face Discussions.
Dietz-Uhler, Beth; Bishop-Clark, Cathy
2001-01-01
Describes a study of undergraduate students that assessed the effects of synchronous (Internet chat) and asynchronous (Internet discussion board) computer-mediated communication on subsequent face-to-face discussions. Results showed that face-to-face discussions preceded by computer-mediated communication were perceived to be more enjoyable.…
Numerical computer methods part E
Johnson, Michael L
2004-01-01
The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.
Numerical Optimization Using Desktop Computers
1980-09-11
geophysical, optical and economic analysis to compute a life-cycle cost for a design with a stated energy capacity. NISCO stands for NonImaging ... more efficiently by nonimaging optical systems than by conventional image-forming systems. The methodology of designing optimized nonimaging systems ... compound parabolic concentrating ... Welford, W. T. and Winston, R., The Optics of Nonimaging Concentrators, Light and Solar Energy, p. ix, Academic
Numerical simulation of abutment pressure redistribution during face advance
Klishin, S. V.; Lavrikov, S. V.; Revuzhenko, A. F.
2017-12-01
The paper presents numerical simulation data on the abutment pressure redistribution in rock mass during face advance, including isolines of maximum shear stress and abutment pressure profiles. The stress state of rock in the vicinity of a breakage heading is calculated by the finite element method using a 2D nonlinear model of a structurally heterogeneous medium with regard to plasticity and internal self-balancing stress. The thus calculated stress field is used as input data for 3D discrete element modeling of the process. The study shows that the abutment pressure increases as the roof span extends and that the distance between the face breast and the peak point of this pressure depends on the elastoplastic properties and internal self-balancing stress of a rock medium.
Fluid dynamics theory, computation, and numerical simulation
Pozrikidis, C
2001-01-01
Fluid Dynamics: Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for a broad...
Fluid Dynamics Theory, Computation, and Numerical Simulation
Pozrikidis, Constantine
2009-01-01
Fluid Dynamics: Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for ...
Numerical methods for Bayesian inference in the face of aging
International Nuclear Information System (INIS)
Clarotti, C.A.; Villain, B.; Procaccia, H.
1996-01-01
In recent years, much attention has been paid to Bayesian methods for risk assessment. Until now, these methods have been studied from a theoretical point of view. Researchers have been mainly interested in studying the effectiveness of Bayesian methods in handling rare events, and in debating the problem of priors and other philosophical issues. An aspect central to the Bayesian approach is numerical computation, because any safety/reliability problem, in a Bayesian framework, ends with a problem of numerical integration. This aspect has been neglected until now because most risk studies assumed the exponential model as the basic probabilistic model; the existence of conjugate priors makes numerical integration unnecessary in this case. If aging is to be taken into account, no conjugate family is available and the use of numerical integration becomes compulsory. EDF (National Board of Electricity, of France) and ENEA (National Committee for Energy, New Technologies and Environment, of Italy) jointly carried out a research program aimed at developing quadrature methods suitable for Bayesian inference with underlying Weibull or gamma distributions. The paper illustrates the main results achieved during this research program and discusses, via some sample cases, the performance of the numerical algorithms, applied to data on the appearance of stress corrosion cracking in the tubes of steam generators of French PWR power plants. (authors)
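A minimal sketch of why quadrature becomes compulsory: with a Weibull lifetime model and a flat prior on the scale there is no conjugate family, so even the posterior mean must be computed by numerical integration. The failure times, fixed shape parameter, grid and simple Riemann rule below are illustrative assumptions; the research program described above develops much more careful quadrature schemes.

```python
import numpy as np

# Toy Bayesian computation with aging: Weibull likelihood, known shape k,
# flat prior on the scale eta > 0, posterior mean by numerical integration.
times = np.array([1.2, 2.5, 0.9, 3.1, 1.8])   # illustrative failure times
k = 1.5                                        # assumed known Weibull shape

def likelihood(eta):
    # Product of Weibull densities over the data, vectorized over eta.
    r = times[:, None] / eta
    return np.prod((k / eta) * r ** (k - 1) * np.exp(-r ** k), axis=0)

# Uniform grid standing in for an adaptive quadrature scheme.
eta_grid = np.linspace(0.1, 20.0, 4000)
post = likelihood(eta_grid)                    # unnormalized posterior
deta = eta_grid[1] - eta_grid[0]
Z = post.sum() * deta                          # normalizing constant
eta_mean = (eta_grid * post).sum() * deta / Z  # posterior mean of the scale
```

With an exponential model and a conjugate gamma prior this integral would be available in closed form; the Weibull shape parameter is exactly what forces the numerical route.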
Probabilistic numerics and uncertainty in computations.
Hennig, Philipp; Osborne, Michael A; Girolami, Mark
2015-07-08
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
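A toy illustration of the idea, assuming nothing about the paper's actual algorithms: even plain Monte Carlo integration can be read probabilistically, returning a point estimate together with an uncertainty that shrinks as more computation is spent. A full probabilistic numeric method would instead place a posterior over the integrand itself, e.g. Bayesian quadrature with a Gaussian process.

```python
import numpy as np

# A numerical routine that reports its own uncertainty: Monte Carlo
# integration of f over [0, 1] returns (estimate, standard error).
rng = np.random.default_rng(42)

def prob_integrate(f, n):
    x = rng.uniform(0.0, 1.0, n)
    y = f(x)
    est = y.mean()                    # point estimate of the integral
    se = y.std(ddof=1) / np.sqrt(n)   # uncertainty from finite computation
    return est, se

# Integrate x^2 on [0, 1]; the true value is 1/3.
est, se = prob_integrate(np.square, 100_000)
```

Downstream decisions can then weigh `se` explicitly, which is the management of numerical uncertainty the abstract argues for.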
Introduction to numerical computation in Pascal
Dew, P M
1983-01-01
Our intention in this book is to cover the core material in numerical analysis normally taught to students on degree courses in computer science. The main emphasis is placed on the use of analysis and programming techniques to produce well-designed, reliable mathematical software. The treatment should be of interest also to students of mathematics, science and engineering who wish to learn how to write good programs for mathematical computations. The reader is assumed to have some acquaintance with Pascal programming. Aspects of Pascal particularly relevant to numerical computation are revised and developed in the first chapter. Although Pascal has some drawbacks for serious numerical work (for example, only one precision for real numbers), the language has major compensating advantages: it is a widely used teaching language that will be familiar to many students and it encourages the writing of clear, well structured programs. By careful use of structure and documentation, we have produced codes that we be...
Numerical computation of linear instability of detonations
Kabanov, Dmitry; Kasimov, Aslan
2017-11-01
We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
Numerical simulation of runaway electron effect on Plasma Facing Components
International Nuclear Information System (INIS)
Ezato, Koichiro; Suzuki, Satoshi; Akiba, Masato; Kunugi, Tomoaki
1998-07-01
The runaway electron effects on Plasma Facing Components (PFCs) are studied by numerical analyses. The present study is the first investigation of the time-dependent thermal response of PFCs caused by runaway electron impact. For this purpose, we developed a new integrated numerical code, which consists of a Monte Carlo code for coupled electron-photon transport analysis and a finite element code for thermo-mechanical analysis. In this code, we apply the practical incident parameters and distribution of runaway electrons recently proposed by S. Putvinski, which can express the time-dependent behavior of runaway electron impact. The incident parameters of electrons in this study are an energy density ranging from 10 to 75 MJ/m², an average electron energy of 12.5 MeV, an incident angle of 0.01 deg and a characteristic time constant for the decay of the runaway electron event of 0.15 s. The numerical results showed that the divertor with CFC (Carbon-Fiber-Composite) armor did not suffer serious damage. On the other hand, maximum temperatures at the surface of the divertor with tungsten armor and the first wall with beryllium armor exceed the melting point for incident energy densities of 20 and 50 MJ/m². Within the range of incident conditions of runaway electrons, the cooling pipe of each PFC can be prevented from melting or burn-out caused by runaway electron impact, which is one of the possible consequences of a runaway electron event. (author)
Fluid dynamics theory, computation, and numerical simulation
Pozrikidis, C
2017-01-01
This book provides an accessible introduction to the basic theory of fluid mechanics and computational fluid dynamics (CFD) from a modern perspective that unifies theory and numerical computation. Methods of scientific computing are introduced alongside with theoretical analysis and MATLAB® codes are presented and discussed for a broad range of topics: from interfacial shapes in hydrostatics, to vortex dynamics, to viscous flow, to turbulent flow, to panel methods for flow past airfoils. The third edition includes new topics, additional examples, solved and unsolved problems, and revised images. It adds more computational algorithms and MATLAB programs. It also incorporates discussion of the latest version of the fluid dynamics software library FDLIB, which is freely available online. FDLIB offers an extensive range of computer codes that demonstrate the implementation of elementary and advanced algorithms and provide an invaluable resource for research, teaching, classroom instruction, and self-study. This ...
Ferrofluids: Modeling, numerical analysis, and scientific computation
Tomas, Ignacio
This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
Human face recognition using eigenface in cloud computing environment
Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.
2018-02-01
Recognizing a single face does not take long to process, but implementing an attendance or security system for a company with many faces to recognize can take a long time. Cloud computing is a computing service performed not on a local device but on a data center infrastructure connected to the internet. Cloud computing also provides a scalability solution: resources can be increased when larger data processing is required. This research applies the eigenface method; training data are collected using the REST concept to provide resources, so the server can process the data according to the defined stages. After the research and development of this application, it can be concluded that, by implementing eigenface and applying the REST concept as an endpoint for exchanging the related information used as a resource, a model can be formed to perform face recognition.
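The eigenface step itself can be sketched compactly: project mean-centred face vectors onto the top principal components and recognize a probe by nearest neighbour in that subspace. The random 64-dimensional "images", the component count, and the noise level below are illustrative stand-ins for real image data and have nothing to do with the paper's cloud deployment.

```python
import numpy as np

# Minimal eigenface sketch: PCA of training faces, nearest-neighbour
# matching of a probe in the reduced eigenface space.
rng = np.random.default_rng(7)
n_people, dim, k = 5, 64, 4
faces = rng.normal(size=(n_people, dim))   # one flattened "image" per person

mean_face = faces.mean(axis=0)
A = faces - mean_face
# Eigenfaces are the leading right singular vectors of the centred data.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:k]                        # top-k eigenfaces
train_w = A @ eigenfaces.T                 # training weights, one row per person

def recognize(probe):
    # Project the probe and return the index of the closest training face.
    w = (probe - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(train_w - w, axis=1)))

# A slightly noisy version of person 2 should map back to person 2.
probe = faces[2] + 0.05 * rng.normal(size=dim)
```

In the cloud setting described in the abstract, the training weights would live on the server and `recognize` would run behind the REST endpoint.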
The Many Faces of Computational Artifacts
DEFF Research Database (Denmark)
Christensen, Lars Rune; Harper, Richard
2016-01-01
Building on data from fieldwork at a medical department, this paper focuses on the varied nature of computational artifacts in practice. It shows that medical practice relies on multiple heterogeneous computational artifacts that form complex constellations. In the hospital studied the computatio...
Grid computing faces IT industry test
Magno, L
2003-01-01
Software company Oracle Corp. unveiled its Oracle 10g grid computing platform at the annual OracleWorld user convention in San Francisco. It gave concrete examples of how grid computing can be a viable option outside the scientific community where the concept was born (1 page).
The Application of Visual Basic Computer Programming Language to Simulate Numerical Iterations
Directory of Open Access Journals (Sweden)
Abdulkadir Baba HASSAN
2006-06-01
This paper examines the application of the Visual Basic programming language to simulate numerical iterations, the merits of Visual Basic as a programming language, and the difficulties faced when solving numerical iterations analytically. The paper encourages the use of computer programming methods for the execution of numerical iterations and, finally, develops a reliable solution by using the Visual Basic package to write a program for some selected iteration problems.
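For illustration (in Python rather than Visual Basic, and with Newton's method standing in for whichever iteration schemes the paper selects), programming a numerical iteration is a short loop: repeat the update rule until the step size falls below a tolerance.

```python
import math

# Newton's method for f(x) = 0, iterating x <- x - f(x)/f'(x).
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # converged to within tolerance
            break
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1, i.e. compute sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The same structure (initial guess, update, stopping test) carries over directly to a Visual Basic loop, which is what makes such iterations natural to program rather than to solve by hand.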
Collaborative Dialogue in Synchronous Computer-Mediated Communication and Face-to-Face Communication
Zeng, Gang
2017-01-01
Previous research has documented that collaborative dialogue promotes L2 learning in both face-to-face (F2F) and synchronous computer-mediated communication (SCMC) modalities. However, relatively little research has explored modality effects on collaborative dialogue. Thus, motivated by sociocultual theory, this study examines how F2F compares…
Saltarelli, Andy J.; Roseth, Cary J.
2014-01-01
Adapting face-to-face (FTF) pedagogies to online settings raises boundary questions about the contextual conditions in which the same instructional method stimulates different outcomes. We address this issue by examining FTF and computer-mediated communication (CMC) versions of constructive controversy, a cooperative learning procedure involving…
Learners' Willingness to Communicate in Face-to-Face versus Oral Computer-Mediated Communication
Yanguas, Íñigo; Flores, Alayne
2014-01-01
The present study had two main goals: to explore performance differences in a task-based environment between face-to-face (FTF) and oral computer-mediated communication (OCMC) groups, and to investigate the relationship between trait-like willingness to communicate (WTC) and performance in the FTF and OCMC groups. Students from two intact…
Learning Opportunities in Synchronous Computer-Mediated Communication and Face-to-Face Interaction
Kim, Hye Yeong
2014-01-01
This study investigated how synchronous computer-mediated communication (SCMC) and face-to-face (F2F) oral interaction influence the way in which learners collaborate in language learning and how they solve their communicative problems. The findings suggest that output modality may affect how learners produce language, attend to linguistic forms,…
Developing Face-to-Face Argumentation Skills: Does Arguing on the Computer Help?
Iordanou, Kalypso
2013-01-01
Arguing on the computer was used as a method to promote development of face-to-face argumentation skills in middle schoolers. In the study presented, sixth graders engaged in electronic dialogues with peers on a controversial topic and in some reflective activities based on transcriptions of the dialogues. Although participants initially exhibited…
Face-to-face versus computer-mediated communication in a primary school setting
Meijden, H.A.T. van der; Veenman, S.A.M.
2005-01-01
Computer-mediated communication is increasingly being used to support cooperative problem solving and decision making in schools. Despite the large body of literature on cooperative or collaborative learning, few studies have explicitly compared peer learning in face-to-face (FTF) versus
Tan, Lan Liana; Wigglesworth, Gillian; Storch, Neomy
2010-01-01
In today's second language classrooms, students are often asked to work in pairs or small groups. Such collaboration can take place face-to-face, but now more often via computer mediated communication. This paper reports on a study which investigated the effect of the medium of communication on the nature of pair interaction. The study involved…
Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.
Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O
2014-12-01
Craniofacial superimposition can provide evidence to support whether or not some human skeletal remains belong to a missing person. It involves overlaying a skull with a number of ante mortem images of an individual and analyzing their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error-prone, and time-consuming part of the whole process. Although a numerical assessment of the method's quality has not yet been achieved, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can thus be considered a tool to aid forensic anthropologists in the skull-face overlay stage, automating and removing the subjectivity of the most tedious task within craniofacial superimposition.
Integrated optical circuits for numerical computation
Verber, C. M.; Kenan, R. P.
1983-01-01
The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.
Computer-mediated and face-to-face communication in metastatic cancer support groups.
Vilhauer, Ruvanee P
2014-08-01
To compare the experiences of women with metastatic breast cancer (MBC) in computer-mediated and face-to-face support groups. Interviews from 18 women with MBC, who were currently in computer-mediated support groups (CMSGs), were examined using interpretative phenomenological analysis. The CMSGs were in an asynchronous mailing list format; women communicated exclusively via email. All the women were also, or had previously been, in a face-to-face support group (FTFG). CMSGs had both advantages and drawbacks, relative to face-to-face groups (FTFGs), for this population. Themes examined included convenience, level of support, intimacy, ease of expression, range of information, and dealing with debilitation and dying. CMSGs may provide a sense of control and a greater level of support. Intimacy may take longer to develop in a CMSG, but women may have more opportunities to get to know each other. CMSGs may be helpful while adjusting to a diagnosis of MBC, because women can receive support without being overwhelmed by physical evidence of disability in others or exposure to discussions about dying before they are ready. However, the absence of nonverbal cues in CMSGs also led to avoidance of topics related to death and dying when women were ready to face them. Agendas for discussion, the presence of a facilitator or more time in CMSGs may attenuate this problem. The findings were discussed in light of prevailing research and theories about computer-mediated communication. They have implications for designing CMSGs for this population.
International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics
DEVELOPMENTS IN RELIABLE COMPUTING
1999-01-01
The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly caused by the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB, and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries…
COMPUTATIONAL ANALYSIS OF BACKWARD-FACING STEP FLOW
Directory of Open Access Journals (Sweden)
Erhan PULAT
2001-01-01
Full Text Available In this study, the backward-facing step flows encountered in electronic systems cooling, heat exchanger design, and gas turbine cooling are investigated computationally. Steady, incompressible, two-dimensional air flow is analyzed. The inlet velocity is assumed uniform and is obtained from a parabolic profile by using the maximum velocity. In the analysis, the effects of the channel expansion ratio and the Reynolds number on the reattachment length are investigated. In addition, the pressure distribution along the channel length is obtained, and the flow is analyzed for Reynolds number values of 50 and 150 and channel expansion ratios of 1.5 and 2. The governing equations are solved using the Galerkin finite element method of the ANSYS-FLOTRAN code. The results are compared with solutions of the lattice BGK method, a relatively new method in fluid dynamics, and with other numerical and experimental results. It is concluded that the reattachment length increases with increasing Reynolds number and, at the same Reynolds number, decreases with increasing channel expansion ratio.
Numerical analysis of the non-contacting gas face seals
Blasiak, S.
2017-08-01
Non-contacting gas face seals are used in high-performance devices where the main requirements are safety and reliability. Meeting these requirements is made possible by careful research and analysis of the underlying physical processes, including, inter alia, fluid flow through the radial gap and the oscillations of the rings flexibly mounted in the housing under the influence of rotor kinematic forcing. Elaborating and developing mathematical models describing these phenomena allows increasingly accurate analysis results. The paper presents results of studies on the oscillations of the stationary ring made of different types of materials. The presented results make it possible to determine which of the materials used causes the greatest vibration amplitude of the fluid film-working rings system.
Decision Accuracy in Computer-Mediated versus Face-to-Face Decision-Making Teams.
Hedlund; Ilgen; Hollenbeck
1998-10-01
Changes in the way organizations are structured and advances in communication technologies are two factors that have altered the conditions under which group decisions are made. Decisions are increasingly made by teams that have a hierarchical structure and whose members have different areas of expertise. In addition, many decisions are no longer made via strictly face-to-face interaction. The present study examines the effects of two modes of communication (face-to-face or computer-mediated) on the accuracy of teams' decisions. The teams are characterized by a hierarchical structure and their members differ in expertise consistent with the framework outlined in the Multilevel Theory of team decision making presented by Hollenbeck, Ilgen, Sego, Hedlund, Major, and Phillips (1995). Sixty-four four-person teams worked for 3 h on a computer simulation interacting either face-to-face (FtF) or over a computer network. The communication mode had mixed effects on team processes in that members of FtF teams were better informed and made recommendations that were more predictive of the correct team decision, but leaders of CM teams were better able to differentiate staff members on the quality of their decisions. Controlling for the negative impact of FtF communication on staff member differentiation increased the beneficial effect of the FtF mode on overall decision making accuracy. Copyright 1998 Academic Press.
Numerical simulation of a backward-facing step flow in a microchannel with external electric field
Directory of Open Access Journals (Sweden)
Qing-He Yao
2015-03-01
Full Text Available A backward-facing step flow in the microchannel with external electric field was investigated numerically by a high-order accuracy upwind compact difference scheme in this work. The Poisson–Boltzmann and Navier–Stokes equations were computed by the high-order scheme, and the results confirmed the ability of the new solver in simulation of micro-scale electric double layer effects. The flow fields were displayed for different Reynolds numbers; the positions of the vortex saddle point of model with external electric field and model without external electric field were compared. The average velocity increases linearly with the electric field intensity; however, the Joule heating effects cannot be neglected when the electric field intensity increases to a certain level.
NINJA: Java for High Performance Numerical Computing
Directory of Open Access Journals (Sweden)
José E. Moreira
2002-01-01
Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.
Energy Technology Data Exchange (ETDEWEB)
Farahbakhsh, Iman; Paknejad, Amin; Ghassemi, Hassan [Amirkabir Univ. of Technology, Tehran (Iran, Islamic Republic of)
2012-10-15
This paper presents the numerical solutions of a two-dimensional laminar flow over a backward-facing step in the presence of the Lorentz body force. The Navier-Stokes equations in a vorticity-stream function formulation are numerically solved using a uniform grid mesh of 2001 × 51 points. A second-order central difference approximation is used for spatial derivatives. The solutions progress in time with a fourth-order Runge-Kutta method. The unsteady backward-facing step flow solution is computed for Reynolds numbers 100 to 800. The size and genesis of the recirculating regions are dramatically affected by applying the Lorentz force. The results demonstrate that using an appropriate configuration for applying the Lorentz force can make it an essential tool for controlling the flow in channels with a backward-facing step.
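The fourth-order Runge-Kutta time advance used above can be sketched generically. This is a minimal illustration on a scalar ODE with arbitrary parameters, not the authors' vorticity-stream function solver:

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate dy/dt = -y from y(0) = 1 to t = 1; the exact answer is e^-1.
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
```

The same stepper applies unchanged to the semi-discretized vorticity equations, with `y` a grid vector instead of a scalar.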
Numerical discrepancy between serial and MPI parallel computations
Directory of Open Access Journals (Sweden)
Sang Bong Lee
2016-09-01
Full Text Available Numerical simulations of the 1D Burgers equation and a 2D sloshing problem were carried out to study the numerical discrepancy between serial and parallel computations. The numerical domain was decomposed into 2 and 4 subdomains for parallel computations with message passing interface. The numerical solution of the Burgers equation disclosed that the fully explicit boundary conditions used on subdomains of the parallel computation were responsible for the numerical discrepancy of the transient solution between serial and parallel computations. Two-dimensional sloshing problems in a rectangular domain were solved using OpenFOAM. After a lapse of initial transient time, the sloshing patterns of water were significantly different in serial and parallel computations although the same numerical conditions were given. Based on histograms of pressure measured at two points near the wall, the statistical characteristics of the numerical solution were not affected by the number of subdomains as much as the transient solution was.
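The mechanism identified here, explicit treatment of subdomain interface values inside an otherwise implicit scheme, can be reproduced on a toy problem. The sketch below uses a 1D heat equation rather than the Burgers and sloshing cases of the study, and the grid size and step count are arbitrary assumptions:

```python
import numpy as np

def implicit_heat_step(u, r, left, right):
    """One backward-Euler step of u_t = u_xx on interior points.
    r = dt/dx^2; `left`/`right` are Dirichlet boundary values."""
    n = len(u)
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    b = u.copy()
    b[0] += r * left
    b[-1] += r * right
    return np.linalg.solve(A, b)

n, r, steps = 40, 1.0, 20
u0 = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))

# Serial: one implicit solve over the whole domain each step.
u_serial = u0.copy()
for _ in range(steps):
    u_serial = implicit_heat_step(u_serial, r, 0.0, 0.0)

# "Parallel" emulation: two subdomains whose shared interface value is
# taken explicitly from the previous time step.
ua, ub = u0[: n // 2].copy(), u0[n // 2 :].copy()
for _ in range(steps):
    ua_new = implicit_heat_step(ua, r, 0.0, ub[0])   # stale interface value
    ub_new = implicit_heat_step(ub, r, ua[-1], 0.0)  # stale interface value
    ua, ub = ua_new, ub_new
u_parallel = np.concatenate([ua, ub])

# The two runs use identical numerics except at the interface, yet differ.
discrepancy = float(np.max(np.abs(u_serial - u_parallel)))
```

The nonzero `discrepancy` comes entirely from the lagged interface coupling, mirroring the transient differences the study attributes to explicit subdomain boundaries.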
Computer security threats faced by small businesses in Australia
Hutchings, Alice
2012-01-01
In this paper, an overview is provided of computer security threats faced by small businesses. Having identified the threats, the implications for small business owners are described, along with countermeasures that can be adopted to prevent incidents from occurring. The results of the Australian Business Assessment of Computer User Security (ABACUS) survey, commissioned by the Australian Institute of Criminology (AIC), are drawn upon to identify key risks (Challice 2009; Richards 2009). Addi...
Stable numerical method in computation of stellar evolution
International Nuclear Information System (INIS)
Sugimoto, Daiichiro; Eriguchi, Yoshiharu; Nomoto, Ken-ichi.
1982-01-01
To compute the stellar structure and evolution in different stages, such as (1) red-giant stars in which the density and density gradient change over quite wide ranges, (2) rapid evolution with neutrino loss or unstable nuclear flashes, (3) hydrodynamical stages of star formation or supernova explosion, (4) transition phases from quasi-static to dynamical evolution, (5) mass-accreting or mass-losing stars in binary-star systems, and (6) evolution of a stellar core whose mass is increasing by shell burning or decreasing by penetration of the convective envelope into the core, we face ''multi-timescale problems'' which can be treated neither by a simple explicit scheme nor by an implicit one. This problem has been resolved by three prescriptions: one by introducing a hybrid scheme suitable for the multi-timescale problems of quasi-static evolution with heat transport, another by introducing a hybrid scheme suitable for the multi-timescale problems of hydrodynamic evolution, and the third by introducing the Eulerian or, in other words, the mass fraction coordinate for evolution with changing mass. When all of them are combined in a single computer code, we can compute numerically stably any phase of stellar evolution including transition phases, as long as the star is spherically symmetric. (author)
Roseth, Cary J.; Saltarelli, Andy J.; Glass, Chris R.
2011-01-01
Cooperative learning capitalizes on the relational processes by which peers promote learning, yet it remains unclear whether these processes operate similarly in face-to-face and online settings. This study addresses this issue by comparing face-to-face and computer-mediated versions of "constructive controversy", a cooperative learning procedure…
Directory of Open Access Journals (Sweden)
Aguert Marc
2016-12-01
Full Text Available The literature suggests that irony production expands in the developmental period of adolescence. We aimed to test this hypothesis by investigating two channels: face-to-face and computer-mediated communication (CMC). Corpora were collected by asking seventh and 11th graders to freely discuss some general topics (e.g., music), either face-to-face or on online forums. Results showed that 6.2% of the 11th graders’ productions were ironic utterances, compared with just 2.5% of the seventh graders’ productions, confirming the major development of irony production in adolescence. Results also showed that adolescents produced more ironic utterances in CMC than face-to-face. The analysis suggested that irony use is a strategy for increasing in-group solidarity and compensating for the distance intrinsic to CMC, as it was mostly inclusive and well-marked on forums. The present study also confirmed previous studies showing that irony is compatible with CMC.
Research in applied mathematics, numerical analysis, and computer science
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
The numerical parallel computing of photon transport
International Nuclear Information System (INIS)
Huang Qingnan; Liang Xiaoguang; Zhang Lifa
1998-12-01
The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport according to the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving the data dependencies, finding parallelizable ingredients, and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained.
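The "large-grain parallel subtasks" strategy can be illustrated with a toy Monte Carlo attenuation problem: independent batches of photon histories are farmed out to workers. This is a hedged sketch with made-up parameters (slab thickness, mean free path, batch sizes), not the photon transport code of the report; a production code would use processes or MPI ranks rather than threads:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def transmitted(n_photons, slab_thickness, mean_free_path, seed):
    """Count photons whose first free flight crosses the slab without a collision."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_photons):
        path = rng.expovariate(1.0 / mean_free_path)  # exponential free path
        if path > slab_thickness:
            hits += 1
    return hits

# Large-grain subtasks: each batch of histories is independent of the
# others, so the batches can be distributed to workers and summed.
batches = [(25_000, 1.0, 1.0, seed) for seed in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(lambda args: transmitted(*args), batches))

# For slab thickness L and mean free path lam, the expected transmitted
# fraction is exp(-L/lam); here exp(-1) ~ 0.368.
fraction = sum(counts) / 100_000
```

Because the batches share no data, this decomposition has no communication during the computation, which is why particle transport parallelizes so well.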
Topics in numerical partial differential equations and scientific computing
2016-01-01
Numerical partial differential equations (PDEs) are an important part of numerical simulation, the third component of the modern methodology for science and engineering, besides the traditional theory and experiment. This volume contains papers that originated with the collaborative research of the teams that participated in the IMA Workshop for Women in Applied Mathematics: Numerical Partial Differential Equations and Scientific Computing in August 2014.
An Evaluation of Java for Numerical Computing
Directory of Open Access Journals (Sweden)
Brian Blount
1999-01-01
Full Text Available This paper describes the design and implementation of high performance numerical software in Java. Our primary goals are to characterize the performance of object‐oriented numerical software written in Java and to investigate whether Java is a suitable language for such endeavors. We have implemented JLAPACK, a subset of the LAPACK library in Java. LAPACK is a high‐performance Fortran 77 library used to solve common linear algebra problems. JLAPACK is an object‐oriented library, using encapsulation, inheritance, and exception handling. It performs within a factor of four of the optimized Fortran version for certain platforms and test cases. When used with the native BLAS library, JLAPACK performs comparably with the Fortran version using the native BLAS library. We conclude that high‐performance numerical software could be written in Java if a handful of concerns about language features and compilation strategies are adequately addressed.
A novel polar-based human face recognition computational model
Directory of Open Access Journals (Sweden)
Y. Zana
2009-07-01
Full Text Available Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response pattern of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
Numerical computation of homogeneous slope stability.
Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong
2015-01-01
To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a slope with complex geometry, this study utilized the limit equilibrium method to derive expressions for the overall and partial factors of safety. The search for the minimum factor of safety (FOS) was transformed into a constrained nonlinear programming problem, to which an exhaustive method (EM) and a particle swarm optimization algorithm (PSO) were applied. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces, with factors of safety of 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).
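The PSO search described above can be sketched generically. This is a minimal textbook PSO over box bounds with a stand-in quadratic objective and assumed coefficients, not the authors' FOS formulation:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimization over box bounds.
    In a slope-stability setting, f(p) would return the factor of
    safety of the slip surface encoded by position p."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Clamp each coordinate back into its box constraint.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective with minimum 1.0 at (2, 3), mimicking a FOS landscape.
best, best_val = pso_minimize(lambda p: 1.0 + (p[0] - 2) ** 2 + (p[1] - 3) ** 2,
                              [(0.0, 5.0), (0.0, 5.0)])
```

The box clamp is the simplest way to respect the constraints of the nonlinear program; the study's constraints on admissible slip surfaces would replace it.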
Numerical characteristics of quantum computer simulation
Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.
2016-12-01
The simulation of quantum circuits is significantly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality; thus the usage of modern high-performance parallel computations is relevant. As is well known, an arbitrary quantum computation in the circuit model can be performed with only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate how the unique properties of quantum systems shape the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability and more specific dynamic characteristics) parts. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good base for research into and testing of development methods for data-intensive parallel software, and the considered analysis methodology can be successfully used for the improvement of algorithms in quantum information science.
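The single-qubit gate update the abstract analyzes can be sketched as a dense state-vector operation. This is a minimal illustration (the 3-qubit size and helper names are arbitrary choices), not the code benchmarked in the study:

```python
import numpy as np

def apply_1q_gate(state, gate, k, n):
    """Apply a 2x2 `gate` to qubit k of an n-qubit state vector.
    Reshaping exposes the target-qubit axis, so the update becomes a
    batch of independent 2-vector multiplies -- the source of the high
    parallelism noted in the abstract. Gates on widely separated qubits
    touch amplitudes far apart in memory, which is the locality problem."""
    t = state.reshape((2,) * n)
    t = np.moveaxis(t, k, 0)
    t = np.tensordot(gate, t, axes=([1], [0]))
    return np.moveaxis(t, 0, k).reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                   # |000>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
for k in range(n):                               # Hadamard on every qubit
    state = apply_1q_gate(state, H, k, n)
# state is now the uniform superposition over all 8 basis states
```

Each gate costs O(2^n) work over an exponentially large vector, which is why supercomputer-scale parallelism is needed beyond roughly 30-40 qubits.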
Computing complex Airy functions by numerical quadrature
A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)
2001-01-01
Integral representations are considered of solutions of the Airy differential equation w'' - zw = 0 for computing Airy functions for complex values of z. In a first method, contour integral representations of the Airy functions are written as non-oscillating
Numerical computation of generalized importance functions
International Nuclear Information System (INIS)
Gomit, J.M.; Nasr, M.; Ngyuen van Chi, G.; Pasquet, J.P.; Planchard, J.
1981-01-01
Thus far, an important effort has been devoted to developing and applying generalized perturbation theory in reactor physics analysis. In this work we are interested in the calculation of the importance functions by the method of A. Gandini. We have noted that in this method the convergence of the iterative procedure adopted is not rapid. Hence to accelerate this convergence we have used the semi-iterative technique. Two computer codes have been developed for one and two dimensional calculations (SPHINX-1D and SPHINX-2D). The advantage of our calculation was confirmed by some comparative tests in which the iteration number and the computing time were highly reduced with respect to classical calculation (CIAP-1D and CIAP-2D). (orig.)
Wainfan, Lynne; Davis, Paul K.
2004-08-01
As we increase our reliance on mediated communication, it is important to be aware of the media's influence on group processes and outcomes. A review of 40+ years of research shows that all media (videoconference, audioconference, and computer-mediated communication) change the context of the communication to some extent, reducing cues used to regulate and understand conversation, indicate participants' power and status, and move the group towards agreement. Text-based computer-mediated communication, the "leanest" medium, reduces status effects, domination, and consensus. This has been shown useful in broadening the range of inputs and ideas. However, it has also been shown to increase polarization, deindividuation, and disinhibition, and the time needed to reach a conclusion. For decision-making tasks, computer-mediated communication can increase choice shift and the likelihood of more risky or extreme decisions. In both videoconference and audioconference, participants cooperate less with linked collaborators, and shift their opinions toward extreme options, compared with face-to-face collaboration. In videoconference and audioconference, local coalitions can form where participants tend to agree more with those in the same room than with those on the other end of the line. There is also a tendency in audioconference to disagree with those on the other end of the phone. This paper is a summary of a much more extensive forthcoming report; it reviews the research literature and proposes strategies to leverage the benefits of mediated communication while mitigating its adverse effects.
Numerical cosmology: Revealing the universe using computers
International Nuclear Information System (INIS)
Centrella, J.; Matzner, R.A.; Tolman, B.W.
1986-01-01
In this paper the authors present two research projects which study the evolution of different periods in the history of the universe using numerical simulations. The first investigates the synthesis of light elements in an inhomogeneous early universe dominated by shocks and non-linear gravitational waves. The second follows the evolution of large scale structures during the later history of the universe and calculates their effect on the 3K background radiation. Their simulations are carried out using modern supercomputers and make heavy use of multidimensional color graphics, including film to elucidate the results. Both projects provide the authors the opportunity to do experiments in cosmology and assess their results against fundamental cosmological observations
Numerical computation of special functions with applications to physics
CSIR Research Space (South Africa)
Motsepe, K
2008-09-01
Full Text Available Students of mathematical physics, engineering, natural and biological sciences sometimes need to use special functions that are not found in ordinary mathematical software. In this paper a simple universal numerical algorithm is developed to compute...
Numerical aspects for efficient welding computational mechanics
Directory of Open Access Journals (Sweden)
Aburuga Tarek Kh.S.
2014-01-01
Full Text Available The effect of residual stresses and strains is one of the most important parameters in structural integrity assessment. A finite element model was constructed to simulate the multi-pass mismatched submerged arc welding (SAW) used in the welded tensile test specimen. A sequentially coupled thermo-mechanical analysis was performed using ABAQUS software to calculate the residual stresses and distortion due to welding. In this work, three main issues were studied in order to reduce the computation time of welding simulation, which is the major problem in computational welding mechanics (CWM). The first issue is the dimensionality of the problem: both two- and three-dimensional models were constructed for the same analysis type, and shell elements in the two-dimensional simulation showed good performance compared with brick elements. The conventional method of calculating residual stress uses an implicit scheme, because the welding and cooling times are relatively long. In this work, the author shows that an explicit scheme with the mass scaling technique can be used instead, reducing the analysis time very efficiently. Using this new technique, it becomes possible to simulate relatively large three-dimensional structures.
Computed tomography of tumors of paranasal sinuses and face
Energy Technology Data Exchange (ETDEWEB)
Lee, Sun Wha [Kyung Hee University College of Medicine, Seoul (Korea, Republic of)
1982-09-15
Computed tomography can image both the bone and soft tissue structures of the paranasal sinuses and face, and CT has therefore added an important new dimension to the radiological evaluation of disease of the paranasal sinuses and face. CT is a more accurate method of staging tumors and is essential for therapeutic planning. The author studied 25 cases of proven tumors of the paranasal sinuses and face during the period from October 1977 to August 1980 at Kyung Hee University Hospital. The results were as follows: 1. Among the 14 females and 11 males, ages ranged from 14 to 65 years. 2. The tumors comprised mucocele, squamous cell carcinoma, metastatic carcinoma, meningioma, angiofibroma, Masson's hemangiosarcoma, fibrous dysplasia, neurogenic sarcoma, schwannoma, hemangioma, epidermoid, transitional cell carcinoma and unknown types. 3. The location and extent of mucoceles were easily determined by CT; thus, in all cases of ethmoid mucocele, the chief complaint of exophthalmos could be readily explained by identifying extension into the peripheral fat space of the orbit. 4. It is our belief that CT was a useful method for staging tumors of the paranasal sinuses and was essential in choosing the appropriate treatment modality. 5. Contrast enhancement is generally not helpful in the pathologic diagnosis of tumors, but intracranial extension of tumors is clearly defined by contrast enhancement.
Univolatility curves in ternary mixtures: geometry and numerical computation
DEFF Research Database (Denmark)
Shcherbakova, Nataliya; Rodriguez-Donis, Ivonne; Abildskov, Jens
2017-01-01
We propose a new non-iterative numerical algorithm allowing computation of all univolatility curves in homogeneous ternary mixtures independently of the presence of the azeotropes. The key point is the concept of generalized univolatility curves in the 3D state space, which allows the main comput...
Numerical methods design, analysis, and computer implementation of algorithms
Greenbaum, Anne
2012-01-01
Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...
Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code
International Nuclear Information System (INIS)
Scott, T.C.; Grant, I.P.; Saunders, V.R.
1997-01-01
The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)
A summary of numerical computation for special functions
International Nuclear Information System (INIS)
Zhang Shanjie
1992-01-01
In the paper, special functions frequently encountered in science and engineering calculations are introduced. The computation of the values of Bessel functions and elliptic integrals is taken as an example, and some common algorithms for computing most special functions are discussed, such as series expansions for small argument, asymptotic approximations for large argument, polynomial approximations, recurrence formulas and iteration methods. In addition, the determination of the zeros of some special functions and other questions related to numerical computation are also discussed
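The small-argument series strategy mentioned in the abstract can be illustrated for the Bessel function J0. The sketch below is our own illustration, not code from the paper: it sums the power series J0(x) = Σ (-1)^k (x/2)^(2k) / (k!)^2, using the ratio of consecutive terms to avoid computing factorials explicitly.

```python
def bessel_j0_series(x: float, tol: float = 1e-16) -> float:
    """Power series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2, good for small |x|."""
    term = 1.0    # the k = 0 term
    total = 1.0
    k = 0
    while abs(term) > tol * abs(total):
        k += 1
        term *= -(x * x) / (4.0 * k * k)   # ratio of term k to term k-1
        total += term
    return total

print(bessel_j0_series(1.0))  # ≈ 0.7651976865579666
```

For large arguments this series suffers cancellation, which is exactly why the abstract pairs it with asymptotic approximations and recurrence formulas.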
Numerical simulation of 3D backward facing step flows at various Reynolds numbers
Directory of Open Access Journals (Sweden)
Louda Petr
2015-01-01
Full Text Available The work deals with the numerical simulation of 3D turbulent flow over a backward-facing step in a narrow channel. The mathematical model is based on the RANS equations with an explicit algebraic Reynolds stress model (EARSM). The numerical method uses an implicit finite volume upwind discretization. While eddy viscosity models fail in predicting complex 3D flows, the EARSM model is shown to provide results which agree well with experimental PIV data. The reference experimental data provide the 3D flow field. The simulations are compared with experiment for three values of the Reynolds number.
O'Rourke, Sean; Eskritt, Michelle; Bosacki, Sandra
2018-06-01
We explored Canadian adolescents', emergent adults', and adults' understandings of deception in computer-mediated communication (CMC) compared to face-to-face (FtF) communication. Participants between 13 and 50 years of age read vignettes of different types of questionable behaviour that occurred online or in real life, and were asked to judge whether deception was involved and the acceptability of the behaviour. Age groups evaluated deception similarly; however, adolescents held slightly different views from adults about what constitutes deception, suggesting that the understanding of deception continues to develop into adulthood. Furthermore, CMC behaviour was rated as more deceptive than FtF behaviour in general, and participants scoring higher on compassion perceived vignettes to be more deceptive. This study is a step towards better understanding the relationships between perceptions of deception across adolescence into adulthood, mode of communication, and compassion, and may have implications for how adults communicate with youth about deception in CMC and FtF contexts. Copyright © 2018 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Boyle, Andrea M; O'Sullivan, Lucia F
2016-05-01
Little is known about the features, depth, and quality of communication in heterosexual dating relationships that include computer-mediated communication (CMC). This study examined these features as well as CMC's potential to facilitate self-disclosure and information-seeking. It also evaluated whether partner CMC interactions play a role in partner intimacy and communication quality. Young adults (N = 359; 18-24) attending postsecondary education institutions completed an online survey about their CMC use. To be included in the study, all participants were in established dating relationships at the time of the study and reported daily communication with their partner. CMC was linked to partners' disclosure of nonintimate information. This personal self-disclosure was linked positively to relationship intimacy and communication quality, beyond contributions from face-to-face interactions. Breadth (not depth) of self-disclosure and positively valenced interactions, in particular, proved key to understanding greater levels of intimacy in dating relationships and better communication quality as a function of CMC. CMC provides opportunities for partners to stay connected and to improve the overall quality of their intimacy and communication.
NEGOTIATING COMMON GROUND IN COMPUTER-MEDIATED VERSUS FACE-TO-FACE DISCUSSIONS
Directory of Open Access Journals (Sweden)
Ilona Vandergriff
2006-01-01
Full Text Available To explore the impact of the communication medium on building common ground, this article presents research comparing learner use of reception strategies in traditional face-to-face (FTF) and in synchronous computer-mediated communication (CMC). Reception strategies, such as reprises, hypothesis testing and forward inferencing, provide evidence of comprehension and thus serve to establish common ground among participants. A number of factors, including communicative purpose or medium, are hypothesized to affect the use of such strategies (Clark & Brennan, 1991). In the data analysis, I (1) identify specific types of reception strategies, (2) compare their relative frequencies by communication medium, by task, and by learner, and (3) describe how these reception strategies function in the discussions. The findings of the quantitative analysis show that the medium alone seems to have little impact on grounding as indicated by use of reception strategies. The qualitative analysis provides evidence that participants adapted the strategies to the goals of the communicative interaction, as they used them primarily to negotiate and update common ground in their collaborative activity rather than to compensate for L2 deficiencies.
Roseth, Cary; Akcaoglu, Mete; Zellner, Andrea
2013-01-01
Online education is often assumed to be synonymous with asynchronous instruction, existing apart from or supplementary to face-to-face instruction in traditional bricks-and-mortar classrooms. However, expanding access to computer-mediated communication technologies now make new models possible, including distance learners synchronous online…
Rouhshad, Amir; Wigglesworth, Gillian; Storch, Neomy
2016-01-01
The Interaction Approach argues that negotiation for meaning and form is conducive to second language development. To date, most of the research on negotiations has been either in face-to-face (FTF) or text-based synchronous computer-mediated communication (SCMC) modes. Very few studies have compared the nature of negotiations across the modes.…
The Effects of Face-to-Face and Computer-Mediated Peer Review on EFL Writers' Comments and Revisions
Ho, Mei-ching
2015-01-01
This study investigates the use of face-to-face and computer-mediated peer review in an English as a Foreign Language (EFL) writing course to examine how different interaction modes affect comment categories, students' revisions, and their perceptions of peer feedback. The participants were an intact class of 13 students at a Taiwanese university.…
Numerical simulation in a two dimensional turbulent flow over a backward-facing step
International Nuclear Information System (INIS)
Silveira Neto, A. da; Grand, D.
1991-01-01
Numerical simulations of turbulent flows in complex geometries are generally restricted to the prediction of the mean flow and use semi-empirical turbulence models. The present study is devoted to the simulation of the coherent structures which develop in a flow subjected to a velocity change, downstream of a backward-facing step. Two aspect ratios (height of the step over height of the channel) have been explored, with Reynolds numbers varying from 6,000 to 90,000. In the isothermal case, coherent structures have been obtained by the numerical simulation in the mixing layer downstream of the step, and the simulations provide results in fairly good agreement with available experimental results. In a second step, a thermal stratification is imposed on this flow for one value of the Richardson number (0.5); the coherent structures disappear downstream for increasing values of the Richardson number. (author)
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
A textbook of computer based numerical and statistical techniques
Jaiswal, AK
2009-01-01
About the Book: The application of numerical analysis has become an integral part of the life of all modern engineers and scientists. The contents of this book cover both introductory topics and more advanced topics such as partial differential equations. This book differs from many other books in a number of ways. Salient features: the mathematical derivation of each method is given to build the student's understanding of numerical analysis; a variety of solved examples is given; computer programs for almost all the numerical methods discussed have been presented in the `C` langu...
Flashing characters with famous faces improves ERP-based brain-computer interface performance
Kaufmann, T.; Schulz, S. M.; Grünzinger, C.; Kübler, A.
2011-10-01
Currently, the event-related potential (ERP)-based spelling device, often referred to as P300-Speller, is the most commonly used brain-computer interface (BCI) for enhancing communication of patients with impaired speech or motor function. Among numerous improvements, a most central feature has received little attention, namely optimizing the stimulus used for eliciting ERPs. Therefore we compared P300-Speller performance with the standard stimulus (flashing characters) against performance with stimuli known for eliciting particularly strong ERPs due to their psychological salience, i.e. flashing familiar faces transparently superimposed on characters. Our results not only indicate remarkably increased ERPs in response to familiar faces but also improved P300-Speller performance due to a significant reduction of stimulus sequences needed for correct character classification. These findings demonstrate a promising new approach for improving the speed and thus fluency of BCI-enhanced communication with the widely used P300-Speller.
Numerical problems with the Pascal triangle in moment computation
Czech Academy of Sciences Publication Activity Database
Kautsky, J.; Flusser, Jan
2016-01-01
Roč. 306, č. 1 (2016), s. 53-68 ISSN 0377-0427 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : moment computation * Pascal triangle * appropriate polynomial basis * numerical problems Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0459096.pdf
Numerical computation of aeroacoustic transfer functions for realistic airfoils
De Santana, Leandro Dantas; Miotto, Renato Fuzaro; Wolf, William Roberto
2017-01-01
Based on Amiet's theory formalism, we propose a numerical framework to compute the aeroacoustic transfer function of realistic airfoil geometries. The aeroacoustic transfer function relates the amplitude and phase of an incoming periodic gust to the respective unsteady lift response permitting,
Computer-Numerical-Control and the EMCO Compact 5 Lathe.
Mullen, Frank M.
This laboratory manual is intended for use in teaching computer-numerical-control (CNC) programming using the Emco Maier Compact 5 Lathe. Developed for use at the postsecondary level, this material contains a short introduction to CNC machine tools. This section covers CNC programs, CNC machine axes, and CNC coordinate systems. The following…
Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance
Happola, Juho
2017-09-19
Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
Introduction to Numerical Computation - analysis and Matlab illustrations
DEFF Research Database (Denmark)
Elden, Lars; Wittmeyer-Koch, Linde; Nielsen, Hans Bruun
In a modern programming environment like e.g. MATLAB it is possible by simple commands to perform advanced calculations on a personal computer. In order to use such a powerful tool efficiently it is necessary to have an overview of available numerical methods and algorithms and to know about… …are illustrated by examples in MATLAB.
Numeric computation and statistical data analysis on the Java platform
Chekanov, Sergei V
2016-01-01
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...
Numerical computation of gravitational field for general axisymmetric objects
Fukushima, Toshio
2016-10-01
We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
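Step (II) of the method, recovering the acceleration vector by numerically differentiating the potential, can be conveyed in a few lines. The sketch below is a simplified stand-in (plain second-order central differences rather than Ridder's adaptive algorithm) applied to the point-mass potential Φ(R, z) = -GM/√(R²+z²), for which the exact acceleration is known:

```python
import math

def point_mass_potential(R: float, z: float, GM: float = 1.0) -> float:
    """Potential of a point mass at the origin, in cylindrical coordinates."""
    return -GM / math.hypot(R, z)

def acceleration(phi, R, z, h=1e-5):
    """a = -grad(phi) via central differences; Ridder's algorithm, used in the
    paper, refines such finite-difference estimates adaptively."""
    aR = -(phi(R + h, z) - phi(R - h, z)) / (2.0 * h)
    az = -(phi(R, z + h) - phi(R, z - h)) / (2.0 * h)
    return aR, az

aR, az = acceleration(point_mass_potential, 3.0, 4.0)
# exact: a = -GM * r_vec / r^3 with r = 5, i.e. aR = -3/125, az = -4/125
print(aR, az)
```

The point-mass test plays the same role as the paper's spheroid comparisons: an analytic solution against which the digits of the numerical field can be counted.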
Numerical methods and computers used in elastohydrodynamic lubrication
Hamrock, B. J.; Tripp, J. H.
1982-01-01
Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.
Numerical analysis of transient heat conduction in downward-facing curved sections during quenching
International Nuclear Information System (INIS)
Gao, C.; El-Genk, M.S.
1996-01-01
Pool boiling from downward-facing surfaces is of interest in many applications such as cooling of electric cables, handling of containers of hazardous liquids and external cooling of nuclear reactor vessels. Here, a two-dimensional numerical analysis was performed to determine pool boiling curves from downward-facing curved stainless-steel and copper surfaces during quenching in saturated water. To ensure stability and accuracy of the numerical solution, the alternating direction implicit (ADI) method based on finite control volume representations was employed. A time dependent boundary condition was provided by the measured temperature at nine interior locations near the boiling surface. Best results were obtained using a grid of 20x20 CVs and a non-iterative approach. Calculated temperatures near the top surface of the metal sections agreed with measured values to within 0.5 K and 2.5 K for the copper and stainless-steel sections, respectively. The running time on a Pentium 90 MHz PC for the entire boiling curve was 7% of the real transient time and 4% of that of a simplified Gaussian elimination (SGE) method for the Crank-Nicolson scheme
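To give a flavor of the implicit conduction stepping involved, the sketch below solves a one-dimensional analogue (a Crank-Nicolson step with the Thomas tridiagonal solver) rather than the paper's two-dimensional ADI scheme over control volumes; the decay of a sine mode is checked against the analytic solution of u_t = α u_xx.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson(u, alpha, dx, dt, steps):
    """Advance u_t = alpha*u_xx with u = 0 at both ends (interior unknowns only)."""
    r = alpha * dt / dx**2
    n = len(u)
    a, b, c = [-r / 2] * n, [1 + r] * n, [-r / 2] * n
    for _ in range(steps):
        # right-hand side (I + (r/2) D) u with zero Dirichlet boundaries
        d = [(r / 2) * (u[i - 1] if i > 0 else 0.0)
             + (1 - r) * u[i]
             + (r / 2) * (u[i + 1] if i < n - 1 else 0.0)
             for i in range(n)]
        u = thomas(a, b, c, d)
    return u

# sine initial condition on (0, 1); exact solution decays as exp(-pi^2 * alpha * t)
n, alpha, dt = 99, 1.0, 1e-4
dx = 1.0 / (n + 1)
u0 = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]
u = crank_nicolson(u0, alpha, dx, dt, steps=100)
decay = u[n // 2] / u0[n // 2]
print(decay, math.exp(-math.pi**2 * alpha * 0.01))
```

The ADI method of the paper applies exactly this kind of tridiagonal solve alternately along each coordinate direction of the 2D grid, which is what keeps it both stable and cheap.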
Development of small scale cluster computer for numerical analysis
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two units of personal computer were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, so the cluster has eight processing cores in total. The cluster runs the Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI "Hello" program written in the C language. Additionally, a performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, the same code was run four times, using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors the time required to solve the problem decreases; the calculation time is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware, capable of higher computing power compared to a single-CPU computer, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
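The near-ideal halving of runtime reported when the processor count doubles corresponds to a parallel fraction close to one in Amdahl's law. The quick sketch below (our illustration, not the authors' code) shows how even a small serial fraction caps the speedup attainable on an eight-core cluster:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n processors when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (1.0, 0.95, 0.9):
    print(f"parallel fraction {p:.2f}: speedup on 8 cores = {amdahl_speedup(p, 8):.2f}")
```

With p = 1 the speedup on 8 cores is exactly 8, matching the reported halving per doubling; at p = 0.9 it already drops below 5.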
Computer-Aided Numerical Inversion of Laplace Transform
Directory of Open Access Journals (Sweden)
Umesh Kumar
2000-01-01
Full Text Available This paper explores a technique for the computer-aided numerical inversion of the Laplace transform. The inversion technique is based on the properties of a family of three-parameter exponential probability density functions. The only limitation of the technique is the word length of the computer being used. The Laplace transform has been used extensively in the frequency domain solution of linear, lumped, time-invariant networks, but its application to the time domain has been limited, mainly because of the difficulty in finding the necessary poles and residues. The numerical inversion technique mentioned above does away with the poles and residues, instead using precomputed numbers to find the time response. This technique is applicable to the solution of partial differential equations and certain classes of linear systems with time-varying components.
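The paper's three-parameter exponential-density technique is not reproduced here, but the flavor of pole-free inversion from precomputed numbers can be conveyed with the standard Gaver-Stehfest algorithm, a different, well-known method substituted purely for illustration; it too evaluates the transform only at precomputed real points:

```python
import math

def stehfest_weights(N: int):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t: float, N: int = 12) -> float:
    """Approximate f(t) using only real samples F(k ln2 / t) -- no poles, no residues."""
    ln2 = math.log(2.0)
    V = stehfest_weights(N)
    return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# F(s) = 1/(s+1)  <->  f(t) = exp(-t)
print(invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0))  # ≈ exp(-1) ≈ 0.3679
```

As the abstract notes for its own method, word length is the binding constraint: the alternating-sign weights grow rapidly with N, so double precision limits how many terms are useful.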
Numerical simulation of strong evaporation and condensation for plasma-facing materials
International Nuclear Information System (INIS)
Kunugi, T.; Yasuda, H.
1994-01-01
The thermal response of the divertor plate to the hard plasma disruptions had been analyzed numerically by the two dimensional transient heat transfer code. There are several studies of the vapor shielding effects on the thermal response to the plasma disruption. However, it was pointed out some discrepancies among the numerical results calculated by U.S., EC and Japan for the same disruption conditions by van der Laan. One of the authors studied the sensitivity of some parameters (i.e., the temperature dependency of the thermal properties, an evaporation coefficient and a saturated condensation ratio) of disruption erosion analysis. Though the authors expected that the variations in evaporation models lead to the large variety of the erosion, they gave no significant effects on the surface temperature, the evaporation and melt-layer thickness. In this paper, the authors will describe the development of the numerical simulation codes for the strong evaporation and condensation from the plasma facing materials (PFMs) such as carbon, tungsten and beryllium
NUMERICAL ANALYSIS OF AIRFLOW AND METHANE EMITTED FROM THE MINE FACE IN A BLIND DOG HEADING
Directory of Open Access Journals (Sweden)
Jarosław BRODNY
2015-04-01
Full Text Available Ventilation is one of the most commonly encountered problems during the driving of dog headings. While being driven, such a heading has only one connection with the air stream routes, which significantly complicates the process of its ventilation. If the heading is driven in coal in a methane seam, it is additionally endangered by methane emission, and in that case the ventilation process is much more difficult. The paper presents the results of a numerical analysis of the ventilation of blind dog headings using an air duct forcing air towards the mine face. The analysis was performed for four different air velocities at the outlet of the air duct. The calculations were made for a heading excavated with a heading machine and a belt conveyor.
DEFF Research Database (Denmark)
Mortensen, Kristine Køhler; Brotherton, Chloe
2018-01-01
for the face to be put into action. Based on an ethnographic study of Danish teenagers' use of SnapChat we demonstrate how the face is used as a central medium for interaction with peers. Through the analysis of visual SnapChat messages we investigate how SnapChat requires the sender to put on an 'ugly' face… …already secured their popular status on the heterosexual marketplace in the broad context of the school. Thus SnapChat functions both as a challenge to beauty norms of 'flawless faces' and as a reinscription of these same norms by further manifesting the exclusive status of the popular girl…
A New Language Design for Prototyping Numerical Computation
Directory of Open Access Journals (Sweden)
Thomas Derby
1996-01-01
Full Text Available To naturally and conveniently express numerical algorithms, considerable expressive power is needed in the languages in which they are implemented. The language Matlab is widely used by numerical analysts for this reason. Expressiveness or ease-of-use can also result in a loss of efficiency, as is the case with Matlab. In particular, because numerical analysts are highly interested in the performance of their algorithms, prototypes are still often implemented in languages such as Fortran. In this article we describe a language design that is intended to both provide expressiveness for numerical computation, and at the same time provide performance guarantees. In our language, EQ, we attempt to include both syntactic and semantic features that correspond closely to the programmer's model of the problem, including unordered equations, large-granularity state transitions, and matrix notation. The resulting language does not fit into standard language categories such as functional or imperative but has features of both paradigms. We also introduce the notion of language dependability, which is the idea that a language should guarantee that certain program transformations are performed by all implementations. We first describe the interesting features of EQ, and then present three examples of algorithms written using it. We also provide encouraging performance results from an initial implementation of our language.
International Nuclear Information System (INIS)
Laan, J.G. van der; Akiba, M.; Seki, M.; Hassanein, A.; Tanchuk, V.
1991-01-01
An evaluation is given of the predictions of disruption erosion in the International Thermonuclear Experimental Reactor (ITER). First, a description is given of the relation between plasma operating parameters and system dimensions and the predicted loading parameters of Plasma Facing Components (PFC) in off-normal events. Numerical results from the ITER parties on the prediction of disruption erosion are compared for a few typical cases and discussed. Apart from some differences in the codes, the observed discrepancies can be ascribed to different input data for material properties and boundary conditions. Some physical models for vapour shielding and their effects on numerical results are mentioned. Experimental results from the ITER parties, obtained with electron and laser beams, are also compared. Erosion rates for the candidate ITER PFC materials are shown to depend very strongly on the energy deposition parameters, which are based on plasma physics considerations, and on the assumed material loss mechanisms. Lifetime estimates for the divertor plate and first wall armour are given for carbon, tungsten and beryllium, based on the erosion in the thermal quench phase. (orig.)
A mutually profitable alliance - Asymptotic expansions and numerical computations
Euvrard, D.
Problems including the flow past a wing airfoil at Mach 1, and the two-dimensional flow past a partially immersed body are used to show the advantages of coupling a standard numerical method for the whole domain where everything is of the order of 1, with an appropriate asymptotic expansion in the vicinity of some singular point. Cases more closely linking the two approaches are then considered. In the localized finite element method, the asymptotic expansion at infinity becomes a convergent series and the problem reduces to a variational form. Combined analytical and numerical methods are used in the singularity distribution method and in the various couplings of finite elements and a Green integral representation to design a subroutine to compute the Green function and its derivatives.
Learning SciPy for numerical and scientific computing
Silva
2013-01-01
A step-by-step practical tutorial with plenty of examples on research-based problems from various areas of science, proving how simple, yet effective, it is to provide solutions based on SciPy. This book is targeted at anyone with a basic knowledge of Python, a somewhat advanced command of mathematics/physics, and an interest in engineering or scientific applications; this is broadly what we refer to as scientific computing. This book will be of critical importance to programmers and scientists who have basic Python knowledge and would like to be able to do scientific and numerical computatio
Numerical analysis of boosting scheme for scalable NMR quantum computation
International Nuclear Information System (INIS)
SaiToh, Akira; Kitagawa, Masahiro
2005-01-01
Among initialization schemes for ensemble quantum computation beginning at thermal equilibrium, the scheme proposed by Schulman and Vazirani [in Proceedings of the 31st ACM Symposium on Theory of Computing (STOC'99) (ACM Press, New York, 1999), pp. 322-329] is known for its simple quantum circuit for redistributing the biases (polarizations) of qubits and for its small time complexity. However, our numerical simulation shows that the number of qubits initialized by the scheme is rather smaller than expected from the von Neumann entropy, because of an increase in the sum of the binary entropies of the individual qubits, which indicates a growth in the total classical correlation. This result - namely, that there is such a significant growth in the total binary entropy - disagrees with that of their analysis.
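The entropy argument can be made concrete with a small classical simulation. The sketch below (an illustration, not the Schulman-Vazirani circuit itself) computes the von Neumann-entropy bound n(1 - H((1+ε)/2)) on the number of initializable qubits, and simulates the elementary two-bit bias-boosting step, whose conditional bias is the textbook value 2ε/(1+ε²):

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def entropy_bound(n, eps):
    """Upper bound on qubits initializable from n qubits of bias eps."""
    p = (1 + eps) / 2
    return n * (1 - binary_entropy(p))

def boost_step(bits_a, bits_b):
    """Classical analogue of one bias-boosting step: keep bit a wherever
    a and b agree; the surviving bits have a larger bias."""
    return bits_a[bits_a == bits_b]

rng = np.random.default_rng(0)
eps, n = 0.2, 200_000
a = (rng.random(n) < (1 + eps) / 2).astype(int)
b = (rng.random(n) < (1 + eps) / 2).astype(int)
kept = boost_step(a, b)
boosted = 2 * kept.mean() - 1        # empirical bias after the step
predicted = 2 * eps / (1 + eps**2)   # analytic conditional bias
```

With ε = 0.2 the boosted bias is about 0.385, noticeably above the input bias, while the number of surviving bits shrinks, which is the trade-off the scheme exploits.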
Cabaroglu, Nese; Basaran, Suleyman; Roberts, Jon
2010-01-01
This study compares pauses, repetitions and recasts in matched task interactions under face-to-face and computer-mediated conditions. Six first-year English undergraduates at a Turkish University took part in Skype-based voice chat with a native speaker and face-to-face with their instructor. Preliminary quantitative analysis of transcripts showed…
Weighted Local Active Pixel Pattern (WLAPP) for Face Recognition in Parallel Computation Environment
Directory of Open Access Journals (Sweden)
Gundavarapu Mallikarjuna Rao
2013-10-01
Full Text Available Abstract - The availability of multi-core technology has resulted in a totally new computational era. Researchers are keen to explore the potential available in state-of-the-art machines for breaking the barrier imposed by serial computation. Face Recognition is a challenging application in any computational environment. The main difficulty of traditional Face Recognition algorithms is their lack of scalability. In this paper Weighted Local Active Pixel Pattern (WLAPP), a new scalable Face Recognition algorithm suitable for parallel environments, is proposed. Local Active Pixel Pattern (LAPP) is found to be simple and computationally inexpensive compared to Local Binary Patterns (LBP). WLAPP is developed based on the concept of LAPP. The experimentation is performed on the FG-Net Aging Database with deliberately introduced 20% distortion and the results are encouraging. Keywords - Active pixels, Face Recognition, Local Binary Pattern (LBP), Local Active Pixel Pattern (LAPP), Pattern computing, parallel workers, template, weight computation.
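The abstract contrasts LAPP with the standard Local Binary Pattern operator; the details of LAPP and WLAPP are not given there, but plain LBP itself can be sketched in a few lines (a minimal illustration of the baseline operator, not the authors' algorithm):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour Local Binary Pattern on a 2-D grayscale array.
    Each interior pixel gets an 8-bit code: bit k is set when the k-th
    neighbour is >= the centre pixel."""
    img = np.asarray(img, dtype=float)
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        code |= (neigh >= centre).astype(np.uint8) << bit
    return code
```

The per-pixel codes are independent, so the image can be tiled across workers, which is exactly the kind of data parallelism the paper targets.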
Exploring the marketing challenges faced by assembled computer dealers
Kallimani, Rashmi
2010-01-01
There has been great competition in the computer market these days for obtaining higher market share. The computer market, consisting of many branded and non-branded players, has been using various methods for matching supply and demand in the best possible way to attain market dominance. Branded companies are seen to be investing large amounts in aggressive marketing techniques for reaching customers and obtaining higher market share. Due to this many small companies and non-branded computer...
A numerical method to compute interior transmission eigenvalues
International Nuclear Information System (INIS)
Kleefeld, Andreas
2013-01-01
In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view, existing methods are very expensive and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated close together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than that of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
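The contour-integral idea admits a compact sketch. The toy solver below implements a Beyn-type contour-integral eigensolver (an illustration under simplifying assumptions, not the paper's boundary-integral formulation): two contour moments of T(z)⁻¹ applied to random probe vectors reduce the nonlinear eigenproblem T(z)v = 0 to a small linear one whose eigenvalues are those inside the contour.

```python
import numpy as np

def beyn_eigenvalues(T, center, radius, m, probes=4, n_quad=64, tol=1e-8):
    """Beyn-type contour-integral eigensolver: returns the eigenvalues of
    T(z) v = 0 lying inside the circle |z - center| = radius.
    T is a callable mapping z to an (m x m) matrix."""
    rng = np.random.default_rng(1)
    V = rng.standard_normal((m, probes))        # random probing columns
    A0 = np.zeros((m, probes), dtype=complex)   # 0th contour moment
    A1 = np.zeros((m, probes), dtype=complex)   # 1st contour moment
    for k in range(n_quad):                     # trapezoid rule on the circle
        t = 2 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * t)
        w = radius * np.exp(1j * t) / n_quad    # weight; factor 2*pi*i cancels
        X = np.linalg.solve(T(z), V)
        A0 += w * X
        A1 += w * z * X
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))             # numerical rank = # eigenvalues
    B = (U[:, :r].conj().T @ A1 @ Wh[:r].conj().T) / s[:r]
    return np.linalg.eigvals(B)
```

On a linear test pencil T(z) = zI - A the solver recovers exactly the eigenvalues of A enclosed by the contour, which is a convenient sanity check before applying such a method to a genuinely nonlinear problem.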
Computational techniques for inelastic analysis and numerical experiments
International Nuclear Information System (INIS)
Yamada, Y.
1977-01-01
A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. With respect to problems in which time-dependent behavior is significant, it is desirable to incorporate a procedure that works with the mechanical-model formulations as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at increasingly high temperatures over long periods of time. Special considerations are crucial if the analysis is to be extended to the large-strain regime where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)
Numerical simulation of NQR/NMR: Applications in quantum computing.
Possa, Denimar; Gaudio, Anderson C; Freitas, Jair C C
2011-04-01
A numerical simulation program able to simulate nuclear quadrupole resonance (NQR) as well as nuclear magnetic resonance (NMR) experiments is presented, written using the Mathematica package and aimed especially at applications in quantum computing. The program makes use of the interaction picture to compute the effect of the relevant nuclear spin interactions, without any assumption about the relative size of each interaction. This makes the program flexible and versatile, useful in a wide range of experimental situations, from NQR (at zero or under a small applied magnetic field) to high-field NMR experiments. Some conditions specifically required for quantum computing applications are implemented in the program, such as the use of elliptically polarized radiofrequency fields and the inclusion of first- and second-order terms in the average Hamiltonian expansion. A number of examples dealing with simple NQR and quadrupole-perturbed NMR experiments are presented, along with proposals of experiments to create quantum pseudopure states and logic gates using NQR. The program and the various application examples are freely available through the link http://www.profanderson.net/files/nmr_nqr.php. Copyright © 2011 Elsevier Inc. All rights reserved.
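At the core of any such simulator is the spin propagator U(t) = exp(-iHt). As a minimal illustration (not the Mathematica package itself; the rf amplitude is a hypothetical value), the sketch below builds the propagator for a spin-1/2 via eigendecomposition and checks that an on-resonance pi pulse inverts the population:

```python
import numpy as np

# Spin-1/2 operators in units with hbar = 1
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def propagator(H, t):
    """U(t) = exp(-i H t) for a Hermitian Hamiltonian, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Rabi nutation: an on-resonance rf field along x rotates the spin about x.
omega1 = 2 * np.pi * 1.0              # rf amplitude in rad/s (hypothetical)
H_rf = omega1 * sx
psi0 = np.array([1, 0], dtype=complex)  # spin up along z
t_pi = np.pi / omega1                   # pi-pulse duration
psi = propagator(H_rf, t_pi) @ psi0
pop_down = abs(psi[1])**2               # population of the inverted state
```

The same propagator routine extends directly to multi-spin Hamiltonians, where the interaction-picture transformation mentioned in the abstract removes the dominant term before the remaining interactions are exponentiated.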
Numerical evaluation of methods for computing tomographic projections
International Nuclear Information System (INIS)
Zhuang, W.; Gopal, S.S.; Hebert, T.J.
1994-01-01
Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
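The speed-versus-accuracy trade-off described above can be sketched concretely: a pixel-driven parallel-beam projector in which each pixel is split into sub x sub sub-pixels, each deposited into the detector bin its centre maps to (a simplified illustration; the bin geometry and unit pixel spacing are assumptions, not the paper's exact setup).

```python
import numpy as np

def pixel_driven_projection(img, theta, n_bins, sub=1):
    """Parallel-beam forward projection at angle theta (radians).
    Each pixel is split into sub x sub sub-pixels; every sub-pixel's value
    is accumulated into the detector bin its centre projects onto.
    Increasing `sub` refines the underlying numerical integration."""
    h, w = img.shape
    proj = np.zeros(n_bins)
    c, s = np.cos(theta), np.sin(theta)
    step = 1.0 / sub
    for i in range(h):
        for j in range(w):
            v = img[i, j] / (sub * sub)
            if v == 0.0:
                continue
            for a in range(sub):
                for b in range(sub):
                    # sub-pixel centre in a frame centred on the image
                    y = i - h / 2 + (a + 0.5) * step
                    x = j - w / 2 + (b + 0.5) * step
                    r = x * c + y * s          # signed detector coordinate
                    k = int(np.floor(r + n_bins / 2))
                    if 0 <= k < n_bins:
                        proj[k] += v
    return proj
```

With sub=1 this is the cheap pixel-driven method; raising sub increases the cost quadratically while reducing the aliasing that the abstract attributes to coarse integration.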
Cloud computing. Legal framework and enterprises facing a controversial phenomenon.
De Vivo, Maria Concetta
2015-01-01
This paper analyzes cloud computing from a juridical point of view, with specific attention to European and national legislation. The issues addressed concern privacy, data security, the question of liability in data processing and archiving activities, and the complex problem of service-level agreements.
Numerical simulation of complex turbulent flow over a backward-facing step
International Nuclear Information System (INIS)
Silveira Neto, A.
1991-06-01
A statistical and topological study of a complex turbulent flow over a backward-facing step is realized by means of Direct and Large-Eddy Simulations. Direct simulations are performed in an isothermal and in a stratified two-dimensional case. In the isothermal case coherent structures have been obtained by the numerical simulation in the mixing layer downstream of the step. In a second step a thermal stratification is imposed on this flow. The coherent structures are in this case produced in the immediate vicinity of the step and disappear downstream for increasing stratification. Afterwards, large-eddy simulations are carried out in the three-dimensional case. The subgrid-scale model is a local adaptation to the physical space of the spectral eddy-viscosity concept. The statistics of turbulence are in good agreement with the experimental data corresponding to a small step configuration. Furthermore, calculations at a higher step configuration show that the eddy structure of the flow presents striking analogies with plane shear layers, with large billows shed behind the step and intense longitudinal vortices strained between these billows [fr
Face to Face In-vitro to In-silico – How Computers are Arming Biology!
Indian Academy of Sciences (India)
ment of novel computational methods to understand the structure and functions of .... to pairwise alignment (the query is aligned only to the best match in the database), where the ... Andrej Sali, while he was a doctorate student at Prof. Sir Tom ...
Summary of research in applied mathematics, numerical analysis, and computer sciences
1986-01-01
The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.
Singularities of robot mechanisms numerical computation and avoidance path planning
Bohigas, Oriol; Ros, Lluís
2017-01-01
This book presents the singular configurations associated with a robot mechanism, together with robust methods for their computation, interpretation, and avoidance path planning. Having such methods is essential as singularities generally pose problems to the normal operation of a robot, but also determine the workspaces and motion impediments of its underlying mechanical structure. A distinctive feature of this volume is that the methods are applicable to nonredundant mechanisms of general architecture, defined by planar or spatial kinematic chains interconnected in an arbitrary way. Moreover, singularities are interpreted as silhouettes of the configuration space when seen from the input or output spaces. This leads to a powerful image that explains the consequences of traversing singular configurations, and all the rich information that can be extracted from them. The problems are solved by means of effective branch-and-prune and numerical continuation methods that are of independent interest in themselves...
Numerical demonstration of neuromorphic computing with photonic crystal cavities.
Laporte, Floris; Katumba, Andrew; Dambre, Joni; Bienstman, Peter
2018-04-02
We propose a new design for a passive photonic reservoir computer on a silicon photonics chip which can be used in the context of optical communication applications, and study it through detailed numerical simulations. The design consists of a photonic crystal cavity with a quarter-stadium shape, which is known to foster interesting mixing dynamics. These mixing properties turn out to be very useful for memory-dependent optical signal processing tasks, such as header recognition. The proposed, ultra-compact photonic crystal cavity exhibits a memory of up to 6 bits, while simultaneously accepting bitrates in a wide region of operation. Moreover, because of the inherent low losses in a high-Q photonic crystal cavity, the proposed design is very power efficient.
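Reservoir computing of this kind trains only a linear readout on top of fixed mixing dynamics. The sketch below substitutes a generic echo-state reservoir for the photonic-crystal cavity (an assumption for illustration only; the reservoir size, scalings and the delay-2 memory task are arbitrary choices, not the paper's model) and trains a ridge-regression readout:

```python
import numpy as np

rng = np.random.default_rng(42)

# A fixed random reservoir stands in for the cavity's mixing dynamics.
N = 100
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with scalar input u(t); collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Memory task: reproduce the binary input delayed by 2 steps.
u = rng.integers(0, 2, size=1200).astype(float)
X = run_reservoir(u)
delay = 2
X_tr, y_tr = X[200:1000], u[200 - delay:1000 - delay]
# Ridge-regression readout: w = (X^T X + lam I)^{-1} X^T y
lam = 1e-6
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(N), X_tr.T @ y_tr)
X_te, y_te = X[1000:], u[1000 - delay:1200 - delay]
acc = np.mean((X_te @ w > 0.5) == (y_te > 0.5))
```

The cavity in the paper plays the role of W here: its mixing supplies the memory, and only the readout weights are learned, which is what makes passive photonic implementations attractive.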
Computer simulations and the changing face of scientific experimentation
Duran, Juan M
2013-01-01
Computer simulations have become a central tool for scientific practice. Their use has replaced, in many cases, standard experimental procedures, not to mention cases where the target system is empirical but there are no techniques for direct manipulation of the system, such as astronomical observation. In these cases, computer simulations have proved to be of central importance. The question of their use and implementation, therefore, is not only a technical one but represents a challenge for the humanities as well. In this volume, scientists, historians, and philosophers joi
CDIAC catalog of numeric data packages and computer model packages
International Nuclear Information System (INIS)
Boden, T.A.; Stoss, F.W.
1993-05-01
The Carbon Dioxide Information Analysis Center acquires, quality-assures, and distributes to the scientific community numeric data packages (NDPs) and computer model packages (CMPs) dealing with topics related to atmospheric trace-gas concentrations and global climate change. These packages include data on historic and present atmospheric CO2 and CH4 concentrations, historic and present oceanic CO2 concentrations, historic weather and climate around the world, sea-level rise, storm occurrences, volcanic dust in the atmosphere, sources of atmospheric CO2, plants' response to elevated CO2 levels, sunspot occurrences, and many other indicators of, contributors to, or components of climate change. This catalog describes the packages presently offered by CDIAC, reviews the processes used by CDIAC to assure the quality of the data contained in these packages, notes the media on which each package is available, describes the documentation that accompanies each package, and provides ordering information. Numeric data are available in the printed NDPs and CMPs, in CD-ROM format, and from an anonymous FTP area via Internet. All CDIAC information products are available at no cost
Zimmerman, D P
1987-01-01
This study analyzes the content of communications among 18 severely disturbed adolescents. Interactions were recorded from two sources: computer-based "conferences" for the group, and small group face-to-face sessions which addressed similar topics. The purpose was to determine whether there are important differences in indications of psychological state, interpersonal interest, and expressive style. The research was significant, given the strong attraction of computers to many adolescents and the paucity of research on social-psychological effects of this technology. A content analysis based on a total sample of 10,224 words was performed using the Harvard IV Psychosociological Dictionary. Results indicated that computer-mediated communication was more expressive of feelings and made more frequent mention of interpersonal issues. Further, it displayed a more positive object-relations stance, was less negative in expressive style, and appeared to diminish certain traditional gender differences in group communication. These findings suggest that the computer may have an interesting adjunct role to play in reducing communication deficits commonly observed in severely disturbed adolescent clinical populations.
DEFF Research Database (Denmark)
Skovgaard, M.; Nielsen, Peter V.
In this paper it is investigated whether it is possible to simulate and capture some of the low Reynolds number effects numerically, using time-averaged momentum equations and a low Reynolds number k-ε model. The test case is the laminar-to-turbulent transitional flow over a backward-facing step...
Fauzi, Ahmad
2017-11-01
Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are the dense curriculum, which makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants of this research were 54 students in the fourth semester of the physics education department. As a result, we concluded that numerical computation could be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with another course. The results of this research complement the study on how to integrate numerical computation in learning physics using Excel spreadsheets.
Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.
2006-01-01
Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…
Numerical and analytical solutions for problems relevant for quantum computers
International Nuclear Information System (INIS)
Spoerl, Andreas
2008-01-01
Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation. The resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study was made to investigate whether there exists a sufficient set of criteria to identify systems with dynamics which are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
Computer prediction of subsurface radionuclide transport: an adaptive numerical method
International Nuclear Information System (INIS)
Neuman, S.P.
1983-01-01
Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1
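The "single-step reverse particle tracking" idea admits a compact sketch: each grid node traces backwards along the velocity to its departure point and interpolates the old field there, which keeps the scheme stable at Courant numbers above 1. The 1-D uniform-velocity illustration below is a simplification, not the paper's adaptive method:

```python
import numpy as np

def reverse_particle_tracking_step(c, u, dx, dt):
    """One semi-Lagrangian advection step ('single-step reverse particle
    tracking'): each grid node x_i looks back to its departure point
    x_i - u*dt and linearly interpolates the old concentration there."""
    n = len(c)
    x = np.arange(n) * dx
    x_dep = x - u * dt                      # departure points
    j = np.floor(x_dep / dx).astype(int)    # left neighbour index
    frac = x_dep / dx - j                   # interpolation weight in [0, 1)
    j0 = np.clip(j, 0, n - 1)               # clamp at boundaries
    j1 = np.clip(j + 1, 0, n - 1)
    return (1 - frac) * c[j0] + frac * c[j1]

# Advect a sharp front; the Courant number u*dt/dx = 2 exceeds 1
# without causing instability, unlike explicit Eulerian schemes.
n, dx, u, dt = 100, 1.0, 2.0, 1.0
c = np.where(np.arange(n) < 20, 1.0, 0.0)
c_new = reverse_particle_tracking_step(c, u, dx, dt)
```

Because the Courant number here is an exact integer, the step reduces to a pure shift and the front stays perfectly sharp; for non-integer Courant numbers the linear interpolation introduces the numerical smearing that the paper's forward-tracked particle clouds are designed to avoid near steep fronts.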
Directory of Open Access Journals (Sweden)
Raluca Hohan
2010-01-01
Full Text Available Sandwich panels are remarkable products because they can be as strong as a solid material but with less weight. The analysis required to predict the stresses and deflections in panels with flat or lightly profiled facings is that of conventional beam theory, but with the addition of shear deformation. Since the profiled sheets bring an increase in flexural stiffness, formulas for the calculation of a panel with flat and profiled facings are established. A comparison between the results of a mathematical calculation, an experimental test and a numerical modelling is provided.
Risk in the Clouds?: Security Issues Facing Government Use of Cloud Computing
Wyld, David C.
Cloud computing is poised to become one of the most important and fundamental shifts in how computing is consumed and used. Forecasts show that government will play a lead role in adopting cloud computing - for data storage, applications, and processing power, as IT executives seek to maximize their returns on limited procurement budgets in these challenging economic times. After an overview of the cloud computing concept, this article explores the security issues facing public sector use of cloud computing and weighs the risks and benefits of shifting to cloud-based models. It concludes with an analysis of the challenges that lie ahead for government use of cloud resources.
Investigation of turbulent boundary layer over forward-facing step via direct numerical simulation
International Nuclear Information System (INIS)
Hattori, Hirofumi; Nagano, Yasutaka
2010-01-01
This paper presents observations and investigations of the detailed turbulent structure of a boundary layer over a forward-facing step. The present DNSs are conducted under conditions with three Reynolds numbers based on step height, or three Reynolds numbers based on momentum thickness so as to investigate the effects of step height and inlet boundary layer thickness. DNS results show the quantitative turbulent statistics and structures of boundary layers over a forward-facing step, where pronounced counter-gradient diffusion phenomena (CDP) are especially observed on the step near the wall. Also, a quadrant analysis is conducted in which the results indicate in detail the turbulence motion around the step.
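Quadrant analysis of the kind applied in the paper can be sketched on synthetic data: the (u', v') fluctuation pairs are split into four quadrants and each quadrant's contribution to the Reynolds shear stress is tallied. The correlation coefficient below is an arbitrary stand-in for real DNS fluctuations:

```python
import numpy as np

def quadrant_analysis(u, v):
    """Split velocity fluctuations (u', v') into the four quadrants and
    return each quadrant's fractional contribution to the Reynolds shear
    stress <u'v'>. Q2 (u'<0, v'>0) = ejections, Q4 (u'>0, v'<0) = sweeps."""
    up, vp = u - u.mean(), v - v.mean()
    uv = up * vp
    total = uv.sum()
    masks = {
        "Q1": (up > 0) & (vp > 0),
        "Q2": (up < 0) & (vp > 0),
        "Q3": (up < 0) & (vp < 0),
        "Q4": (up > 0) & (vp < 0),
    }
    return {q: uv[m].sum() / total for q, m in masks.items()}

# Synthetic boundary-layer-like signal: u' and v' negatively correlated,
# so ejections (Q2) and sweeps (Q4) dominate the shear stress.
rng = np.random.default_rng(7)
n = 100_000
up = rng.standard_normal(n)
vp = -0.4 * up + np.sqrt(1 - 0.4**2) * rng.standard_normal(n)
frac = quadrant_analysis(up, vp)
```

In a real DNS the same bookkeeping is applied pointwise near the step; counter-gradient diffusion shows up as regions where the Q1/Q3 events overwhelm Q2/Q4 and the local fractions change sign.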
Numerical computation of fluid flow in different nonferrous metallurgical reactors
International Nuclear Information System (INIS)
Lackner, A.
1996-10-01
Heat, mass and fluid flow phenomena in metallurgical reactor systems such as smelting cyclones or electrolytic cells are complex and intricately linked through the governing equations of fluid flow, chemical reaction kinetics and chemical thermodynamics. The challenges of representing flow phenomena in such reactors, as well as of transferring these concepts to non-specialist modelers (e.g. plant operators and management personnel), can be met through scientific flow visualization techniques. In the first example the fluid flow of the gas phase and of concentrate particles in a smelting cyclone for copper production is calculated three-dimensionally. The effects of design parameters (length and diameter of the reactor, concentrate feeding tangentially or from the top, ..) and operating conditions are investigated. Single particle traces show how to increase particle retention time before the particles reach the liquid film flowing down the cyclone wall. Cyclone separators are widely used in the metallurgical and chemical industry for the collection of large quantities of dust. Most of the empirical models applied today for design are not valid in the high-temperature region. Therefore the collection efficiency of dust particles is predicted numerically. The particle behavior close to the wall is considered by applying a particle restitution model, which calculates individual particle restitution coefficients as functions of impact velocity and impact angle. The effects of design parameters and operating conditions are studied. Moreover, the fluid flow inside a copper refining electrolysis cell is modeled. The simulation is based on density variations in the boundary layer at the electrode surface. Density and thickness of the boundary layer are compared to measurements in a parametric study. The actual inhibitor concentration in the cell is calculated, too. Moreover, a two-phase flow approach is developed to simulate the behavior of
Directory of Open Access Journals (Sweden)
Kevin Corti
2015-05-01
Full Text Available We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (echoborgs) capable of face-to-face interlocution. We report three studies that investigated people's experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot's chance of passing but did increase interrogators' ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg neither sensed nor suspected a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human-computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.
Daniels, Mindy A.
2012-01-01
The purpose of this case study was to compare the pedagogical and affective efficiency and efficacy of creative prose fiction writing workshops taught via asynchronous computer-mediated online distance education with creative prose fiction writing workshops taught face-to-face in order to better understand their operational pedagogy and…
Numerical simulation of flows over 2D and 3D backward-facing inclined steps
Czech Academy of Sciences Publication Activity Database
Louda, Petr; Příhoda, Jaromír; Kozel, K.; Sváček, P.
2013-01-01
Roč. 43, October (2013), s. 268-276 ISSN 0142-727X R&D Projects: GA ČR GAP101/10/1230; GA ČR GA103/09/0977 Institutional support: RVO:61388998 Keywords : backward-facing step * EARSM turbulence model * one-sided diffuser Subject RIV: BK - Fluid Dynamics Impact factor: 1.777, year: 2013 http://www.sciencedirect.com/science/article/pii/S0142727X13001409
John Fairweather PhD; Tiffany Rinne PhD; Gary Steel PhD
2012-01-01
This article reports results from research on cultural models, and assesses the effects of computers on data quality by comparing open-ended questions asked in two formats—face-to-face interviewing (FTFI) and computer-assisted, self-interviewing (CASI). We expected that for our non-sensitive topic, FTFI would generate fuller and richer accounts because the interviewer could facilitate the interview process. Although the interviewer indeed facilitated these interviews, which resulted in more w...
Hakimeh Shahrokhi Mehr; Masoud Zoghi; Nader Assadi
2013-01-01
The traditional form of teaching speaking skill has been via face-to-face (FTF) interaction in the classroom setting. Today in the computer age, the on-line forum can provide a virtual environment for differential communication. The pedagogical system benefits from such technology improvement for teaching foreign languages. This quasi-experimental research aimed at comparing the effects of two instructional strategies: synchronous computer-mediated communication (SCMC) and FTF interaction. Fo...
Control rod computer code IAMCOS: general theory and numerical methods
International Nuclear Information System (INIS)
West, G.
1982-11-01
IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods [fr
Three-Dimensional Computer-Assisted Two-Layer Elastic Models of the Face.
Ueda, Koichi; Shigemura, Yuka; Otsuki, Yuki; Fuse, Asuka; Mitsuno, Daisuke
2017-11-01
To make three-dimensional computer-assisted elastic models for the face, we decided on five requirements: (1) an elastic texture like skin and subcutaneous tissue; (2) the ability to take pen marking for incisions; (3) the ability to be cut with a surgical knife; (4) the ability to keep stitches in place for a long time; and (5) a layered structure. After testing many elastic solvents, we have made realistic three-dimensional computer-assisted two-layer elastic models of the face and cleft lip from the computed tomographic and magnetic resonance imaging stereolithographic data. The surface layer is made of polyurethane and the inner layer is silicone. Using this elastic model, we taught residents and young doctors how to make several typical local flaps and to perform cheiloplasty. They could experience realistic simulated surgery and understand three-dimensional movement of the flaps.
Numerical computation of space shuttle orbiter flow field
Tannehill, John C.
1988-01-01
A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.
Benchmark Numerical Toolkits for High Performance Computing, Phase I
National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...
An efficient ERP-based brain-computer interface using random set presentation and face familiarity.
Directory of Open Access Journals (Sweden)
Seul-Ki Yeom
Full Text Available Event-related potential (ERP)-based P300 spellers are commonly used in the field of brain-computer interfaces as an alternative channel of communication for people with severe neuro-muscular diseases. This study introduces a novel P300-based brain-computer interface (BCI) stimulus paradigm using a random set presentation pattern and exploiting the effects of face familiarity. The effect of face familiarity is widely studied in the cognitive neurosciences and has recently been addressed for the purpose of BCI. In this study we compare the P300-based BCI performance of a conventional row-column (RC) paradigm with our approach, which combines a random set presentation paradigm with (non-)self-face stimuli. Our experimental results indicate stronger deflections of the ERPs in response to face stimuli, which are further enhanced when using self-face images, thereby improving P300-based spelling performance. This led to a significant reduction in the number of stimulus sequences required for correct character classification. These findings demonstrate a promising new approach for improving the speed, and thus the fluency, of BCI-enhanced communication with the widely used P300-based BCI setup.
FaceWarehouse: a 3D facial expression database for visual computing.
Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun
2014-03-01
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
Three numerical methods for the computation of the electrostatic energy
International Nuclear Information System (INIS)
Poenaru, D.N.; Galeriu, D.
1975-01-01
The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, computation time and required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.
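The accuracy check described above, comparing a numerical quadrature against a known closed form, can be sketched in miniature. The example below assembles a uniformly charged sphere shell by shell and compares the result with the analytic self-energy (3/5)kQ²/R; it is an illustrative quadrature, not the Lawrence, Hill-Wheeler, or Beringer methods themselves, and all values are in reduced units.

```python
import numpy as np

def sphere_energy_numeric(Q=1.0, R=1.0, k=1.0, n=100_000):
    """Assemble a uniformly charged sphere shell by shell:
    dE = k * q(r) * dq / r, with q(r) = Q r^3 / R^3 the charge
    already assembled and dq = 3 Q r^2 dr / R^3 the new shell."""
    r = np.linspace(0.0, R, n + 1)[1:]        # shell radii, avoiding r = 0
    dr = R / n
    integrand = k * (Q * r**3 / R**3) * (3.0 * Q * r**2 / R**3) / r
    return np.sum(integrand) * dr

E_num = sphere_energy_numeric()
E_exact = 0.6        # (3/5) k Q^2 / R in reduced units k = Q = R = 1
print(E_num, E_exact)
```

Comparing the quadrature against such closed-form cases is exactly how the accuracy of the competing methods can be ranked before applying them to general deformed shapes.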
Numerical Computational Technique for Scattering from Underwater Objects
T. Ratna Mani; Raj Kumar; Odamapally Vijay Kumar
2013-01-01
This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scattering has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric s...
Pulse cleaning flow models and numerical computation of candle ceramic filters.
Tian, Gui-shan; Ma, Zhen-ji; Zhang, Xin-yi; Xu, Ting-xiang
2002-04-01
Analytical and numerical models are developed for the reverse pulse cleaning system of candle ceramic filters. A standard turbulence model is shown to be suitable for the design computation of the reverse pulse cleaning system, based on experimental and one-dimensional computational results. The computed results can be used to guide the design of the reverse pulse cleaning system, in particular the optimum Venturi geometry. From the computed results, general conclusions and design methods are obtained.
On Numerical Stability in Large Scale Linear Algebraic Computations
Czech Academy of Sciences Publication Activity Database
Strakoš, Zdeněk; Liesen, J.
2005-01-01
Roč. 85, č. 5 (2005), s. 307-325 ISSN 0044-2267 R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear algebraic systems * eigenvalue problems * convergence * numerical stability * backward error * accuracy * Lanczos method * conjugate gradient method * GMRES method Subject RIV: BA - General Mathematics Impact factor: 0.351, year: 2005
Numerical simulation of information recovery in quantum computers
International Nuclear Information System (INIS)
Salas, P.J.; Sanz, A.L.
2002-01-01
Decoherence is the main problem to be solved before quantum computers can be built. To control decoherence, it is possible to use error correction methods, but these methods are themselves noisy quantum computation processes. In this work, we study the ability of Steane's and Shor's fault-tolerant recovery methods, as well as a modification of Steane's ancilla network, to correct errors in qubits. We test a way to correctly measure the ancillas' fidelity for these methods, and establish the possibility of carrying out effective error correction through a noisy quantum channel, even when using noisy error correction methods.
International Nuclear Information System (INIS)
El-Osery, I.A.
1981-01-01
The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The central part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can be obtained in principle as a solution of the Boltzmann transport equation. Numerical methods used for solving transport equations are discussed, with emphasis on numerical techniques based on multigroup diffusion theory; these include nodal, modal, and finite-difference techniques. The most commonly known computer codes utilizing these techniques are reviewed, and some of the main computer codes related to numerical reactor criticality and burnup calculations that have already been developed at the Reactors Department are presented.
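In the simplest one-group, one-dimensional case, the diffusion-theory techniques mentioned above reduce to a finite-difference eigenvalue problem solved by power iteration on the fission source. A minimal sketch follows; the cross sections are made-up illustrative values, not data from the report.

```python
import numpy as np

# One-group bare slab with zero-flux boundaries (illustrative values)
D, Sig_a, nuSig_f = 1.0, 0.07, 0.08   # diffusion coeff., absorption, nu*fission
L, N = 100.0, 200                      # slab width (cm), number of cells
h = L / N
n = N - 1                              # interior flux points

# Finite-difference form of  -D phi'' + Sig_a phi = (1/k) nuSig_f phi
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 * D / h**2 + Sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2

phi = np.ones(n)
k = 1.0
for _ in range(500):                   # power iteration on the fission source
    phi_new = np.linalg.solve(A, nuSig_f * phi / k)
    k *= phi_new.sum() / phi.sum()
    phi = phi_new / np.linalg.norm(phi_new)

# Analytic one-group result with geometric buckling B^2 = (pi/L)^2
k_analytic = nuSig_f / (Sig_a + D * (np.pi / L) ** 2)
print(k, k_analytic)
```

The converged multiplication factor should agree with the buckling formula to within the discretization error, which is the usual sanity check for such criticality solvers.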
Directory of Open Access Journals (Sweden)
Zhitao Zheng
2015-11-01
Full Text Available Sudden falls of large-area hard roofs in a mined area release a large amount of elastic energy, generate dynamic loads, and cause disasters such as impact ground pressure and gas outbursts. To address these problems, in this study, the sleeve fracturing method (SFM) was applied to weaken a hard roof. The numerical simulation software FLAC3D was used to develop three models based on an analysis of the SFM working mechanism. These models were applied to an analysis of the fracturing effects of various factors such as the borehole diameter, hole spacing, and sleeve pressure. Finally, the results of a simulation were validated using experiments with similar models. Our research indicated the following: (1) The crack propagation directions in the models were affected by the maximum principal stress and hole spacing. When the borehole diameter was fixed, the fracturing pressure increased with increasing hole spacing. In contrast, when the fracturing pressure was fixed, the fracturing range increased with increasing borehole diameter; (2) The most ideal fracturing effect was found at a fracturing pressure of 17.6 MPa in the model with a borehole diameter of 40 mm and hole spacing of 400 mm. The results showed that it is possible to regulate the falls of hard roofs using the SFM. This research may provide a theoretical basis for controlling hard roofs in mining.
Numerical computation of FCT equilibria by inverse equilibrium method
International Nuclear Information System (INIS)
Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki
1986-11-01
FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)
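The quasi-linearization (shooting) approach used above can be illustrated on a toy boundary value problem: y'' = -y with y(0) = 0 and y(pi/2) = 1, whose exact solution y = sin(x) has initial slope 1. This is a generic textbook example, not the FCT equilibrium system itself.

```python
import numpy as np

def shoot(s, n=1000):
    """RK4-integrate y'' = -y from x = 0 with y(0) = 0, y'(0) = s;
    return y at x = pi/2."""
    h = (np.pi / 2) / n
    state = np.array([0.0, s])                 # (y, y')
    rhs = lambda u: np.array([u[1], -u[0]])
    for _ in range(n):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * h * k1)
        k3 = rhs(state + 0.5 * h * k2)
        k4 = rhs(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[0]

# Secant iteration on the unknown initial slope s so that y(pi/2) = 1
s0, s1 = 0.5, 2.0
for _ in range(20):
    f0, f1 = shoot(s0) - 1.0, shoot(s1) - 1.0
    if abs(f1 - f0) < 1e-14:                   # converged: avoid 0/0
        break
    s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
slope = s1                                      # should approach y'(0) = 1
print(slope)
```

The real FCT problem replaces the toy ODE with the quasi-linear moment equations and adds regularity conditions at the singular point, but the shooting structure, adjusting unknown initial data until the far boundary condition is met, is the same.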
Improved methods for computing masses from numerical simulations
Energy Technology Data Exchange (ETDEWEB)
Kronfeld, A.S.
1989-11-22
An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of (q q-bar and qqq) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
NUMERICAL COMPUTATION AND PREDICTION OF ELECTRICITY CONSUMPTION IN TOBACCO INDUSTRY
Directory of Open Access Journals (Sweden)
Mirjana Laković
2017-12-01
Full Text Available Electricity is a key energy source in each country and an important condition for economic development. It is necessary to use modern methods and tools to predict energy consumption for different types of systems and weather conditions. In every industrial plant, electricity consumption presents one of the greatest operating costs. Monitoring and forecasting this parameter provide the opportunity to rationalize the use of electricity and thus significantly reduce costs. The paper proposes the prediction of energy consumption by a time-series model, which uses a set of previously collected data to predict the future load. The most commonly used linear time-series models are the AR (Autoregressive), MA (Moving Average) and ARMA (Autoregressive Moving Average) models. The AR model is used in this paper, together with the Monte Carlo simulation method, for predicting and analyzing the change in energy consumption in the considered tobacco industrial plant. One of the main parts of the AR model is a seasonal pattern that takes into account the climatic conditions for a given geographical area; this part of the model was represented via the Fourier transform to avoid excessive model complexity. As an example, numerical results are presented for tobacco production in one industrial plant. A probabilistic range of input values is used to determine the future probabilistic level of energy consumption.
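The three ingredients of the approach, a Fourier seasonal term, an AR(1) fit, and a Monte Carlo forecast of a probabilistic consumption band, can be sketched on synthetic data. All numbers below are illustrative, not plant data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily consumption: Fourier seasonal pattern plus AR(1) residual
t = np.arange(365)
season_true = 100 + 20 * np.cos(2 * np.pi * t / 365)
phi_true, sigma = 0.7, 3.0
resid = np.zeros(365)
for i in range(1, 365):
    resid[i] = phi_true * resid[i - 1] + rng.normal(0, sigma)
load = season_true + resid

# Fit the seasonal part by least squares on Fourier terms, then AR(1)
X = np.column_stack([np.ones(365), np.cos(2 * np.pi * t / 365),
                     np.sin(2 * np.pi * t / 365)])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
r = load - X @ coef
phi_hat = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])   # AR(1) coefficient
eps_std = np.std(r[1:] - phi_hat * r[:-1])       # innovation scale

# Monte Carlo forecast of the next 30 days
n_paths, horizon = 1000, 30
paths = np.zeros((n_paths, horizon))
for p in range(n_paths):
    x = r[-1]
    for d in range(horizon):
        x = phi_hat * x + rng.normal(0, eps_std)
        tf = 365 + d
        seas = coef @ np.array([1.0, np.cos(2 * np.pi * tf / 365),
                                np.sin(2 * np.pi * tf / 365)])
        paths[p, d] = seas + x
lo, hi = np.percentile(paths, [5, 95], axis=0)   # probabilistic band
```

The 5th-95th percentile band plays the role of the "probabilistic level of energy consumption" mentioned in the abstract.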
International Nuclear Information System (INIS)
Kako, T.; Watanabe, T.
1999-04-01
This is the proceedings of 'Study on Numerical Methods Related to Plasma Confinement' held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria and their stability properties are presented. There are also various talks on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. The 14 papers are indexed individually. (J.P.N.)
Energy Technology Data Exchange (ETDEWEB)
Kako, T.; Watanabe, T. [eds.
1999-04-01
This is the proceedings of 'Study on Numerical Methods Related to Plasma Confinement' held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria and their stability properties are presented. There are also various talks on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. The 14 papers are indexed individually. (J.P.N.)
Minimal features of a computer and its basic software to execute the NEPTUNIX 2 numerical step
International Nuclear Information System (INIS)
Roux, Pierre.
1982-12-01
NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous nonlinear algebro-differential equations. Its main features are: nonlinear or time-dependent parameters, implicit form, stiff systems, and dynamic change of equations leading to discontinuities in some variables. The mathematical model is thus built as an equation set F(x, x', t, l) = 0, where t is the independent variable, x' the derivative of x, and l an ''algebrized'' logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non-numerical step and a numerical step. The non-numerical step must be executed on an IBM System/370 series computer or a compatible computer. This step generates a FORTRAN-language model picture fitted for the computer carrying out the numerical step. The numerical step consists in building and running a mathematical model simulator. This execution step of NEPTUNIX 2 has been designed to be portable to many computers. The present manual describes the minimal features of such a host computer used for executing the NEPTUNIX 2 numerical step.
Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae
2012-04-01
Mobile computing devices have many limitations, such as a relatively small user interface and slow computing speed. Augmented reality applications often require face pose estimation, which can be used as an HCI and entertainment tool. For real-time head pose estimation on resource-limited mobile platforms, several constraints must be met while retaining sufficient estimation accuracy. The proposed face pose estimation method meets this objective. Experimental results on a test Android mobile device show satisfactory real-time performance and accuracy.
Numerical computing of elastic homogenized coefficients for periodic fibrous tissue
Directory of Open Access Journals (Sweden)
Roman S.
2009-06-01
Full Text Available The homogenization theory of linear elasticity is applied to a periodic array of cylindrical inclusions in a rectangular pattern extending to infinity in the inclusions' axial direction, such that the deformation of tissue along this direction is negligible. In the plane of deformation, the homogenization scheme is based on the average strain energy, whereas in the third direction it is based on the average normal stress along that direction. Namely, these average quantities have to be the same on a Repeating Unit Cell (RUC) of the heterogeneous and homogenized media when using a special form of boundary conditions formed by a periodic part and an affine part of the displacement. Infinitely many RUCs generate the considered array. The computing procedure is tested with different choices of RUC to verify that the results of the homogenization process are independent of the kind of RUC employed. Then, the dependence of the homogenized coefficients on the microstructure can be studied; for instance, a particular anisotropy and the role of the inclusion volume are investigated. In the second part of this work, mechanical traction tests are simulated. We consider two kinds of loading: applying a density of force, or imposing a displacement. We test five samples of the periodic array containing one, four, sixteen, sixty-four and one hundred RUCs. The evolution of mean stresses, strains and energy with the number of inclusions is studied. These evolutions depend on the kind of loading, but not their limits, which could be predicted by simulating a traction test of the homogenized medium.
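The idea of averaging over a repeating unit cell can be sanity-checked in the simplest one-dimensional analogue, a two-phase laminate loaded along the lamination direction, where the homogenized stiffness is the harmonic (Reuss) average. This is only a 1D stand-in for the paper's 2D elasticity scheme, with illustrative stiffness values.

```python
import numpy as np

# Two-phase periodic laminate: stiff phase E1 over volume fraction f, then E2
E1, E2, f = 10.0, 1.0, 0.3
N = 1000
x = (np.arange(N) + 0.5) / N            # element midpoints on a unit cell
E = np.where(x < f, E1, E2)             # piecewise-constant stiffness

# Solve (E u')' = 0 with u(0) = 0, u(1) = 1: the bar acts as springs in
# series, so the transmitted stress is 1 / sum of element compliances.
h = 1.0 / N
sigma = 1.0 / np.sum(h / E)
E_eff_numeric = sigma                    # average strain is 1, so sigma = E_eff

E_eff_analytic = 1.0 / (f / E1 + (1 - f) / E2)   # harmonic (Reuss) average
print(E_eff_numeric, E_eff_analytic)
```

As in the paper, the numerical answer is independent of how the unit cell is chosen (shifting the cell permutes the element compliances without changing their sum), which is the property verified there with different RUCs.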
Rodriguez, A.; Ibanescu, M.; Iannuzzi, D.; Joannopoulos, J. D.; Johnson, S.T.
2007-01-01
We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the
A virtual component method in numerical computation of cascades for isotope separation
International Nuclear Information System (INIS)
Zeng Shi; Cheng Lu
2014-01-01
The analysis, optimization, design and operation of cascades for isotope separation involve cascade computations. In analytical studies of cascades, the use of virtual components is a very useful method; for complicated cases, numerical analysis has to be employed instead. However, bound by the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here, a method for introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restricted at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of the virtual components. (authors)
Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick
2015-01-01
This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…
Santen, van R.A.; Boersma, M.A.M.
1974-01-01
The regular solution model is used to compute the surface enrichment in the (111)- and (100)-faces of silver-gold alloys. Surface enrichment by silver is predicted to increase if the surface plane becomes less saturated and decreases if one raises the temperature. The possible implications of these
Fincher, Danielle; VanderEnde, Kristin; Colbert, Kia; Houry, Debra; Smith, L Shakiyla; Yount, Kathryn M
2015-03-01
African American women in the United States report intimate partner violence (IPV) more often than the general population of women. Overall, women underreport IPV because of shame, embarrassment, fear of retribution, or low expectation of legal support. African American women may be especially unlikely to report IPV because of poverty, low social support, and past experiences of discrimination. The purpose of this article is to determine the context in which low-income African American women disclose IPV. Consenting African American women receiving Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) services in WIC clinics were randomized to complete an IPV screening (Revised Conflict Tactics Scales-Short Form) via computer-assisted self-interview (CASI) or face-to-face interview (FTFI). Women (n = 368) reported high rates of lifetime and prior-year verbal (48%, 34%), physical (12%, 7%), sexual (10%, 7%), and any (49%, 36%) IPV, as well as IPV-related injury (13%, 7%). Mode of screening, but not interviewer race, affected disclosure. Women screened via FTFI reported significantly more lifetime and prior-year negotiation (adjusted odds ratio [aOR] = 10.54, 3.97) and more prior-year verbal (aOR = 2.10), sexual (aOR = 4.31), and any (aOR = 2.02) IPV than CASI-screened women. African American women in a WIC setting disclosed IPV more often in face-to-face than computer screening, and race-matching of client and interviewer did not affect disclosure. Findings highlight the potential value of face-to-face screening to identify women at risk of IPV. Programs should weigh the costs and benefits of training staff versus using computer-based technologies to screen for IPV in WIC settings. © The Author(s) 2014.
Numerical models for computation of pollutant-dispersion in the atmosphere
International Nuclear Information System (INIS)
Leder, S.M.; Biesemann-Krueger, A.
1985-04-01
The report describes some models which are used to compute the concentration of emitted pollutants in the lower atmosphere. A dispersion model, developed at the University of Hamburg, is considered in more detail and treated with two different numerical methods. The convergence of the methods is investigated and a comparison of numerical results and dispersion experiments carried out at the Nuclear Research Center Karlsruhe is given. (orig.) [de
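Dispersion models of this kind are often built around the standard Gaussian plume solution with ground reflection. The sketch below is a generic textbook formula, not the Hamburg model itself, and all parameter values are illustrative.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration with ground reflection.
    Q: source strength, u: wind speed, H: effective stack height,
    sigma_y / sigma_z: lateral / vertical dispersion parameters."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # image source
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration for an illustrative release
c = gaussian_plume(Q=1.0, u=5.0, y=0.0, z=0.0, H=50.0, sigma_y=30.0, sigma_z=20.0)
print(c)
```

Numerical models like the one treated in the report go beyond this closed form by integrating the advection-diffusion equation directly, which is what makes the choice of numerical method and its convergence the central issue.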
International Nuclear Information System (INIS)
Chernyshenko, Dmitri; Fangohr, Hans
2015-01-01
In the finite difference method which is commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available, however at distances far from the cuboidal cell, the numerical evaluation of the analytical expression can be very inaccurate. Due to this large-distance inaccuracy numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but at distances far from the originating cell a formula based on an asymptotic expansion has to be used. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression which is best for short distances, (ii) the asymptotic expansion best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) for intermediate distances. For all three methods, we show the measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii). - Highlights: • We study the accuracy of demagnetization in finite difference micromagnetics. • We introduce a new sparse integration method to compute the tensor more accurately. • Newell, sparse integration and asymptotic method are compared for all ranges
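The trade-off between a cell-averaged kernel and its point-source asymptotics can be illustrated with a scalar stand-in for the demagnetizing tensor: a midpoint-quadrature average of 1/|R - r'| over a unit cell, compared against the far-field value 1/|R|. This is only an illustration of the near/far behaviour, not the paper's sparse-grid scheme or the actual tensor terms.

```python
import numpy as np

def cell_kernel(Rvec, a=1.0, m=8):
    """Average of 1/|R - r'| over a cube of side a centred at the origin,
    by midpoint quadrature on an m x m x m subgrid."""
    g = (np.arange(m) + 0.5) / m - 0.5          # midpoints in [-1/2, 1/2)
    X, Y, Z = np.meshgrid(a * g, a * g, a * g, indexing="ij")
    d = np.sqrt((Rvec[0] - X)**2 + (Rvec[1] - Y)**2 + (Rvec[2] - Z)**2)
    return np.mean(1.0 / d)

for dist in (2.0, 5.0, 10.0):
    point_value = 1.0 / dist                    # point-source asymptotics
    avg = cell_kernel(np.array([dist, 0.0, 0.0]))
    rel = abs(avg - point_value) / point_value
    print(f"R = {dist:4.1f}  relative deviation from asymptotics = {rel:.2e}")
```

The deviation decays rapidly with distance, which is why a point-like (asymptotic) formula suffices far from the originating cell while the averaged integral matters close to it, precisely the intermediate regime that the numerical-integration method of the paper targets.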
Numerical computation of soliton dynamics for NLS equations in a driving potential
Directory of Open Access Journals (Sweden)
Marco Caliari
2010-06-01
Full Text Available We provide numerical computations for the soliton dynamics of the nonlinear Schrodinger equation with an external potential. After computing the ground state solution r of a related elliptic equation, we show that, in the semi-classical regime, the center of mass of the solution with initial datum built upon r is driven by the solution to $\ddot{x} = -\nabla V(x)$. Finally, we provide examples and analyze the numerical errors in the two-dimensional case when V is a harmonic potential.
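The driving law x'' = -grad V(x) for the center of mass is an ordinary Hamiltonian system, so it can be integrated with a standard symplectic scheme. A sketch for the harmonic-potential case mentioned in the abstract (velocity Verlet, a generic choice rather than the paper's integrator):

```python
import numpy as np

def grad_V(x):
    """Gradient of the harmonic potential V(x) = |x|^2 / 2."""
    return x

def verlet(x0, v0, T=2.0 * np.pi, n=6000):
    """Velocity-Verlet integration of x'' = -grad V(x) up to time T."""
    dt = T / n
    x, v = np.array(x0, float), np.array(v0, float)
    a = -grad_V(x)
    for _ in range(n):
        x = x + v * dt + 0.5 * a * dt**2
        a_new = -grad_V(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# One full period of the 2D harmonic oscillator returns the centre of mass
x_T, v_T = verlet([1.0, 0.0], [0.0, 1.0])
print(x_T, v_T)
```

Comparing the PDE solution's center of mass against such an ODE trajectory is the kind of error analysis the paper performs in the two-dimensional harmonic case.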
International Nuclear Information System (INIS)
Kako, T.; Watanabe, T.
2000-06-01
This is the proceedings of 'Study on Numerical Methods Related to Plasma Confinement' held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria and their stability properties are presented. There are also various lectures on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. Separate abstracts were presented for 13 of the papers in this report. The remaining 6 were considered outside the subject scope of INIS. (J.P.N.)
Vectorization on the star computer of several numerical methods for a fluid flow problem
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
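The serial-versus-vector contrast at the heart of such studies can be reproduced in miniature with a Jacobi sweep for a Laplace problem with a "sliding wall" boundary value: an explicit double loop against a sliced, vectorized update. This is a generic illustration, not the Brailovskaya or ADI schemes from the paper.

```python
import numpy as np

def jacobi_loop(u, sweeps=50):
    """Serial-style Jacobi iteration with explicit index loops."""
    u = u.copy()
    for _ in range(sweeps):
        new = u.copy()
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                    + u[i, j-1] + u[i, j+1])
        u = new
    return u

def jacobi_vec(u, sweeps=50):
    """Vector-style Jacobi iteration: whole-array (streamed) update."""
    u = u.copy()
    for _ in range(sweeps):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                  + u[1:-1, :-2] + u[1:-1, 2:])
        u = new
    return u

u0 = np.zeros((32, 32))
u0[0, :] = 1.0                 # "sliding wall" boundary value
diff = np.abs(jacobi_loop(u0) - jacobi_vec(u0)).max()
print(diff)
```

Both variants perform the same arithmetic, so they agree to machine precision; the interesting question, as in the paper, is how the relative cost of the two formulations changes on streaming hardware.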
High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science
International Nuclear Information System (INIS)
Pop, Florin
2014-01-01
Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing makes it possible to run such simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
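Of the methods listed, plain Monte Carlo integration is the simplest to sketch. The generic example below (not tied to any particular HEP process) estimates an integral together with its characteristic n^(-1/2) statistical error:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_integral(n):
    """Plain Monte Carlo estimate of integral_0^1 exp(-x^2) dx,
    returning the estimate and its 1-sigma statistical error."""
    x = rng.uniform(0.0, 1.0, n)
    fx = np.exp(-x**2)
    return fx.mean(), fx.std(ddof=1) / np.sqrt(n)

est, err = mc_integral(100_000)
print(est, err)    # true value is (sqrt(pi)/2) * erf(1) ~= 0.746824
```

The same sample-mean-plus-error structure underlies the far larger event-generation Monte Carlos of HEP, which is exactly why they are so data-intensive at scale.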
van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten
2009-12-01
We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the Cell and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. Program summary: Program title: HONEI. Catalogue identifier: AEDW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPLv2. No. of lines in distributed program, including test data, etc.: 216 180. No. of bytes in distributed program, including test data, etc.: 1 270 140. Distribution format: tar.gz. Programming language: C++. Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3. Operating system: Linux. RAM: at least 500 MB free. Classification: 4.8, 4.3, 6.1. External routines: SSE: none; [1] for GPU, [2] for Cell backend. Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the
An approach to first principles electronic structure calculation by symbolic-numeric computation
Directory of Open Access Journals (Sweden)
Akihito Kikuchi
2013-04-01
Full Text Available There is a wide variety of electronic structure calculation that cooperates with symbolic computation, whose main purpose is to play an auxiliary (but not unimportant) role for the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power, and thus resort to the intensive use of computers, namely, symbolic computation [10-16]. Examples can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models, and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former case, when one uses a special atomic basis for a specific purpose, expressing the integrals as combinations of already known analytic functions may be very difficult. In the latter, one must rearrange a number of creation and annihilation operators into a suitable order and calculate the analytical expectation value. Usually a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for introducing symbolic computation as a forerunner of numerical computation, and their collaboration has achieved considerable success. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation: the present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.
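The symbolic-then-numeric workflow described above, derive an integral in closed form, then compile it into a fast numerical function, can be sketched with an off-the-shelf CAS. The Gaussian overlap integral below is a generic textbook example standing in for the molecular integrals mentioned in the abstract, not one of the paper's own expressions.

```python
import numpy as np
import sympy as sp

# Symbolic step: overlap of two 1s-type Gaussians exp(-a x^2) separated by R,
# evaluated in closed form by the CAS
x, a, R = sp.symbols('x a R', positive=True)
overlap = sp.integrate(sp.exp(-a * x**2) * sp.exp(-a * (x - R)**2),
                       (x, -sp.oo, sp.oo))
overlap = sp.simplify(overlap)           # -> sqrt(pi/(2a)) * exp(-a R^2 / 2)

# Numeric step: compile the symbolic result into a fast numerical function
f = sp.lambdify((a, R), overlap, 'numpy')
print(overlap, f(1.0, 0.5))
```

This reduction of a symbolic result to a "tractable and computable form" via `lambdify` is precisely the collaboration between symbolic and numerical computation that the abstract describes.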
Directory of Open Access Journals (Sweden)
M. Boumaza
2015-07-01
Full Text Available Transient convection heat transfer is of fundamental interest in many industrial and environmental situations, as well as in electronic devices and the security of energy systems. Transient fluid flow problems are among the more difficult to analyze and yet are very often encountered in modern day technology. The main objective of this research project is to carry out a theoretical and numerical analysis of transient convective heat transfer in vertical flows, when the thermal field is due to different kinds of variation, in time and space, of some boundary conditions, such as wall temperature or wall heat flux. This is achieved by the development of a mathematical model and its resolution by suitable numerical methods, as well as performing various sensitivity analyses. These objectives are achieved through a theoretical investigation of the effects of wall and fluid axial conduction, physical properties and heat capacity of the pipe wall on the transient downward mixed convection in a circular duct experiencing a sudden change in the applied heat flux on the outside surface of a central zone.
Directory of Open Access Journals (Sweden)
John Fairweather PhD
2012-07-01
Full Text Available This article reports results from research on cultural models, and assesses the effects of computers on data quality by comparing open-ended questions asked in two formats: face-to-face interviewing (FTFI) and computer-assisted self-interviewing (CASI). We expected that for our non-sensitive topic, FTFI would generate fuller and richer accounts because the interviewer could facilitate the interview process. Although the interviewer indeed facilitated these interviews, which resulted in more words in less time, the number of underlying themes found within the texts for each interview mode was the same, thus resulting in the same models of national culture and innovation being built for each mode. Our results, although based on an imperfect research design, suggest that CASI can be beneficial when using open-ended questions because CASI is easy to administer, capable of reaching a large sample more efficiently, and able to avoid the need to transcribe the recorded responses.
Saltarelli, Andrew John
2012-01-01
Previous research suggests asynchronous online computer-mediated communication (CMC) has deleterious effects on certain cooperative learning pedagogies (e.g., constructive controversy), but the processes underlying this effect and how it may be ameliorated remain unclear. This study tests whether asynchronous CMC thwarts belongingness needs…
Research in progress in applied mathematics, numerical analysis, and computer science
1990-01-01
Research conducted at the Institute in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.
Transfer of numeric ASCII data files between Apple and IBM personal computers.
Allan, R W; Bermejo, R; Houben, D
1986-01-01
Listings for programs designed to transfer numeric ASCII data files between Apple and IBM personal computers are provided with accompanying descriptions of how the software operates. Details of the hardware used are also given. The programs may be easily adapted for transferring data between other microcomputers.
CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers
Gaski, J. D.; Lewis, D. R.; Thompson, L. R.
1970-01-01
The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.
International Nuclear Information System (INIS)
Herrmann, H.J.
1989-01-01
Electrical conductivity, diffusion, and phonons have an anomalous behaviour on percolation clusters at the percolation threshold due to the fractality of these clusters. The results that have been found numerically for this anomalous behaviour are reviewed. A special-purpose computer built for this problem is described and the evaluation of the data from this machine is discussed.
Skowronski, Steven D.
This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…
CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.
Skowronski, Steven D.; Tatum, Kenneth
This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…
Stanton, Michael; And Others
1985-01-01
Three reports on the effects of high technology on the nature of work include (1) Stanton on applications and implications of computer-aided design for engineers, drafters, and architects; (2) Nardone on the outlook and training of numerical-control machine tool operators; and (3) Austin and Drake on the future of clerical occupations in automated…
Numerical computation of the transport matrix in toroidal plasma with a stochastic magnetic field
Zhu, Siqiang; Chen, Dunqiang; Dai, Zongliang; Wang, Shaojie
2018-04-01
A new numerical method, based on integrating along the full orbit of guiding centers, to compute the transport matrix is realized. The method is successfully applied to compute the phase-space diffusion tensor of passing electrons in a tokamak with a stochastic magnetic field. The new method also computes the Lagrangian correlation function, which can be used to evaluate the Lagrangian correlation time and the turbulence correlation length. For the case of the stochastic magnetic field, we find that the order of magnitude of the parallel correlation length can be estimated by qR0, as expected previously.
How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing
Decyk, V. K.; Dauger, D. E.
We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
Alrashed, Abdullah A. A. A.; Akbari, Omid Ali; Heydari, Ali; Toghraie, Davood; Zarringhalam, Majid; Shabani, Gholamreza Ahmadi Sheikh; Seifi, Ali Reza; Goodarzi, Marjan
2018-05-01
In recent years, the study of the rheological behavior and heat transfer of nanofluids in industrial equipment has become widespread among researchers, and the results have led to great advances in this field. In the present study, the laminar flow and heat transfer of a water/functionalized multi-walled carbon nanotube nanofluid have been numerically investigated at weight percentages of 0.00, 0.12 and 0.25 and Reynolds numbers of 1-150 using the finite volume method (FVM). The analyzed geometry is a two-dimensional backward-facing contracting channel, and the effects of the various weight percentages and Reynolds numbers have been studied in this geometry. The results are presented as figures of Nusselt number, friction coefficient, pressure drop, velocity contours and static temperature. They indicate that increasing the Reynolds number or the weight percentage of nanoparticles reduces the surface temperature and enhances the heat transfer coefficient. Increasing the Reynolds number raises the axial velocity and hence the momentum. As fluid momentum increases at the beginning of the channel, especially in areas close to the upper wall, the axial velocity drops and the possibility of vortex generation increases. This behavior causes a great enhancement of the velocity gradients and the pressure drop at the inlet of the channel. Also, in these areas, the Nusselt number and local friction coefficient curves show a relative decline, due to the sudden reduction of velocity. In general, increasing the mass fraction of solid nanoparticles increases the average Nusselt number, and at a Reynolds number of 150 the increase in pumping power and pressure drop does not cause any significant changes. This behavior is an important advantage of choosing a nanofluid that enhances thermal efficiency.
Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing
International Nuclear Information System (INIS)
Bailey, David
2005-01-01
In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard. If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the
Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr
2018-01-01
Validation of fire resistance for the same steel frame bearing structure is performed here using three different numerical models: a bar model prepared in the SAFIR environment and two 3D models, one developed within the framework of Autodesk Simulation Mechanical (ASM) and an alternative one developed in the environment of the Abaqus code. The results of the computer simulations are compared with the experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results obtained. The experimental and numerically determined estimates of the critical temperature specific to the considered frame, related to the limit state of bearing capacity in fire, have been verified as well.
International Nuclear Information System (INIS)
Garratt, T.J.
1989-05-01
Safety assessments of radioactive waste disposal require efficient computer models for the important processes. The present paper is based on an efficient computational technique which can be used to solve a wide variety of safety assessment models. It involves the numerical inversion of analytical solutions to the Laplace-transformed differential equations using a method proposed by Talbot. This method has been implemented on a personal computer in a user-friendly manner. The steps required to implement a particular transform and run the program are outlined. Four examples are described which illustrate the flexibility, accuracy and efficiency of the program. The improvements in computational efficiency described in this paper have application to the probabilistic safety assessment codes ESCORT and MASCOT which are currently under development. Also, it is hoped that the present work will form the basis of software for personal computers which could be used to demonstrate safety assessment procedures to a wide audience. (author)
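The inversion step described above can be sketched compactly. The following is a minimal illustration of the fixed-Talbot variant of Abate and Valkó, not the program described in the paper (the contour parameterization and the node count M are assumptions of this sketch), applied to F(s) = 1/(s+1), whose inverse is e^(-t):

```python
import cmath
import math

def talbot_invert(F, t, M=32):
    """Invert a Laplace transform F(s) at time t > 0 on the fixed Talbot
    contour s(theta) = r*theta*(cot(theta) + i), evaluated at M nodes."""
    r = 2.0 * M / (5.0 * t)                    # contour scale parameter
    total = 0.5 * cmath.exp(r * t) * F(r)      # half-weight theta = 0 term
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * (cot + 1j)             # node on the Talbot contour
        sigma = theta + (theta * cot - 1.0) * cot
        total += (cmath.exp(t * s) * F(s) * (1.0 + 1j * sigma)).real
    return (r / M) * total.real

# F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = talbot_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

For well-behaved transforms the fixed-Talbot rule converges rapidly in M; in double precision the error for this example is far below 1e-6.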
Mukhadiyev, Nurzhan
2017-05-01
Combustion at extreme conditions, such as a turbulent flame at high Karlovitz and Reynolds numbers, is still a vast and an uncertain field for researchers. Direct numerical simulation of a turbulent flame is a superior tool to unravel detailed information that is not accessible to most sophisticated state-of-the-art experiments. However, the computational cost of such simulations remains a challenge even for modern supercomputers, as the physical size, the level of turbulence intensity, and chemical complexities of the problems continue to increase. As a result, there is a strong demand for computational cost reduction methods as well as for acceleration of existing methods. The main scope of this work was the development of computational and numerical tools for high-fidelity direct numerical simulations of premixed planar flames interacting with turbulence. The first part of this work was KAUST Adaptive Reacting Flow Solver (KARFS) development. KARFS is a high-order compressible reacting flow solver using detailed chemical kinetics mechanisms; it is capable of running on various types of heterogeneous computational architectures. In this work, it was shown that KARFS is capable of running efficiently on both CPU and GPU. The second part of this work was numerical tools for direct numerical simulations of planar premixed flames, such as linear turbulence forcing and dynamic inlet control. DNS of premixed turbulent flames conducted previously injected velocity fluctuations at an inlet. Turbulence injected at the inlet decayed significantly while reaching the flame, which created a necessity to inject higher than needed fluctuations. A solution for this issue was to maintain turbulence strength on the way to the flame using turbulence forcing. Therefore, a linear turbulence forcing was implemented into KARFS to enhance turbulence intensity. Linear turbulence forcing developed previously by other groups was corrected with a net added-momentum removal mechanism to prevent mean
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science during the period October 1, 1983 through March 31, 1984 is summarized.
1989-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1988 through March 31, 1989 is summarized.
1992-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, fluid mechanics including fluid dynamics, acoustics, and combustion, aerodynamics, and computer science during the period 1 Apr. 1992 - 30 Sep. 1992 is summarized.
Re-Computation of Numerical Results Contained in NACA Report No. 496
Perry, Boyd, III
2015-01-01
An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.
PIV Method and Numerical Computation for Prediction of Liquid Steel Flow Structure in Tundish
Directory of Open Access Journals (Sweden)
Cwudziński A.
2015-04-01
Full Text Available This paper presents the results of computer simulations and laboratory experiments carried out to describe the motion of steel flow in the tundish. The facility under investigation is a single-nozzle tundish designed for casting concast slabs. For the validation of the numerical model and verification of the hydrodynamic conditions occurring in the examined tundish furniture variants, obtained from the computer simulations, a physical model of the tundish was employed. State-of-the-art vector flow field analysis measuring systems developed by LaVision were used in the laboratory tests. Computer simulations of liquid steel flow were performed using the commercial program Ansys-Fluent®. In order to obtain a complete hydrodynamic picture in the tundish furniture variants tested, the computer simulations were performed for both isothermal and non-isothermal conditions.
Directory of Open Access Journals (Sweden)
Hakimeh Shahrokhi Mehr
2013-09-01
Full Text Available The traditional form of teaching speaking skill has been via face-to-face (FTF) interaction in the classroom setting. Today, in the computer age, the on-line forum can provide a virtual environment for differential communication. The pedagogical system benefits from such technology improvements for teaching foreign languages. This quasi-experimental research aimed at comparing the effects of two instructional strategies: synchronous computer-mediated communication (SCMC) and FTF interaction. For this purpose, 60 EFL learners were selected from a private language institute as the control (n=30) and experimental (n=30) groups. A speaking test, designed by Hughes (2003), was administered as the pretest, and after a 12-session treatment the same test was administered as the posttest. The results obtained showed that participants taught via SCMC fared better than those taught via FTF interaction. Based on the findings of the current study, it is recommended that EFL teachers incorporate computer-mediated communication into their pedagogical procedures.
The Preliminary Study for Numerical Computation of 37 Rod Bundle in CANDU Reactor
International Nuclear Information System (INIS)
Jeon, Yu Mi; Bae, Jun Ho; Park, Joo Hwan
2010-01-01
A typical CANDU 6 fuel bundle consists of 37 fuel rods supported by two endplates and separated by spacer pads at various locations. In addition, the bearing pads are brazed to each outer fuel rod with the aim of reducing the contact area between the fuel bundle and the pressure tube. Although recent progress in CFD methods has provided opportunities for computing the thermal-hydraulic phenomena inside a fuel channel, it is still impossible to reflect the detailed shape of the rod bundle in the numerical computation due to the large number of computing meshes and the memory capacity required. Hence, previous studies conducted numerical computations for smooth channels without considering spacers and bearing pads. But it is well known that these components are an important factor in predicting the pressure drop and heat transfer rate in a channel. In this study, a new computational method is proposed to handle complex geometry such as a fuel rod bundle. Before applying the method to the problem of the 37-rod bundle, the validity and accuracy of the method are tested by applying it to a simple geometry. Based on the present results, the calculation for the fully shaped 37-rod bundle is scheduled for future work.
The numerical computation of seismic fragility of base-isolated Nuclear Power Plants buildings
International Nuclear Information System (INIS)
Perotti, Federico; Domaneschi, Marco; De Grandis, Silvia
2013-01-01
Highlights: • Seismic fragility of structural components in base-isolated NPPs is computed. • Dynamic integration, Response Surface, FORM and Monte Carlo Simulation are adopted. • A refined approach for modeling the non-linear behavior of isolators is proposed. • Beyond-design conditions are addressed. • The preliminary design of the isolated IRIS is the application of the procedure. -- Abstract: The research work described here is devoted to the development of a numerical procedure for the computation of seismic fragilities for equipment and structural components in Nuclear Power Plants; in particular, reference is made, in the present paper, to the case of isolated buildings. The proposed procedure for fragility computation makes use of the Response Surface Methodology to model the influence of the random variables on the dynamic response. To account for stochastic loading, the latter is computed by means of a simulation procedure. Given the Response Surface, the Monte Carlo method is used to compute the failure probability. The procedure is here applied to the preliminary design of the Nuclear Power Plant reactor building within the International Reactor Innovative and Secure international project; the building is equipped with a base isolation system based on the introduction of High Damping Rubber Bearing elements showing a markedly nonlinear mechanical behavior. The fragility analysis is performed assuming that the isolation devices become the critical elements in terms of seismic risk and that, once base isolation is introduced, the dynamic behavior of the building can be captured by low-dimensional numerical models.
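As a toy illustration of the final Monte Carlo step described above (not the Response Surface procedure itself; the limit-state function and sample count are invented for this example), the failure probability P[g(X) < 0] can be estimated by crude sampling:

```python
import random

def failure_probability(limit_state, sampler, n=100_000, seed=1):
    """Crude Monte Carlo estimate of P[g(X) < 0] for a limit-state function g."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sampler(rng)) < 0.0)
    return failures / n

# Toy limit state: capacity 1.0 against a standard-normal demand,
# so the exact failure probability is P[Z > 1], about 0.1587.
p_f = failure_probability(lambda z: 1.0 - z, lambda rng: rng.gauss(0.0, 1.0))
```

In practice the sampler would draw the random variables of the response surface and the limit state would encode the device capacity; variance-reduction techniques replace crude sampling when the failure probability is small.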
Human-computer interfaces applied to numerical solution of the Plateau problem
Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério
2015-09-01
In this work we present a code in Matlab to solve the Plateau problem numerically, and the code will include a human-computer interface. The Plateau problem has applications in areas of knowledge such as, for instance, Computer Graphics. The solution method will be the same as that of the Surface Evolver, but the difference will be a complete graphical interface with the user. This will enable us to implement other kinds of interfaces, such as ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community. In particular, its use is practically impossible for most physically challenged people.
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
Kabanov, Dmitry; Kasimov, Aslan R.
2018-01-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
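The core of the dynamic mode decomposition step is to fit a linear operator to successive snapshots of the computed solution and read the stability spectrum off its eigenvalues. Below is a deliberately minimal two-dimensional sketch of that idea; practical DMD codes use an SVD-based algorithm on many high-dimensional snapshots, and the snapshot values here are invented:

```python
def dmd_eigs_2d(x0, x1, x2):
    """Given snapshots with x1 = A x0 and x2 = A x1, recover the eigenvalues
    of the unknown 2x2 operator A: solve A [x0 x1] = [x1 x2], then apply the
    trace/determinant formula for 2x2 eigenvalues."""
    det = x0[0] * x1[1] - x0[1] * x1[0]            # det of snapshot matrix [x0 x1]
    inv = ((x1[1] / det, -x1[0] / det),
           (-x0[1] / det, x0[0] / det))            # inverse of [x0 x1]
    cols = ((x1[0], x2[0]), (x1[1], x2[1]))        # matrix [x1 x2]
    A = [[sum(cols[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = A[0][0] + A[1][1]
    dt = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = (tr * tr - 4.0 * dt) ** 0.5
    return sorted(((tr - disc) / 2.0, (tr + disc) / 2.0))

# Snapshots generated by A = [[0.9, 0.2], [0.0, 0.5]], eigenvalues 0.9 and 0.5:
eigs = dmd_eigs_2d((1.0, 1.0), (1.1, 0.5), (1.09, 0.25))
```

Eigenvalues with magnitude below one correspond to decaying modes, which is the discrete-time analogue of the stability question posed in the abstract.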
International Nuclear Information System (INIS)
Hofland, G.S.; Barton, C.C.
1990-01-01
The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig
Numerical computation of solar neutrino flux attenuated by the MSW mechanism
Kim, Jai Sam; Chae, Yoon Sang; Kim, Jung Dae
1999-07-01
We compute the survival probability of an electron neutrino in its flight through the solar core experiencing the Mikheyev-Smirnov-Wolfenstein effect, with all three neutrino species considered. We adopted a hybrid method that uses an accurate approximation formula in the non-resonance region and numerical integration in the non-adiabatic resonance region. The key to our algorithm is to use importance sampling for the neutrino creation energy and position and to find the optimum radii at which to start and stop the numerical integration. We further developed a parallel algorithm for a message-passing parallel computer. Using the idea of a job token, we have developed a dynamic load-balancing mechanism which is effective under irregular load distributions.
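The importance-sampling idea used for the creation energy and position can be illustrated generically: draw from a proposal density q that concentrates samples where they matter and reweight by p/q. This sketch is not the solar-model code; the integrand and proposal density are invented for the example:

```python
import math
import random

def importance_mean(f, sample_q, weight, n=200_000, seed=7):
    """Importance-sampling estimate of E_p[f(X)] = E_q[f(X) * p(X)/q(X)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_q(rng)
        total += f(x) * weight(x)
    return total / n

# Integral_0^inf exp(-x) cos(x) dx = 1/2.  Sampling X ~ Exp(1) makes the
# weight p/q identically 1, so the estimator just averages cos(X).
estimate = importance_mean(math.cos, lambda rng: rng.expovariate(1.0),
                           lambda x: 1.0)
```

The same pattern applies with a nontrivial weight when the physical creation distribution p differs from the proposal q actually sampled.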
International Nuclear Information System (INIS)
Damyanova, M; Sabchevski, S; Vasileva, E; Balabanova, E; Zhelyazkov, I; Dankov, P; Malinov, P
2016-01-01
Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed. (paper)
Mittra, R.; Rushdi, A.
1979-01-01
An approach for computing the geometrical optics fields reflected from a numerically specified surface is presented. The approach includes the step of deriving a specular point and begins by computing the rays reflected off the surface at the points where their coordinates, as well as the partial derivatives (or, equivalently, the direction of the normal), are numerically specified. Then, a cluster of three adjacent rays is chosen to define a 'mean ray' and the divergence factor associated with this mean ray. Finally, the amplitude, phase, and vector direction of the reflected field at a given observation point are derived by associating this point with the nearest mean ray and determining its position relative to such a ray.
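The reflected-ray step rests on the standard specular-reflection relation r = d - 2(d·n)n for an incident direction d and unit surface normal n. A minimal sketch with plain 3-tuples (no surface interpolation or divergence-factor bookkeeping):

```python
def reflect(d, n):
    """Specular reflection of ray direction d about a unit normal n:
    r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray hitting a horizontal mirror head-on bounces straight back:
r = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
```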
Carey, Kate B; Carey, Michael P; Henson, James M; Maisto, Stephen A; DeMartini, Kelly S
2011-03-01
College students who violate alcohol policies are often mandated to participate in alcohol-related interventions. This study investigated (i) whether such interventions reduced drinking beyond the sanction alone, (ii) whether a brief motivational intervention (BMI) was more efficacious than two computer-delivered interventions (CDIs) and (iii) whether intervention response differed by gender. Randomized controlled trial with four conditions [brief motivational intervention (BMI), Alcohol 101 Plus™, Alcohol Edu for Sanctions®, delayed control] and four assessments (baseline, 1, 6 and 12 months). Private residential university in the United States. Students (n = 677; 64% male) who had violated campus alcohol policies and were sanctioned to participate in a risk reduction program. Consumption (drinks per heaviest and typical week, heavy drinking frequency, peak and typical blood alcohol concentration), alcohol problems and recidivism. Piecewise latent growth models characterized short-term (1-month) and longer-term (1-12 months) change. Female but not male students reduced drinking and problems in the control condition. Males reduced drinking and problems after all interventions relative to control, but did not maintain these gains. Females reduced drinking to a greater extent after a BMI than after either CDI, and maintained reductions relative to baseline across the follow-up year. No differences in recidivism were found. Male and female students responded differently to sanctions for alcohol violations and to risk reduction interventions. BMIs optimized outcomes for both genders. Male students improved after all interventions, but female students improved less after CDIs than after BMI. Intervention effects decayed over time, especially for males. © 2010 The Authors, Addiction © 2010 Society for the Study of Addiction.
CSIR Research Space (South Africa)
Wilke, DN
2012-07-01
Full Text Available problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
Coupling artificial intelligence and numerical computation for engineering design (Invited paper)
Tong, S. S.
1986-01-01
The possibility of combining artificial intelligence (AI) systems and numerical computation methods for engineering designs is considered. Attention is given to three possible areas of application involving fan design, controlled vortex design of turbine stage blade angles, and preliminary design of turbine cascade profiles. Among the AI techniques discussed are: knowledge-based systems; intelligent search; and pattern recognition systems. The potential cost and performance advantages of an AI-based design-generation system are discussed in detail.
Invariant visual object and face recognition: neural and computational bases, and a model, VisNet
Directory of Open Access Journals (Sweden)
Edmund T eRolls
2012-06-01
Full Text Available Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in for example spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
Energy Technology Data Exchange (ETDEWEB)
Sugandhi, Ritesh, E-mail: ritesh@ipr.res.in; Swamy, Rajamannar, E-mail: rajamannar@ipr.res.in; Khirwadkar, Samir, E-mail: sameer@ipr.res.in
2016-11-15
Highlights: • An integrated approach to software development for computational processing and experimental control. • Use of open source, cross platform, robust and advanced tools for computational code development. • Prediction of optimized process parameters for critical heat flux model. • Virtual experimentation for high heat flux testing of plasma facing components. - Abstract: The high heat flux testing and characterization of the divertor and first wall components are a challenging engineering problem of a tokamak. These components are subject to steady state and transient heat loads of high magnitude. Therefore, the accurate prediction and control of the cooling parameters is crucial to prevent burnout. The prediction of the cooling parameters is based on the numerical solution of the critical heat flux (CHF) model. In a test facility for high heat flux testing of plasma facing components (PFC), the integration of computations and experimental control is an essential requirement. Experimental physics and industrial control system (EPICS) provides powerful tools for steering controls, data simulation, hardware interfacing and wider usability. Python provides an open source alternative for numerical computations and scripting. We have integrated these two open source technologies to develop graphical software for a typical high heat flux experiment. The implementation uses EPICS based tools, namely the IOC (I/O controller) server and control system studio (CSS), and Python based tools, namely Numpy, Scipy, Matplotlib and NOSE. EPICS and Python are integrated using the PyEpics library. This toolkit is currently in operation at the high heat flux test facility at the Institute for Plasma Research (IPR) and is also useful for experimental labs working in similar research areas. The paper reports the software architectural design, the implementation tools and the rationale for their selection, and their test and validation.
International Nuclear Information System (INIS)
Sugandhi, Ritesh; Swamy, Rajamannar; Khirwadkar, Samir
2016-01-01
Highlights: • An integrated approach to software development for computational processing and experimental control. • Use of open source, cross platform, robust and advanced tools for computational code development. • Prediction of optimized process parameters for critical heat flux model. • Virtual experimentation for high heat flux testing of plasma facing components. - Abstract: The high heat flux testing and characterization of the divertor and first wall components are a challenging engineering problem of a tokamak. These components are subject to steady state and transient heat loads of high magnitude. Therefore, the accurate prediction and control of the cooling parameters is crucial to prevent burnout. The prediction of the cooling parameters is based on the numerical solution of the critical heat flux (CHF) model. In a test facility for high heat flux testing of plasma facing components (PFC), the integration of computations and experimental control is an essential requirement. Experimental physics and industrial control system (EPICS) provides powerful tools for steering controls, data simulation, hardware interfacing and wider usability. Python provides an open source alternative for numerical computations and scripting. We have integrated these two open source technologies to develop graphical software for a typical high heat flux experiment. The implementation uses EPICS based tools, namely the IOC (I/O controller) server and control system studio (CSS), and Python based tools, namely Numpy, Scipy, Matplotlib and NOSE. EPICS and Python are integrated using the PyEpics library. This toolkit is currently in operation at the high heat flux test facility at the Institute for Plasma Research (IPR) and is also useful for experimental labs working in similar research areas. The paper reports the software architectural design, the implementation tools and the rationale for their selection, and their test and validation.
COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES
Directory of Open Access Journals (Sweden)
M. M. Biliaiev
2016-04-01
Full Text Available Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field, that is, the aerodynamics of air jets in the room, the presence of furniture and equipment, the placement of ventilation holes, the ventilation mode, the location of ionization sources, the transfer of ions under the effect of the electric field, and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to support express calculation of the ion concentration in premises, allowing quick sorting of possible variants and enabling an «enlarged» evaluation of air ion concentration in the premises. Methodology. A complex of numerical models to calculate the air ion regime in premises is developed. The CFD numerical model is based on the aerodynamics, electrostatics and mass transfer equations, and takes into account the effect of air flows caused by ventilation operation, diffusion, electric field effects, as well as the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computation of the air ion regime indoors allows rapid calculation of the ion concentration field, taking into account pulsed operation of the ionizer. Findings. Calculated data are obtained from which one can estimate the ion concentration anywhere in a premise with artificial air ionization. An example of calculating the negative ion concentration on the basis of the CFD numerical model in a premise undergoing reengineering transformations is given. On the basis of the developed balance model, the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in premise, which
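A balance model of the kind mentioned in the abstract can be illustrated with a generic ion budget, source minus recombination and deposition losses, integrated in time with forward Euler. The equation form, coefficient values and pulse schedule below are assumptions for illustration, not the paper's model:

```python
# Assumed balance model: dn/dt = q(t) - alpha*n^2 - beta*n, where q is the
# (pulsed) ionizer source rate, alpha an ion-ion recombination coefficient and
# beta a loss rate to dust and surfaces. Forward-Euler time stepping.

def simulate(q_on, period, duty, alpha, beta, t_end, dt):
    n, t, history = 0.0, 0.0, []
    while t < t_end:
        # ionizer is on for the first duty*period seconds of each cycle
        q = q_on if (t % period) < duty * period else 0.0
        n += dt * (q - alpha * n * n - beta * n)
        history.append(n)
        t += dt
    return history

# 10 minutes of a 60 s on/off cycle at 1 Hz-scale resolution (invented values).
hist = simulate(q_on=1.0e4, period=60.0, duty=0.5, alpha=1.6e-6,
                beta=0.01, t_end=600.0, dt=0.1)
```

The pulsed source makes the concentration saw-tooth between growth toward the equilibrium sqrt(q/alpha) and decay during the off phase.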
International Nuclear Information System (INIS)
Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide
2007-01-01
We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.
The Preliminary Study for Numerical Computation of 37 Rod Bundle in CANDU Reactor
International Nuclear Information System (INIS)
Jeon, Yu Mi; Park, Joo Hwan
2010-09-01
A typical CANDU 6 fuel bundle consists of 37 fuel rods supported by two endplates and separated by spacer pads at various locations. In addition, the bearing pads are brazed to each outer fuel rod with the aim of reducing the contact area between the fuel bundle and the pressure tube. Although the recent progress of CFD methods has provided opportunities for computing the thermal-hydraulic phenomena inside a fuel channel, it is as yet impossible to perform numerical computations on the detailed shape of the rod bundle, owing to limits on computing mesh and memory capacity. Hence, previous studies conducted numerical computations for smooth channels without considering spacers and bearing pads. However, it is well known that these components are an important factor in predicting the pressure drop and heat transfer rate in a channel. In this study, a new computational method is proposed to solve complex geometry such as a fuel rod bundle. Before applying the method to the 37-rod bundle problem, its validity and accuracy are tested by applying it to a simple geometry. The split channel method has been proposed with the aim of computing the fully shaped CANDU fuel channel with detailed components. The validity was tested by applying the method to the single channel problem. The average temperature has similar values for the two methods considered, while the local temperature shows a slight difference owing to the effect of conduction heat transfer in the solid region of a rod. Based on the present results, the calculation for the fully shaped 37-rod bundle is scheduled for future work
Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.
Rolls, Edmund T
2012-01-01
Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
International Nuclear Information System (INIS)
Walsh, Jonathan A.; Palmer, Todd S.; Urbatsch, Todd J.
2015-01-01
Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.
International Nuclear Information System (INIS)
Kumagai, H.
1987-01-01
The spatial correlations in intense ionospheric scintillations were analyzed by comparing numerical results with observational ones. The observational results were obtained by spaced-receiver scintillation measurements of VHF satellite radiowave. The numerical computation was made by using the fourth-order moment equation with fairly realistic ionospheric irregularity models, in which power-law irregularities with spectral index 4, both thin and thick slabs, and both isotropic and anisotropic irregularities, were considered. Evolution of the S(4) index and the transverse correlation function was computed. The numerical result that the transverse correlation distance decreases with the increase in S(4) was consistent with that obtained in the observation, suggesting that multiple scattering plays an important role in the intense scintillations observed. The anisotropy of irregularities proved to act as if the density fluctuation increased. This effect, as well as the effect of slab thickness, was evaluated by the total phase fluctuations that the radiowave experienced in the slab. On the basis of the comparison, the irregularity height and electron-density fluctuation which is necessary to produce a particular strength of scintillation were estimated. 30 references
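The S(4) index used above is the normalized standard deviation of the received intensity, S4 = sqrt((⟨I²⟩ − ⟨I⟩²)/⟨I⟩²). A short self-contained sketch of this standard definition (not the paper's code):

```python
# S4 scintillation index from a list of intensity samples:
# S4 = sqrt((<I^2> - <I>^2) / <I>^2), i.e. std(I) / mean(I).

def s4_index(intensities):
    m = sum(intensities) / len(intensities)          # <I>
    m2 = sum(x * x for x in intensities) / len(intensities)  # <I^2>
    # max() guards against a tiny negative variance from roundoff
    return (max(m2 - m * m, 0.0) / (m * m)) ** 0.5

s4_flat = s4_index([1.0] * 8)                 # constant signal: no scintillation
s4_fluct = s4_index([0.5, 1.5, 0.5, 1.5])     # 50% intensity fluctuation
```

Values of S4 approaching 1 indicate the strong-scattering regime discussed in the abstract.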
Robust Face Recognition by Computing Distances from Multiple Histograms of Oriented Gradients
Karaaba, Mahir; Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco
2015-01-01
The Single Sample per Person Problem is a challenging problem for face recognition algorithms. Patch-based methods have obtained some promising results for this problem. In this paper, we propose a new face recognition algorithm that is based on a combination of different histograms of oriented
Czech Academy of Sciences Publication Activity Database
Příhoda, Jaromír; Zubík, P.; Šulc, J.; Sedlář, M.
2012-01-01
Vol. 14, No. 4a (2012), pp. 6-12. ISSN 1335-4205. R&D Projects: GA ČR GA103/09/0977. Institutional support: RVO:61388998. Keywords: open channel flow * inclined backward-facing step. Subject RIV: BK - Fluid Dynamics
Energy Technology Data Exchange (ETDEWEB)
Lu, Tianfeng [Univ. of Connecticut, Storrs, CT (United States)
2017-02-16
The goal of the proposed research is to create computational flame diagnostics (CFLD): rigorous numerical algorithms for the systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and for understanding the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and of a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principles simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable to 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction, and the rate-controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.
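The chemical explosive mode idea behind CEMA can be illustrated on a toy 2 x 2 chemical Jacobian (the matrix entries are invented for illustration, not a real mechanism): a sample is flagged as explosive when the Jacobian has an eigenvalue with positive real part.

```python
# Toy sketch of the explosive-mode criterion used by CEMA: examine the
# eigenvalues of the chemical Jacobian and flag positive real parts.
import math

def eigvals_2x2(a, b, c, d):
    """Real parts of the eigenvalues of [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr / 4.0 - det
    if disc >= 0.0:                      # two real eigenvalues
        s = math.sqrt(disc)
        return (tr / 2.0 + s, tr / 2.0 - s)
    return (tr / 2.0, tr / 2.0)          # complex pair: common real part

def is_explosive(jac):
    (a, b), (c, d) = jac
    return max(eigvals_2x2(a, b, c, d)) > 0.0

pre_ignition = [[0.5, 0.1], [0.2, -1.0]]   # one positive eigenvalue: explosive
burnt        = [[-2.0, 0.3], [0.1, -0.5]]  # all eigenvalues negative: inert
```

In the actual analysis the eigenvector of the explosive mode additionally identifies which species and reactions control it.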
Flow field measurements using LDA and numerical computation for rod bundle of reactor fuel assembly
International Nuclear Information System (INIS)
Hu Jun; Zou Zunyu
1995-02-01
Local mean velocity and turbulence intensity measurements were obtained with a DANTEC 55 X two-dimensional Laser Doppler Anemometry (LDA) system for the rod bundle of a reactor fuel assembly test model, a 4 x 4 rod bundle. The data were obtained at different experimental cross-sections both upstream and downstream of the model support plate. Measurements were performed at test Reynolds numbers of 1.8 x 10^4 to 3.6 x 10^4. The results describe the local and gross effects of the support plate on the upstream and downstream flow distributions. A numerical computation is also given; the experimental results are in good agreement with the numerical ones and with those in the references. Finally, a few suggestions are proposed on how to use the LDA system well. (11 figs.)
da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.
2018-04-01
A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.
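The two-point boundary value strategy described above, guess the unknown initial conditions, integrate forward, and correct with Newton-Raphson on the terminal miss, can be sketched on a toy linear problem (not the orbital transfer equations; the BVP and step sizes below are illustrative assumptions):

```python
# Single-shooting sketch: solve u'' = -u with u(0) = 0 and a prescribed u(1)
# by Newton-Raphson iteration on the unknown initial slope u'(0).
import math

def integrate(slope, n=1000):
    """Forward-Euler integration of u'' = -u on [0, 1], u(0)=0, u'(0)=slope."""
    u, v, dt = 0.0, slope, 1.0 / n
    for _ in range(n):
        u, v = u + dt * v, v - dt * u
    return u

def shoot(target, guess=1.0, tol=1e-10):
    s = guess
    for _ in range(50):
        miss = integrate(s) - target          # terminal boundary error
        if abs(miss) < tol:
            break
        d = (integrate(s + 1e-6) - integrate(s)) / 1e-6  # numerical Jacobian
        s -= miss / d                          # Newton-Raphson correction
    return s

s = shoot(target=math.sin(1.0))   # exact solution u = sin(t), so slope ~ 1
```

The paper's two-stage algorithm plays the same role, with an infinitesimal canonical transformation supplying the correction step.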
SIVEH: Numerical Computing Simulation of Wireless Energy-Harvesting Sensor Nodes
Directory of Open Access Journals (Sweden)
Pedro Yuste
2013-09-01
Full Text Available The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I–V for EH), based on I–V hardware tracking. I–V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time—days, weeks, months or years—using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with dynamic adjustment of the sleep time rate, while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach.
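I–V-based energy bookkeeping of the kind the abstract describes can be sketched as a sample-by-sample charge balance in which each component's current draw depends on the operating voltage. The node current model and all parameter values below are invented for illustration and are not SIVEH's actual API:

```python
# Minimal sketch of I-V-based energy bookkeeping for a sensor node: stored
# charge is updated from harvested current minus the voltage-dependent draw.

def node_current(v):
    """Assumed node draw (A): a resistive part plus a constant regulator draw."""
    return v / 2000.0 + 0.001

def simulate_charge(q0, v, harvest_current, dt):
    """Integrate stored charge (coulombs) over harvested-current samples."""
    q = q0
    for i_h in harvest_current:
        q = max(0.0, q + dt * (i_h - node_current(v)))  # charge cannot go negative
    return q

# One hour at 1 s resolution with a steady 5 mA harvested at 3.0 V.
q = simulate_charge(q0=10.0, v=3.0, harvest_current=[0.005] * 3600, dt=1.0)
```

Replacing the constant harvest list with a real solar radiation curve gives the long-horizon simulations mentioned above.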
International Nuclear Information System (INIS)
Colombo, A.G.; Jaarsma, R.J.
1982-01-01
This report describes a conversational computer program which, via Bayes' theorem, numerically combines the prior distribution of a parameter with a likelihood function. Any type of prior and likelihood function can be considered. The present version of the program includes six types of prior and employs the binomial likelihood. As input the program requires the law and parameters of the prior distribution and the sample data. As output it gives the posterior distribution as a histogram. The use of the program for estimating the constant failure rate of an item is briefly described
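The numerical core of such a program, discretize the prior on a grid, multiply pointwise by the binomial likelihood, and renormalize, can be sketched as follows (a generic reimplementation of the idea, not the original conversational program):

```python
# Numerical Bayes update with a binomial likelihood: the prior is a histogram
# of (failure-rate, weight) bins; the posterior is the renormalized product
# of prior weight and binomial likelihood in each bin.
from math import comb

def posterior_histogram(prior, k, n):
    """prior: list of (p, weight) bins; data: k failures in n trials."""
    post = [w * comb(n, k) * p**k * (1.0 - p)**(n - k) for p, w in prior]
    total = sum(post)
    return [w / total for w in post]

# Uniform prior on 50 bins over (0, 1); observe 2 failures in 10 trials.
grid = [((i + 0.5) / 50.0, 1.0 / 50.0) for i in range(50)]
post = posterior_histogram(grid, k=2, n=10)
```

Swapping the uniform prior for another of the six supported prior laws only changes the weights on the grid.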
Study on Production Management in Programming of Computer Numerical Control Machines
Directory of Open Access Journals (Sweden)
Gheorghe Popovici
2014-12-01
Full Text Available The paper presents the results of a study regarding the need for technological knowledge in programming machine tools with computer-aided command. Engineering is the science of making skilled things. That is why, in the "factory of the future", programming engineering will have to realise part processing on MU-CNCs (Computer Numerical Control Machines) in the optimum economic variant. There is no "recipe" when it comes to technologies. In order to select the correct variant from among several technical variants, 10 technological requirements are put forward for the engineer to take into account in MU-CNC programming. This is the first argued synthesis of the need for technological knowledge in MU-CNC programming.
Computational reduction techniques for numerical vibro-acoustic analysis of hearing aids
DEFF Research Database (Denmark)
Creixell Mediante, Ester
In this thesis, several challenges encountered in the process of modelling and optimizing hearing aids are addressed. Firstly, a strategy for modelling the contacts between plastic parts for harmonic analysis is developed. Irregularities in the contact surfaces, inherent to the manufacturing process of the parts… Secondly, the applicability of Model Order Reduction (MOR) techniques to lower the computational complexity of hearing aid vibro-acoustic models is studied. For fine frequency response calculation and optimization, which require solving the numerical model repeatedly, a computational challenge is encountered due to the large number of Degrees of Freedom (DOFs) needed to represent the complexity of the hearing aid system accurately. In this context, several MOR techniques are discussed, and an adaptive reduction method for vibro-acoustic optimization problems is developed as a main contribution. Lastly…
Use of computational methods for substitution and numerical dosimetry of real bones
International Nuclear Information System (INIS)
Silva, I.C.S.; Gonzalez, K.M.L.; Barbosa, A.J.A.; Lucindo Junior, C.R.; Vieira, J.W.; Lima, F.R.A.
2017-01-01
Estimating the dose that ionizing radiation deposits in the soft tissues of the skeleton within the cavities of the trabecular bones represents one of the greatest difficulties faced by numerical dosimetry. The Numerical Dosimetry Group (GDN/CNPq) of Recife-PE, Brazil, has used a method based on micro-CT images. The problem with the implementation of micro-CT is the difficulty in obtaining samples of real bones (OR). The objective of this work was to evaluate the sample of a virtual block of trabecular bone, through the nonparametric method based on voxel frequencies (VF), and samples of the climbing plant Luffa aegyptica, whose dry fruit is known as vegetable sponge (BV), as substitutes for OR samples. For this, a theoretical study of the two techniques developed by the GDN was made. The study showed, for both techniques, after the dosimetric evaluations, that the real sample can be replaced by the synthetic samples, since they yielded dose estimates close to the real one
Wang, Xiao-Gang; Carrington, Tucker
2018-02-01
We compute numerically exact rovibrational levels of water dimer, with 12 vibrational coordinates, on the accurate CCpol-8sf ab initio flexible monomer potential energy surface [C. Leforestier et al., J. Chem. Phys. 137, 014305 (2012)]. It does not have a sum-of-products or multimode form and therefore quadrature in some form must be used. To do the calculation, it is necessary to use an efficient basis set and to develop computational tools, for evaluating the matrix-vector products required to calculate the spectrum, that obviate the need to store the potential on a 12D quadrature grid. The basis functions we use are products of monomer vibrational wavefunctions and standard rigid-monomer basis functions (which involve products of three Wigner functions). Potential matrix-vector products are evaluated using the F matrix idea previously used to compute rovibrational levels of 5-atom and 6-atom molecules. When the coupling between inter- and intra-monomer coordinates is weak, this crude adiabatic type basis is efficient (only a few monomer vibrational wavefunctions are necessary), although the calculation of matrix elements is straightforward. It is much easier to use than an adiabatic basis. The product structure of the basis is compatible with the product structure of the kinetic energy operator and this facilitates computation of matrix-vector products. Compared with the results obtained using a [6 + 6]D adiabatic approach, we find good agreement for the inter-molecular levels and larger differences for the intra-molecular water bend levels.
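The benefit of a product-structured basis for matrix-vector products can be illustrated with the generic Kronecker identity (A ⊗ B)v = vec(A V Bᵀ), which applies the operator without ever forming the large matrix. This is a simplified stand-in for the paper's F-matrix algorithm, not the algorithm itself:

```python
# Apply (A kron B) to a vector using only the small factors: reshape v into an
# na x nb matrix V, compute A @ V @ B^T, and flatten back (row-major layout).

def matvec(mat, vec):
    return [sum(m * x for m, x in zip(row, vec)) for row in mat]

def kron_matvec(A, B, v):
    na, nb = len(A), len(B)
    V = [v[i * nb:(i + 1) * nb] for i in range(na)]   # reshape v -> na x nb
    BV = [matvec(B, row) for row in V]                # rows times B^T
    out = []
    for i in range(na):
        out.extend(sum(A[i][k] * BV[k][j] for k in range(na)) for j in range(nb))
    return out

A = [[1.0, 2.0], [0.0, 1.0]]
B = [[3.0, 0.0], [1.0, 1.0]]
v = [1.0, 2.0, 3.0, 4.0]
```

For factors of size n, this costs O(n³) instead of the O(n⁴) of the assembled n² x n² matrix, which is the same structural saving the product basis exploits in 12D.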
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods; the simulation results are also compared to examine the differences between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is assessed relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
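The Jacobi/Gauss-Seidel comparison described above can be sketched on a small 2D analogue (for brevity; the paper treats the 3D problem), with an assumed unit source term and homogeneous Dirichlet boundaries:

```python
# FDM-discretized Poisson problem -u'' = f on the unit square, solved with
# Jacobi (reads only last sweep's values) or Gauss-Seidel (reads in place).

def solve(n, method, tol=1e-8, max_iter=100000):
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]   # includes boundary zeros
    for it in range(1, max_iter + 1):
        # Jacobi reads a frozen copy; Gauss-Seidel reads the updating array
        src = [row[:] for row in u] if method == "jacobi" else u
        diff = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                new = (src[i - 1][j] + src[i + 1][j] + src[i][j - 1]
                       + src[i][j + 1] + h * h) / 4.0   # unit source f = 1
                diff = max(diff, abs(new - u[i][j]))
                u[i][j] = new
        if diff < tol:
            return it
    return max_iter

jac = solve(8, "jacobi")
gs = solve(8, "gauss-seidel")
```

On this small grid Gauss-Seidel needs roughly half as many sweeps as Jacobi, but the Jacobi sweep is the one whose cell updates are independent and therefore trivially parallelizable, which is the trade-off studied in the paper.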
On the potential of computational methods and numerical simulation in ice mechanics
International Nuclear Information System (INIS)
Bergan, Paal G; Cammaert, Gus; Skeie, Geir; Tharigopula, Venkatapathi
2010-01-01
This paper deals with the challenge of developing better methods and tools for analysing interaction between sea ice and structures and, in particular, to be able to calculate ice loads on these structures. Ice loads have traditionally been estimated using empirical data and 'engineering judgment'. However, it is believed that computational mechanics and advanced computer simulations of ice-structure interaction can play an important role in developing safer and more efficient structures, especially for irregular structural configurations. The paper explains the complexity of ice as a material in computational mechanics terms. Some key words here are large displacements and deformations, multi-body contact mechanics, instabilities, multi-phase materials, inelasticity, time dependency and creep, thermal effects, fracture and crushing, and multi-scale effects. The paper points towards the use of advanced methods like ALE formulations, mesh-less methods, particle methods, XFEM, and multi-domain formulations in order to deal with these challenges. Some examples involving numerical simulation of interaction and loads between level sea ice and offshore structures are presented. It is concluded that computational mechanics may prove to become a very useful tool for analysing structures in ice; however, much research is still needed to achieve satisfactory reliability and versatility of these methods.
Achieving high performance in numerical computations on RISC workstations and parallel systems
Energy Technology Data Exchange (ETDEWEB)
Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)
1997-08-20
The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up your program, often by factors of 10--100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels, and the effort involved is therefore also acceptable.
International Nuclear Information System (INIS)
Katsaounis, T D
2005-01-01
The scope of this book is to present well-known simple and advanced numerical methods for solving partial differential equations (PDEs) and to show how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required of the reader, as is a basic knowledge of the finite element method and its implementation in one and two space dimensions. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should at least be familiar with an object-oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, prior knowledge or use of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve them, and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well-known numerical methods for solving the basic types of PDEs. Programming techniques for both serial and parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. In summary, the book focuses on the computational and implementation issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack, and the reader should have prior experience with the particular software in order to take full advantage of the book. Overall
Computational domain discretization in numerical analysis of flow within granular materials
Sosnowski, Marcin
2018-06-01
The discretization of the computational domain is a crucial step in Computational Fluid Dynamics (CFD) because it influences not only the numerical stability of the analysed model but also the agreement of the obtained results with real data. Modelling flow in packed beds of granular materials is a very challenging task in terms of discretization due to the narrow spaces between spherical granules contacting tangentially at a single point. The standard approach to this issue results in a low-quality mesh and, in consequence, unreliable results. The common remedy is therefore to reduce the diameter of the modelled granules in order to eliminate the single-point contact between individual granules. The drawback of this method is that it distorts, among other things, the flow and the contact heat resistance. An innovative method is therefore proposed in the paper: the single-point contact is extended to a cylinder-shaped contact volume. This approach eliminates the low-quality mesh elements while introducing only slight distortion of the flow and of the contact heat transfer. The analysis of numerous test cases proves the great potential of the proposed method of meshing packed beds of granular materials.
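The cylinder-bridge idea can be sketched geometrically. The function below (hypothetical names, equal-sphere assumption; not the paper's actual procedure) computes where a bridging cylinder of chosen radius meets two granules of radius R that touch at a point:

```python
import math

def contact_bridge(sphere_radius, bridge_radius):
    """For two equal spheres touching at a point, replace the single-point
    contact with a cylinder of radius bridge_radius whose axis joins the
    sphere centres. Returns (axial_offset, half_length): the cylinder meets
    each sphere at axial_offset from that sphere's centre, and extends
    half_length from the contact point toward each centre."""
    if not 0.0 < bridge_radius < sphere_radius:
        raise ValueError("bridge radius must lie strictly between 0 and the sphere radius")
    axial_offset = math.sqrt(sphere_radius**2 - bridge_radius**2)
    half_length = sphere_radius - axial_offset
    return axial_offset, half_length

# A small bridge (5% of the granule radius) removes only a sliver of each sphere,
# which is why the distortion of flow and contact heat transfer stays slight.
offset, half_len = contact_bridge(1.0, 0.05)
```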
On a numerical strategy to compute gravity currents of non-Newtonian fluids
International Nuclear Information System (INIS)
Vola, D.; Babik, F.; Latche, J.-C.
2004-01-01
This paper is devoted to the presentation of a numerical scheme for the simulation of gravity currents of non-Newtonian fluids. The two-dimensional computational grid is fixed, and the free surface is described as a polygonal interface independent of the grid and advanced in time by a Lagrangian technique. The Navier-Stokes equations are semi-discretized in time by the characteristic-Galerkin method, which finally leads to a generalized Stokes problem posed on a physical domain, bounded by the free surface, that covers only a part of the computational grid. To this purpose, we implement a Galerkin technique with a particular approximation space, defined as the restriction to the fluid domain of functions of a finite element space. The decomposition-coordination method makes it possible to handle, without any regularization, a variety of non-linear and possibly non-differentiable constitutive laws. Besides more analytical tests, we revisit with this numerical method some gravity-current simulations from the literature, until now investigated only within the simplified thin-flow approximation framework.
WATERLOOP V2/64: A highly parallel machine for numerical computation
Ostlund, Neil S.
1985-07-01
Current technological trends suggest that the high-performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple-processor architecture which attempts to solve the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
Isna Nur Hikmah; Usep Kustiawan
2016-01-01
The research's purpose was to analyze the effect of pictorial numeric card media on the improvement of the summation computation ability of a student with intellectual disability in grade IV of SDLB. The collected data were analyzed with an experimental technique and a single-subject research A-B design. The results showed that, after analysis, the overlap percentage between conditions was 0%. Thus, it could be concluded that there was an effect of pictorial numeric card media on summation computation a...
Katsaounis, T. D.
2005-02-01
equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipeflow and a two-phase porous media flow. Stochastic PDEs are the topic of Chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details needed to glue the programming environment of Diffpack to visualization packages through Python and, in general, to take advantage of the Python interfaces. Chapter 9 shows how to use numerical experiments to measure the performance of various PDE solvers. The authors have gathered a rather impressive list of 14 PDE solvers. Solvers for problems like Poisson, Navier-Stokes, elasticity and two-phase flows, and methods such as finite difference, finite element, multigrid, and gradient-type methods, are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction to the biology of the problem, the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. Chapters 11 and 12 are closely related; in fact, they could have been combined into a single chapter.
Chapter 11 introduces several mathematical
Computer vision and face recognition
Ballester, Felipe
2010-01-01
Computer vision is a rapidly growing field, partly because of affordable hardware (cameras, processing power) and partly because vision algorithms are starting to mature. The field started with the motivation to study how computers process images and how to apply this knowledge to develop useful programs. The purposes of this study were to give valuable knowledge to those who are interested in computer vision, and to implement a facial recognition application using the OpenCV librar...
Gonzalez-Vega, Laureano
1999-01-01
Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)
Kaltenbacher, Manfred
2015-01-01
Like the previous editions, the third edition of this book combines detailed physical modeling of mechatronic systems with their precise numerical simulation using the Finite Element (FE) method. The basic chapter on the FE method is enhanced: it now also provides a description of higher-order finite elements (both nodal and edge finite elements) and a detailed discussion of non-conforming mesh techniques. The author enhances and improves many discussions of principles and methods. In particular, more emphasis is put on the description of single fields by adding the flow field. Corresponding to these fields, the book is augmented with a new chapter about coupled flow-structural mechanical systems. The discussion of computational aeroacoustics is extended towards perturbation approaches, which allow a decomposition of flow and acoustic quantities within the flow region. Last but not least, the applications are updated and restructured so that the book meets mode...
Directory of Open Access Journals (Sweden)
Carlos Augusto do N. Oliveira
2013-01-01
The development of shape memory actuators has enabled noteworthy applications in mechanical engineering, robotics, the aerospace and oil industries, and medicine. These applications have targeted miniaturization and the full use of available space. This article analyses a Ti-Ni shape memory actuator used as part of a flow control system. A Ti-Ni spring actuator was subjected to thermomechanical training, and parameters such as transformation temperature, thermal hysteresis and shape memory effect performance were investigated. These parameters were important for understanding the behavior of the actuator during the martensitic phase transformation in the heating and cooling cycles it undergoes in service. Multiple regression was used as a computational tool for analysing the data in order to simulate and predict the results for stresses and cycles for which experimental data were not available. The results obtained from the training cycles enable the actuators to be characterized and the numerical simulation to be validated.
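The regression step can be sketched with an ordinary least-squares line fit; the data and variable names below are purely illustrative, not the authors' measurements or their actual (multiple-variable) model:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x via the closed-form normal equations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx          # slope
    a = mean_y - b * mean_x  # intercept
    return a, b

# Hypothetical training-cycle counts vs. a measured quantity
# (chosen exactly linear here so the fit is easy to check by eye).
cycles = [10, 20, 30, 40]
stress = [105.0, 110.0, 115.0, 120.0]
a, b = fit_line(cycles, stress)
```

With more than one predictor, the same normal-equations idea generalizes to the multiple regression the article uses.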
Linge, Svein
2016-01-01
This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.
Programming for computations Python : a gentle introduction to numerical simulations with Python
Linge, Svein
2016-01-01
This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.
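In the spirit of the book's shortest-path-to-working-programs approach, here is a minimal numerical method of the kind it teaches (a forward-Euler ODE solver; this particular snippet is a sketch, not taken from the book):

```python
def forward_euler(f, y0, t0, t1, n_steps):
    """Integrate y' = f(t, y) from t0 to t1 with n_steps forward-Euler steps."""
    dt = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + dt * f(t, y)
        t = t + dt
    return y

# Exponential decay y' = -y, y(0) = 1; the exact solution at t = 1 is exp(-1).
approx = forward_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
```

The same structure (generic function, clean interface, easy to test automatically) reflects the emphasis on clean design and verification that the abstract describes.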
EVOLVE : a Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II
Coello, Carlos; Tantar, Alexandru-Adrian; Tantar, Emilia; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; EVOLVE 2012
2013-01-01
This book comprises a selection of papers from EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set-oriented numerics and evolutionary computing, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE is intended to unify theory-inspired methods and cutting-edge techniques ensuring performance guarantee factors. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge in which theoretical advancements may echo in different domains. In summary, EVOLVE focuses on challenging aspects arising at the passage from theory to new paradigms, and aims to provide a unified view while raising questions related to reliability, performance guarantees and modeling. The papers of EVOLVE 2012 make a contribution to this goal.
Emerging opportunities in enterprise integration with open architecture computer numerical controls
Hudson, Christopher A.
1997-01-01
The shift to open-architecture machine tool computer numerical controls is providing new opportunities for metal-working manufacturers to streamline the entire 'art to part' process. Production cycle times, accuracy, consistency, predictability and process reliability are just some of the factors that can be improved, leading to better manufactured products at lower cost. Open-architecture controllers allow manufacturers to apply general-purpose software and hardware tools where previous approaches relied on proprietary and unique hardware and software. This includes DNC, SCADA, CAD, and CAM, where the increasing use of general-purpose components is leading to lower-cost systems that are also more reliable and robust than the past proprietary approaches. In addition, a number of new opportunities exist which in the past were impractical due to cost or performance constraints.
Directory of Open Access Journals (Sweden)
He Kongde
2015-01-01
A computational model and numerical simulation of a submerged mooring monitoring platform were formulated to study its dynamic response to flow forces. The model is based on Hopkinson impact load theory, takes into account the catenary effect of the mooring cable, and corrects the difference between the cable tension and its tangential force component through an equivalent modulus of elasticity. The equations were solved using hydraulics theory and the structural mechanics of ocean engineering, and the response of the buoy to flow forces was studied. The validity of the model was checked and the results were in good agreement. They show that the buoy undergoes sizeable heave and sway displacements; the sway displacement stabilizes quickly, while the vortex-induced action of the flow causes the heave displacement to oscillate.
Smolinski, Tomasz G.
2010-01-01
Computer literacy plays a critical role in today's life sciences research. Without the ability to use computers to efficiently manipulate and analyze large amounts of data resulting from biological experiments and simulations, many of the pressing questions in the life sciences could not be answered. Today's undergraduates, despite the ubiquity of…
Pisharady, Pramod Kumar; Poh, Loh Ai
2014-01-01
This book presents a collection of computational intelligence algorithms that addresses issues in visual pattern recognition such as high computational complexity, abundance of pattern features, sensitivity to size and shape variations and poor performance against complex backgrounds. The book has 3 parts. Part 1 describes various research issues in the field with a survey of the related literature. Part 2 presents computational intelligence based algorithms for feature selection and classification. The algorithms are discriminative and fast. The main application area considered is hand posture recognition. The book also discusses utility of these algorithms in other visual as well as non-visual pattern recognition tasks including face recognition, general object recognition and cancer / tumor classification. Part 3 presents biologically inspired algorithms for feature extraction. The visual cortex model based features discussed have invariance with respect to appearance and size of the hand, and provide good...
HYDRA-II: A hydrothermal analysis computer code: Volume 1, Equations and numerics
International Nuclear Information System (INIS)
McCann, R.A.
1987-04-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. This volume, Volume I - Equations and Numerics, describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. The final volume, Volume III - Verification/Validation Assessments, presents results of numerical simulations of single- and multiassembly storage systems and comparisons with experimental data. 4 refs
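As a toy analogue of the kind of finite-difference energy equation HYDRA-II solves (this is not the code's actual discretization, just a one-dimensional conduction sketch with the usual explicit stability limit):

```python
def step_heat_1d(temps, alpha, dx, dt):
    """One explicit finite-difference step of T_t = alpha * T_xx with fixed ends.
    Stable only if the Fourier number alpha*dt/dx**2 <= 0.5."""
    fo = alpha * dt / dx**2
    if fo > 0.5:
        raise ValueError("time step violates the explicit stability limit")
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + fo * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

# A hot spot in the middle of a cold rod diffuses outward over a few steps
# (alpha, dx, dt are illustrative values giving a Fourier number of 0.4).
rod = [0.0] * 5 + [100.0] + [0.0] * 5
for _ in range(10):
    rod = step_heat_1d(rod, alpha=1.0e-5, dx=0.01, dt=4.0)
```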
Numerical simulation of fragmentation of hot metal and oxide melts with the computer code IVA3
International Nuclear Information System (INIS)
Mussa, S.; Tromm, W.
1994-01-01
The phenomena of fragmentation of melts caused by water inlet from the bottom are investigated with the computer code IVA3 /11,12,13/. With IVA3, three-component multiphase flows can be numerically simulated. Two geometrical models are used. Both consist of a cylindrical vessel for water lying beneath a cylindrical vessel for melt. The vessels are connected to each other through a hole. Steel and UO2 melts are investigated. The following parameters were varied: the type of melt (steel, UO2), the water supply pressure, and the geometry of the hole in the bottom plate through which the water and melt vessels are connected. As results of the numerical simulations, temperature and pressure curves versus time are plotted. Additionally, the volume flow rates and the volume fractions of the various phases in the vessels, as well as the increase in surface area and enthalpy of the melt during the simulated time, are depicted. With steel melts the rate of fragmentation increases with increasing water pressure and melt temperature, whereby stable channels with a very low flow resistance for steam are formed in the melt layer. With UO2 the formation of channels is also observed; however, these channels are not stable, so that they eventually break apart and lead to fragmentation of the UO2 melt into drops. The fragmentation of the steel melt in the water vessel is less than that of UO2. No essential solidification of the melt is observed within the duration of the simulations, although a small drop in the melt temperature is observed. With little or no water pressure, the melt flows from the upper vessel into the water vessel via the connecting hole. The processes take place very slowly and with such low steam production that, despite the occurring pressure peaks, no sign of steam explosions could be observed. (orig./HP)
Khabaza, I M
1960-01-01
Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and the numerical solution of ordinary differential equations. The book comprises eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion of errors in comput
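The book's theme of discretization error can be illustrated with a central finite difference, whose error shrinks roughly as h² until round-off dominates (a sketch, not taken from the book):

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Approximate d/dx sin(x) = cos(x) at x = 1.0 with two step sizes.
# Halving-by-ten the step should cut the error by roughly a factor of 100.
err_coarse = abs(central_diff(math.sin, 1.0, 1e-2) - math.cos(1.0))
err_fine = abs(central_diff(math.sin, 1.0, 1e-3) - math.cos(1.0))
```

Pushing h much smaller eventually makes the error grow again as floating-point cancellation takes over, which is exactly the kind of pitfall the book warns about.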
A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls
Directory of Open Access Journals (Sweden)
Arun Arjunan
2015-08-01
Architects, designers, and engineers are making great efforts to design acoustically efficient metal-framed walls that minimize acoustic bridging. Efficient simulation models that predict acoustic insulation in compliance with ISO 10140 are therefore needed at the design stage. To achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned by a metal-framed wall, is to be simulated at one-third-octave bands. This produces a large simulation model consisting of several million nodes and elements, so efficient meshing procedures are necessary to obtain better solution times and to use computational resources effectively. Such models should also demonstrate effective Fluid-Structure Interaction (FSI) along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames to the overall acoustic performance of metal-framed walls are also presented. It is considered that the presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.
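The frequency-dependent meshing idea can be sketched as choosing, per one-third-octave band, the largest element that still resolves the acoustic wavelength; the elements-per-wavelength count below is an assumed illustrative value, not the paper's:

```python
def max_element_size(frequency_hz, speed_of_sound=343.0, elements_per_wavelength=6):
    """Largest admissible element edge length for a given band centre frequency,
    using a rule-of-thumb number of elements per acoustic wavelength in air."""
    wavelength = speed_of_sound / frequency_hz
    return wavelength / elements_per_wavelength

# Element size must shrink as the analysis moves up the one-third-octave bands,
# which is why a single fixed mesh either over-resolves low bands or fails high ones.
sizes = {f: max_element_size(f) for f in (125.0, 500.0, 2000.0)}
```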
Numerical computations of interior transmission eigenvalues for scattering objects with cavities
International Nuclear Information System (INIS)
Peters, Stefan; Kleefeld, Andreas
2016-01-01
In this article we extend the inside-outside duality for acoustic transmission eigenvalue problems by allowing scattering objects that may contain cavities. In this context we provide the functional analytical framework necessary to transfer the techniques that have been used in Kirsch and Lechleiter (2013 Inverse Problems, 29 104011) to derive the inside-outside duality. Additionally, extensive numerical results are presented to show that we are able to successfully detect interior transmission eigenvalues with the inside-outside duality approach for a variety of obstacles with and without cavities in three dimensions. In this context, we also discuss the advantages and disadvantages of the inside-outside duality approach from a numerical point of view. Furthermore we derive the integral equations necessary to extend the algorithm in Kleefeld (2013 Inverse Problems, 29 104012) to compute highly accurate interior transmission eigenvalues for scattering objects with cavities, which we will then use as reference values to examine the accuracy of the inside-outside duality algorithm. (paper)
Cienfuegos, R.; Duarte, L.; Hernandez, E.
2008-12-01
Characteristic frequencies of gravity waves generated by wind and propagating towards the coast usually lie between 0.05 Hz and 1 Hz. Nevertheless, lower-frequency waves, in the range of 0.001 Hz to 0.05 Hz, have been observed in the nearshore zone. These long waves, termed infragravity waves, are generated by complex nonlinear mechanisms affecting the propagation of irregular waves up to the coast. The groupiness of an incident random wave field may be responsible for producing a slow modulation of the mean water surface, thus generating bound long waves travelling at the group speed. Similarly, a quasi-periodic oscillation of the break-point location will be accompanied by a slow modulation of set-up/set-down in the surf zone and the generation and release of long waves. If the primary structure of the carrying incident gravity waves is destroyed (e.g. by breaking), forced long waves can be freely released and even reflected at the coast. Infragravity waves can affect port operation through resonating conditions, or strongly affect sediment transport and beach morphodynamics. In the present study we investigate infragravity wave generation mechanisms from both experiments and numerical computations. Measurements were conducted in the 70-meter-long wave tank located at the Instituto Nacional de Hidraulica (Chile), prepared with a beach of very mild slope (1/80) in order to produce large surf zone extensions. A random JONSWAP-type wave field (h0=0.52 m, fp=0.25 Hz, Hmo=0.17 m) was generated by a piston wave-maker, and measurements of the free surface displacements were performed over its whole length at high spatial resolution (0.2 m to 1 m). Velocity profiles were also measured at four verticals inside the surf zone using an ADV. Correlation maps of wave group envelopes and infragravity waves are computed in order to identify long wave generation and dynamics in the experimental set-up. It appears that both mechanisms (groupiness and break-point oscillation) are
Energy conserving numerical methods for the computation of complex vortical flows
Allaneau, Yves
One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that, as their size becomes smaller and smaller, it would be increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists have taken inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimetic design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study the motion of its wings precisely. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0, when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to an adequate numerical scheme, a high-fidelity solution requires many degrees of freedom to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogorov scale. Capturing these eddies requires a mesh counting on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our
Numerical Aspects of Eigenvalue and Eigenfunction Computations for Chaotic Quantum Systems
Bäcker, A.
Summary: We give an introduction to some of the numerical aspects in quantum chaos. The classical dynamics of two-dimensional area-preserving maps on the torus is illustrated using the standard map and a perturbed cat map. The quantization of area-preserving maps given by their generating function is discussed and for the computation of the eigenvalues a computer program in Python is presented. We illustrate the eigenvalue distribution for two types of perturbed cat maps, one leading to COE and the other to CUE statistics. For the eigenfunctions of quantum maps we study the distribution of the eigenvectors and compare them with the corresponding random matrix distributions. The Husimi representation allows for a direct comparison of the localization of the eigenstates in phase space with the corresponding classical structures. Examples for a perturbed cat map and the standard map with different parameters are shown. Billiard systems and the corresponding quantum billiards are another important class of systems (which are also relevant to applications, for example in mesoscopic physics). We provide a detailed exposition of the boundary integral method, which is one important method to determine the eigenvalues and eigenfunctions of the Helmholtz equation. We discuss several methods to determine the eigenvalues from the Fredholm equation and illustrate them for the stadium billiard. The occurrence of spurious solutions is discussed in detail and illustrated for the circular billiard, the stadium billiard, and the annular sector billiard. We emphasize the role of the normal derivative function to compute the normalization of eigenfunctions, momentum representations or autocorrelation functions in a very efficient and direct way. Some examples for these quantities are given and discussed.
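The classical standard map discussed above is straightforward to iterate; a minimal sketch in the spirit of the Python programs the text mentions (Chirikov kicked-rotor form on the torus, with K the kicking strength; parameter values below are illustrative):

```python
import math

TWO_PI = 2.0 * math.pi

def standard_map(theta, p, kick_strength):
    """One iteration of the Chirikov standard map on the torus [0, 2*pi)^2:
    p' = p + K*sin(theta),  theta' = theta + p'  (both taken mod 2*pi)."""
    p_new = (p + kick_strength * math.sin(theta)) % TWO_PI
    theta_new = (theta + p_new) % TWO_PI
    return theta_new, p_new

def orbit(theta0, p0, kick_strength, n_steps):
    """Generate an orbit of the map; orbits look increasingly chaotic as K grows."""
    points = [(theta0, p0)]
    for _ in range(n_steps):
        points.append(standard_map(*points[-1], kick_strength))
    return points

pts = orbit(0.5, 0.2, kick_strength=1.5, n_steps=200)
```

Plotting many such orbits for different initial conditions reproduces the familiar mixed phase-space pictures used to compare classical structures with the Husimi representations of eigenstates.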
Direct numerical simulation of reactor two-phase flows enabled by high-performance computing
Energy Technology Data Exchange (ETDEWEB)
Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.
2018-04-01
Nuclear reactor two-phase flows remain a great engineering challenge: the high-resolution two-phase flow databases that can inform practical model development are still sparse, owing to extreme reactor operating conditions and measurement difficulties. With the rapid growth of computing power, direct numerical simulation (DNS) is enjoying renewed interest for investigating the related flow problems. Combining DNS with an interface tracking method provides a unique opportunity to study two-phase flows from first-principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this potential. This paper reviews recent research progress in two-phase flow DNS related to reactor applications. Progress in large-scale bubbly flow DNS has focused not only on the sheer size of the simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, and advanced post-processing toolkits for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Spectral analysis of DNS databases in different geometries has also been performed, to investigate the modulation of the energy spectrum slope by bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within pressurized water reactor (PWR) core geometries. Such simulations can be carried out only on world-leading HPC platforms, and they are enabling more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.
Pomahac, Bohdan; Aflaki, Pejman; Nelson, Charles; Balas, Benjamin
2010-05-01
Partial facial allotransplantation is an emerging option for reconstruction of central facial defects, providing both function and aesthetic appearance. Ethical debate stems partly from uncertainty surrounding identity aspects of the procedure: there is no objective evidence regarding the effect of donors' transplanted facial structures on the appearance change of recipients and its influence on facial recognition of donors and recipients. Full-face frontal-view color photographs of 100 volunteers were taken at a distance of 150 cm with a digital camera (Nikon/DX80), in front of a blue background and with a neutral facial expression. Using image-editing software (Adobe Photoshop CS3), central facial transplantation was performed between participants. Twenty observers performed a familiar facial recognition task to identify 40 post-transplant composite faces, presented individually on a screen at a viewing distance of 60 cm with an exposure time of 5 s. Each composite face comprised a face familiar to the observers and an unfamiliar one. Trials were run with and without external facial features (head contour, hair and ears). Two variables were defined: 'Appearance Transfer' refers to the transfer of the donor's appearance to the recipient; 'Appearance Persistence' deals with the extent of the recipient's appearance change post-transplantation. A t-test was run to determine whether the rates of Appearance Transfer differed from Appearance Persistence. The average Appearance Transfer rate (2.6%) was significantly lower than the Appearance Persistence rate (66%) (P<0.001), indicating that transfer of the donor's appearance to the recipient is negligible, whereas recipients will be identified the majority of the time. External facial features were important in facial recognition of recipients, evidenced by a significant rise in Appearance Persistence from 19% in the absence of external features to 66% when those features were present (P<0.01). This study may be helpful in the
Use of a Green Familiar Faces Paradigm Improves P300-Speller Brain-Computer Interface Performance.
Li, Qi; Liu, Shuai; Li, Jian; Bai, Ou
2015-01-01
A recent study showed improved performance of the P300-speller when the flashing row or column was overlaid with translucent pictures of familiar faces (FF spelling paradigm). However, the performance of the P300-speller is not yet satisfactory due to its low classification accuracy and information transfer rate. To investigate whether P300-speller performance is further improved when the chromatic property and the FF spelling paradigm are combined. We proposed a new spelling paradigm in which the flashing row or column is overlaid with translucent green pictures of familiar faces (GFF spelling paradigm). We analyzed the ERP waveforms elicited by the FF and proposed GFF spelling paradigms and compared P300-speller performance between the two paradigms. Significant differences in the amplitudes of four ERP components (N170, VPP, P300, and P600f) were observed between both spelling paradigms. Compared to the FF spelling paradigm, the GFF spelling paradigm elicited ERP waveforms of higher amplitudes and resulted in improved P300-speller performance. Combining the chromatic property (green color) and the FF spelling paradigm led to better classification accuracy and an increased information transfer rate. These findings demonstrate a promising new approach for improving the performance of the P300-speller.
[Facing the challenges of ubiquitous computing in the health care sector].
Georgieff, Peter; Friedewald, Michael
2010-01-01
The steady progress of microelectronics, communications and information technology will enable the realisation of the vision of "ubiquitous computing", in which the Internet extends into the real world, embracing everyday objects. The necessary technical basis is already in place. Due to their diminishing size, constantly falling price and declining energy consumption, processors, communications modules and sensors are increasingly being integrated into everyday objects today. This development is opening up huge opportunities for both the economy and individuals. In the present paper we discuss possible applications, but also technical, social and economic barriers to a widespread use of ubiquitous computing in the health care sector.
Tri-P-LETS: Changing the Face of High School Computer Science
Sherrell, Linda; Malasri, Kriangsiri; Mills, David; Thomas, Allen; Greer, James
2012-01-01
From 2004-2007, the University of Memphis carried out the NSF-funded Tri-P-LETS (Three P Learning Environment for Teachers and Students) project to improve local high-school computer science curricula. The project reached a total of 58 classrooms in eleven high schools emphasizing problem solving skills, programming concepts as opposed to syntax,…
Adebajo, Sylvia; Obianwu, Otibho; Eluwa, George; Vu, Lung; Oginni, Ayo; Tun, Waimar; Sheehy, Meredith; Ahonsi, Babatunde; Bashorun, Adebobola; Idogho, Omokhudu; Karlyn, Andrew
2014-01-01
INTRODUCTION: Face-to-face (FTF) interviews are the most frequently used means of obtaining information on sexual and drug injecting behaviours from men who have sex with men (MSM) and men who inject drugs (MWID). However, accurate information on these behaviours may be difficult to elicit because of sociocultural hostility towards these populations and the criminalization associated with these behaviours. Audio computer assisted self-interview (ACASI) is an interviewing technique that may mi...
Directory of Open Access Journals (Sweden)
Isna Nur Hikmah
2016-12-01
Full Text Available The research's purpose was to analyze the effect of pictorial numeric card media on the improvement of summation computation ability for a grade IV student with intellectual disability in SDLB. The data collected were analyzed with an experimental technique and a single-subject research A-B design. The results showed that the overlap percentage between conditions was 0%. Thus, it could be concluded that the pictorial numeric card media had an effect on the summation computation ability of the student with intellectual disability.
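The 0% figure above is an overlap statistic between the baseline and intervention phases. A minimal sketch of how such a percentage-of-overlapping-data measure is computed in a single-subject A-B design follows; the data values are hypothetical, not from the study:

```python
# Percentage of overlapping data between the baseline (A) and
# intervention (B) phases of a single-subject A-B design: the share of
# intervention points falling inside the baseline range.
# Data values below are hypothetical, for illustration only.
def percent_overlap(baseline, intervention):
    lo, hi = min(baseline), max(baseline)
    overlapping = sum(lo <= x <= hi for x in intervention)
    return 100.0 * overlapping / len(intervention)

baseline = [40, 45, 42, 44]          # scores before the intervention
intervention = [60, 65, 70, 72, 75]  # scores during the intervention
assert percent_overlap(baseline, intervention) == 0.0  # no overlap at all
```

A 0% overlap, as reported in the abstract, is conventionally read as evidence of an intervention effect.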
Energy Technology Data Exchange (ETDEWEB)
Kako, T.; Watanabe, T. [eds.
2000-06-01
These are the proceedings of 'Study on Numerical Methods Related to Plasma Confinement', held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria, together with their stability properties, are presented. There are also various lectures on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. Separate abstracts were presented for 13 of the papers in this report. The remaining 6 were considered outside the subject scope of INIS. (J.P.N.)
Directory of Open Access Journals (Sweden)
Hyung-Jun Kim
2018-01-01
Full Text Available Extreme rainfall causes surface runoff to flow towards lowlands and subterranean facilities, such as subway stations and buildings with underground spaces in densely packed urban areas. These facilities and areas are therefore vulnerable to catastrophic submergence. However, flood modeling of underground space has not yet been adequately studied because there are difficulties in reproducing the associated multiple horizontal layers connected with staircases or elevators. This study proposes a convenient approach to simulate underground inundation when two layers are connected. The main facet of this approach is to compute the flow flux passing through staircases in an upper layer and to transfer the equivalent quantity to a lower layer. This is defined as the ‘adaptive transfer method’. This method overcomes the limitations of 2D modeling by introducing layers connecting concepts to prevent large variations in mesh sizes caused by complicated underlying obstacles or local details. Consequently, this study aims to contribute to the numerical analysis of flow in inundated underground spaces with multiple floors.
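The flux through a staircase opening is commonly modeled with a weir-type formula. A minimal sketch of the transfer step described above, under that assumption, follows; the weir coefficient and geometry are illustrative, not taken from the paper:

```python
# One step of the layer-transfer idea: compute the flux through a
# staircase opening in the upper layer with a broad-crested weir
# formula and move the equivalent volume to the lower layer.
# The coefficient and geometry are hypothetical illustration values.
def staircase_flux(depth_above_crest, width, c_weir=1.7):
    if depth_above_crest <= 0.0:
        return 0.0
    return c_weir * width * depth_above_crest ** 1.5  # m^3/s

def transfer(upper_volume, lower_volume, depth, width, dt):
    q = staircase_flux(depth, width)
    dv = min(q * dt, upper_volume)   # cannot transfer more than is present
    return upper_volume - dv, lower_volume + dv

up, low = transfer(100.0, 0.0, depth=0.2, width=1.2, dt=10.0)
assert abs((up + low) - 100.0) < 1e-12   # volume is conserved
assert low > 0.0                         # water reached the lower layer
```

Capping the transferred volume at the available upper-layer volume keeps the scheme mass-conservative even for large time steps.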
Directory of Open Access Journals (Sweden)
Liang Lv
2016-01-01
Full Text Available Computed tomography of chemiluminescence (CTC) is a promising technique for combustion diagnostics, providing instantaneous 3D information on flame structures, especially in harsh circumstances. This work focuses on assessing the feasibility of CTC and on investigating the structures of hydrogen-air premixed laminar flames using CTC. A numerical phantom study was performed to assess the accuracy of the reconstruction algorithm. A well-designed burner was used to generate stable hydrogen-air premixed laminar flames. The OH⁎ chemiluminescence intensity field reconstructed from 37 views using CTC was compared to the OH⁎ chemiluminescence distributions recorded directly by a single ICCD camera from the side view. The flame structures at different flow velocities and equivalence ratios were analyzed using the reconstructions. The results show that the CTC technique can effectively capture the real distributions of the flame chemiluminescence. The height of the flame increases with increasing flow velocity, whereas it decreases with increasing equivalence ratio (for ratios no larger than 1). Increasing flow velocities gradually lift the flame reaction zones, and a critical cone angle of 4.76 degrees is obtained to avoid blow-off. These results lay a foundation for further studies, and the methods can be developed further to reconstruct 3D structures of flames.
Numerical Simulation of Mixing in a Micro-well Scale Bioreactor by Computational Fluid Dynamics
Institute of Scientific and Technical Information of China (English)
(no author listed)
2002-01-01
The introduction of the multi-well plate miniaturisation technology, with its associated automated dispensers, readers and integrated systems, coupled with advances in the life sciences, has a propelling effect on the rate at which new potential drug molecules are discovered. The translation of these discoveries into real outcomes now demands parallel approaches which allow large numbers of process options to be assessed rapidly. The engineering challenges in achieving this provide the motivation for the proposed work. In this work we used computational fluid dynamics (CFD) analysis to study flow conditions in a gas-liquid contactor which has the potential to be used as a fermenter in a multi-well format. The bioreactor had a working volume of 6.5 mL, with the major dimensions equal to those of a single well of a 24-well plate. The 6.5 mL bioreactor was mechanically agitated and aerated by a single sparger placed beneath the bottom impeller. A detailed numerical procedure for solving the governing flow equations is given. The CFD results are combined with population balance equations to establish the size of the bubbles and their distribution in the bioreactor. Power curves with and without aeration are provided based on the simulated results.
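The power curves mentioned at the end of the abstract rest on the standard stirred-tank correlation P = Np ρ N³ D⁵. A minimal sketch with illustrative micro-well-scale values follows; none of the numbers are taken from the paper:

```python
# Ungassed power draw of a stirred bioreactor from the power-number
# correlation P = Np * rho * N^3 * D^5, and the specific power P/V
# commonly used as a scale-up criterion. Impeller diameter, speed and
# power number are illustrative values for a ~6.5 mL vessel.
def power_draw(power_number, rho, rev_per_s, diameter):
    return power_number * rho * rev_per_s**3 * diameter**5  # watts

P = power_draw(power_number=5.0, rho=1000.0, rev_per_s=16.7, diameter=0.01)
volume = 6.5e-6           # m^3 (6.5 mL working volume)
p_per_v = P / volume      # W/m^3
assert P > 0.0 and p_per_v > 0.0
```

Under aeration the power draw falls below this ungassed value; CFD simulations such as those in the paper are one way to quantify that reduction.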
The Design of a Templated C++ Small Vector Class for Numerical Computing
Moran, Patrick J.
2000-01-01
We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C++ operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are not currently supported by all compilers. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.
Computer simulations of low energy displacement cascades in a face centered cubic lattice
International Nuclear Information System (INIS)
Schiffgens, J.O.; Bourquin, R.D.
1976-09-01
Computer simulations of atomic motion in a copper lattice following the production of primary knock-on atoms (PKAs) with energies from 25 to 200 eV are discussed. In this study, a mixed Moliere-Englert pair potential is used to model the copper lattice. The computer code COMENT, which employs the dynamical method, is used to analyze the motion of up to 6000 atoms per time step during cascade evolution. The atoms are specified as initially at rest on the sites of an ideal lattice. A matrix of 12 PKA directions and 6 PKA energies is investigated. Displacement thresholds in the [110] and [100] directions are calculated to be approximately 17 and 20 eV, respectively. A table showing the stability of isolated Frenkel pairs with different vacancy and interstitial orientations and separations is presented. The numbers of Frenkel pairs and atomic replacements are tabulated as a function of PKA direction for each energy. For PKA energies of 25, 50, 75, 100, 150, and 200 eV, the average numbers of Frenkel pairs per PKA are 0.4, 0.6, 1.0, 1.2, 1.4, and 2.2, and the average numbers of replacements per PKA are 2.4, 4.0, 3.3, 4.9, 9.3, and 15.8, respectively.
Numerical computation of inventory policies, based on the EOQ/sigma-x value for order-point systems
DEFF Research Database (Denmark)
Alstrøm, Poul
2001-01-01
This paper examines the numerical computation of two control parameters, order size and order point in the well-known inventory control model, an (s,Q)system with a beta safety strategy. The aim of the paper is to show that the EOQ/sigma-x value is both sufficient for controlling the system and e...
Numerical computation of inventory policies, based on the EOQ/sigma-x value for order-point systems
DEFF Research Database (Denmark)
Alstrøm, Poul
2000-01-01
This paper examines the numerical computation of two control parameters, order size and order point in the well-known inventory control model, an (s,Q)system with a beta safety strategy. The aim of the paper is to show that the EOQ/sigma-x value is both sufficient for controlling the system and e...
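Both entries above concern the same (s,Q) inventory model. As context, the two control parameters can be sketched as follows; the safety factor used here is a generic service-level constant, not the paper's beta strategy, and all numbers are illustrative:

```python
import math

# Classic (s, Q) inventory quantities: order size Q from the EOQ
# formula, and order point s = expected lead-time demand plus safety
# stock proportional to the demand deviation sigma_x. The safety
# factor is a generic service-level constant, not the beta strategy
# studied in the paper; all numbers are illustrative.
def eoq(demand_rate, order_cost, holding_cost):
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def order_point(demand_rate, lead_time, sigma_x, safety_factor):
    return demand_rate * lead_time + safety_factor * sigma_x

Q = eoq(demand_rate=1000.0, order_cost=50.0, holding_cost=2.0)
s = order_point(demand_rate=1000.0, lead_time=0.1, sigma_x=30.0,
                safety_factor=1.64)
assert Q > 0.0 and s > 0.0
```

The paper's point, that the ratio EOQ/sigma_x suffices to control the system, concerns how Q and s interact through these two formulas.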
Combining Distance and Face-To Teaching and Learning in Spatial Computations
Gulland, E.-K.; Schut, A. G. T.; Veenendaal, B.
2011-09-01
Retention and passing rates, as well as student engagement, in computer programming and problem solving units are a major concern in tertiary spatial science courses. A number of initiatives were implemented to improve this. A pilot study reviews the changes made to the teaching and learning environment, including the addition of new resources and modifications to assessments, and investigates their effectiveness. In particular, the study focuses on the differences between students studying in traditional on-campus mode and in distance e-learning mode. Student results and retention rates from 2009-2011, data from in-lecture clicker response units, and two anonymous surveys collected in 2011 were analysed. Early results indicate that grades improved for engaged students, but pass rates and grades of the struggling cohort of students did not improve significantly.
Directory of Open Access Journals (Sweden)
M Pomarède
2016-09-01
Full Text Available Numerical simulation of Vortex-Induced-Vibrations (VIV of a rigid circular elastically-mounted cylinder submitted to a fluid cross-flow has been extensively studied over the past decades, both experimentally and numerically, because of its theoretical and practical interest for understanding Flow-Induced-Vibrations (FIV problems. In this context, the present article aims to expose a numerical study based on fully-coupled fluid-solid computations compared to previously published work [34], [36]. The computational procedure relies on a partitioned method ensuring the coupling between fluid and structure solvers. The fluid solver involves a moving mesh formulation for simulation of the fluid structure interface motion. Energy exchanges between fluid and solid models are ensured through convenient numerical schemes. The present study is devoted to a low Reynolds number configuration. Cylinder motion magnitude, hydrodynamic forces, oscillation frequency and fluid vortex shedding modes are investigated and the “lock-in” phenomenon is reproduced numerically. These numerical results are proposed for code validation purposes before investigating larger industrial applications such as configurations involving tube arrays under cross-flows [4].
Blazevski, Daniel; Franklin, Jennifer
2012-12-01
Scattering theory is a convenient way to describe systems that are subject to time-dependent perturbations which are localized in time. Using scattering theory, one can compute time-dependent invariant objects for the perturbed system knowing the invariant objects of the unperturbed system. In this paper, we use scattering theory to give numerical computations of invariant manifolds appearing in laser-driven reactions. In this setting, invariant manifolds separate regions of phase space that lead to different outcomes of the reaction and can be used to compute reaction rates.
Numerical spin tracking in a synchrotron computer code Spink: Examples (RHIC)
International Nuclear Information System (INIS)
Luccio, A.
1995-01-01
In the course of acceleration of polarized protons in a synchrotron, many depolarizing resonances are encountered. They fall into two categories: intrinsic resonances, which depend on the lattice structure of the ring and arise from the coupling of betatron oscillations with horizontal magnetic fields, and imperfection resonances, caused by orbit distortions due to field errors. In general, the spectrum of resonances vs spin tune Gγ (where G = 1.7928 is the proton gyromagnetic anomaly and γ the proton relativistic energy ratio) for a given lattice tune ν, or vs ν for a given Gγ, contains a multitude of lines with various amplitudes or resonance strengths. The depolarization due to the resonance lines can be studied by numerically tracking protons with spin in a model accelerator. Tracking allows one to check the strength of resonances, to study the effects of devices like Siberian Snakes, to find safe lattice tune regions in which to operate, and finally to study in detail the operation of special devices such as spin flippers. A few computer codes exist that calculate resonance strengths E_k and perform tracking, for proton and electron machines. Most relevant to our work for the AGS and RHIC machines are the programs Depol and Snake. Depol calculates the E_k's by Fourier analysis. The input to Depol is the output of a machine model code, such as Synch or Mad, containing all details of the lattice. Snake does the tracking, starting from a synthetic machine that contains a certain number of periods, FODO cells, Siberian snakes, etc. We believed the complexities of machines like the AGS or RHIC could not be adequately represented by Snake, so we decided to write a new code, Spink, that combines some of the features of Depol and Snake, i.e., Spink reads a Mad output like Depol and tracks as Snake does. The structure of the code and examples for RHIC are described in the following.
A Face Inversion Effect without a Face
Brandman, Talia; Yovel, Galit
2012-01-01
Numerous studies have attributed the face inversion effect (FIE) to configural processing of internal facial features in upright but not inverted faces. Recent findings suggest that face mechanisms can be activated by faceless stimuli presented in the context of a body. Here we asked whether faceless stimuli with or without body context may induce…
Radiation doses in pediatric computed tomography procedures: challenges facing new technologies
International Nuclear Information System (INIS)
Cotelo, E.; Padilla, M.; Dibarboure, L.
2008-01-01
Despite the fact that in recent years an increasing number of radiologists and radiological technologists have been applying radiation dose optimization techniques in paediatric Computed Tomography (CT) examinations, dual- and multi-slice CT (MSCT) scanners present a new challenge in Radiation Protection (RP). While on one hand these scanners are provided with Automatic Exposure Control (AEC) devices, dose reduction modes and dose estimation software, on the other hand Quality Control (QC) tests, CT Kerma Index (C) measurements and patient dose estimation present specific difficulties and require changes or adaptations of traditional QC protocols. This implies a major challenge in most developing countries, where Quality Assurance Programmes (QAP) have not yet been implemented and there is a shortage of medical physicists. This paper analyses clinical and technical protocols as well as patient doses in 204 CT body procedures performed on 154 children. The investigation was carried out in a paediatric reference hospital in Uruguay, where an average of 450 paediatric CT examinations per month are performed on a single dual-slice CT scanner. In addition, the C_VOL reported on the scanner display was recorded so that it could be compared with the same dosimetric quantity derived from technical parameters and from C values published in tables. Results showed that not all radiologists applied the same protocol in similar clinical situations, delivering unnecessary patient dose with no significant difference in image quality. Moreover, it was found that dose reduction modes represent a drawback for estimating patient dose when the mA changes according to tissue attenuation, in most cases on each rotation. The study concluded on the importance of a QAP, which must include education in RP for radiologists and technologists, as well as on the need for medical physicists to perform QC tests and patient dose estimations and measurements. (author)
Guise, Jennifer; Widdicombe, Sue; McKinlay, Andy
2007-01-01
ME (Myalgic Encephalomyelitis) or CFS (chronic fatigue syndrome) is a debilitating illness for which no cause or medical tests have been identified. Debates over its nature have generated interest from qualitative researchers. However, participants are difficult to recruit because of the nature of their condition. Therefore, this study explores the utility of the internet as a means of eliciting accounts. We analyse data from focus groups and the internet in order to ascertain the extent to which previous research findings apply to the internet domain. Interviews were conducted among 49 members of internet groups (38 chatline, 11 personal) and 7 members of two face-to-face support groups. Discourse analysis of descriptions and accounts of ME or CFS revealed similar devices and interactional concerns in both internet and face-to-face communication. Participants constructed their condition as serious, enigmatic and not psychological. These functioned to deflect problematic assumptions about ME or CFS and to manage their accountability for the illness and its effects.
Computer numerically controlled (CNC) aspheric shaping with toroidal Wheels (Abstract Only)
Ketelsen, D.; Kittrell, W. C.; Kuhn, W. M.; Parks, R. E.; Lamb, George L.; Baker, Lynn
1987-01-01
Contouring with computer numerically controlled (CNC) machines can be accomplished with several different tool geometries and coordinated machine axes. To minimize the number of coordinated axes for nonsymmetric work to three, it is common practice to use a spherically shaped tool such as a ball-end mill. However, to minimize grooving due to the feed and ball radius, it is desirable to use a long ball radius, but there is clearly a practical limit to ball diameter with the spherical tool. We have found that the use of commercially available toroidal wheels permits long effective cutting radii, which in turn improve finish and minimize grooving for a set feed. In addition, toroidal wheels are easier than spherical wheels to center accurately. Cutting parameters are also easier to control because the feed rate past the tool does not change as the slope of the work changes. The drawback to the toroidal wheel is the more complex calculation of the tool path. Of course, once the algorithm is worked out, the tool path is as easily calculated as for a spherical tool. We have performed two experiments with the Large Optical Generator (LOG) that were ideally suited to three-axis contouring--surfaces that have no axis of rotational symmetry. By oscillating the cutting head horizontally or vertically (in addition to the motions required to generate the power of the surface), and carefully coordinating those motions with table rotation, the mostly astigmatic departure for these surfaces is produced. The first experiment was a pair of reflector molds that together correct the spherical aberration of the Arecibo radio telescope. The larger of these was 5 m in diameter and had a 12 cm departure from the best-fit sphere. The second experiment was the generation of a purely astigmatic surface to demonstrate the feasibility of producing axially symmetric aspherics while mounted and rotated about any off-axis point. Measurements of the latter (the first experiment had relatively
Numerical analysis of choked converging nozzle flows with surface ...
Indian Academy of Sciences (India)
numerically investigated by means of a recent computational model that … dependent nonlinear formulations, where the solution scheme is most likely to face … boundary and geometric conditions, to (15–16), also proves the validity.
Dupraz, Maxime; Beutier, Guillaume; Rodney, David; Mordehai, Dan; Verdier, Marc
2015-06-01
Crystal defects induce strong distortions in diffraction patterns. A single defect alone can yield strong and fine features that are observed in high-resolution diffraction experiments such as coherent X-ray diffraction. The case of face-centred cubic nanocrystals is studied numerically and the signatures of typical defects close to Bragg positions are identified. Crystals of a few tens of nanometres are modelled with realistic atomic potentials and 'relaxed' after introduction of well defined defects such as pure screw or edge dislocations, or Frank or prismatic loops. Diffraction patterns calculated in the kinematic approximation reveal various signatures of the defects depending on the Miller indices. They are strongly modified by the dissociation of the dislocations. Selection rules on the Miller indices are provided, to observe the maximum effect of given crystal defects in the initial and relaxed configurations. The effect of several physical and geometrical parameters such as stacking fault energy, crystal shape and defect position are discussed. The method is illustrated on a complex structure resulting from the simulated nanoindentation of a gold nanocrystal.
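The kinematic approximation used in the study amounts to a coherent sum of scattering amplitudes over atomic positions. A minimal sketch for a small defect-free face-centred cubic crystal follows; scattering factors are set to one, there is no relaxation, and the lattice parameter is taken as 1 (none of this reproduces the paper's atomistic modelling):

```python
import numpy as np

# Kinematic diffraction intensity I(q) = |sum_j exp(i q.r_j)|^2 for a
# small face-centred cubic crystal with lattice parameter a = 1.
# Unit scattering factors; defects and relaxation are omitted.
def fcc_positions(n_cells):
    basis = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
    cells = np.array([[i, j, k] for i in range(n_cells)
                                for j in range(n_cells)
                                for k in range(n_cells)])
    return (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3)

def intensity(q, positions):
    return np.abs(np.exp(1j * positions @ q).sum()) ** 2

pos = fcc_positions(4)
n_atoms = len(pos)                                 # 4^3 cells * 4 atoms = 256
bragg = 2 * np.pi * np.array([2.0, 0.0, 0.0])      # (200): allowed fcc reflection
assert np.isclose(intensity(bragg, pos), n_atoms**2)  # fully constructive
forbidden = 2 * np.pi * np.array([1.0, 0.0, 0.0])  # (100): mixed indices, forbidden
assert intensity(forbidden, pos) < 1e-6
```

The fcc selection rule (all Miller indices even or all odd) emerges from the four-atom basis; the defect signatures discussed in the abstract appear as fine structure around such allowed reflections.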
3-D Numerical Realization of Contituent-Level FRP Composites Using X-Ray Computer Tomography
National Aeronautics and Space Administration — Develop methods coupling state-of-the-art, nondestructive characterization techniques with three-dimensional, numerical modeling to study the constituent-level...
On a New Method for Computing the Numerical Solution of Systems of Nonlinear Equations
Directory of Open Access Journals (Sweden)
H. Montazeri
2012-01-01
Full Text Available We consider a system of nonlinear equations F(x) = 0. A new iterative method for solving this problem numerically is suggested. An analytical discussion of the method is provided to reveal its sixth order of convergence. A discussion of the efficiency index of the contribution, in comparison with other iterative methods, is also given. Finally, numerical tests illustrate the theoretical aspects using the programming package Mathematica.
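The sixth-order scheme itself is not reproduced in this abstract. As a baseline for what such methods extend, the classical Newton iteration for a system F(x) = 0 can be sketched as follows; the example system is chosen for illustration only:

```python
import numpy as np

# Baseline Newton iteration for a system F(x) = 0, the second-order
# method that higher-order schemes such as the one above build upon.
def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J(x), fx)   # solve J(x) dx = F(x), step
    return x

# Illustrative system: x^2 + y^2 = 4, x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.0])
assert np.allclose(F(root), 0.0, atol=1e-10)
```

Higher-order methods reduce the iteration count at the cost of extra function or Jacobian evaluations per step, which is what the efficiency index in the abstract quantifies.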
Numerical Model of Air Valve For Computation of One-dimensional Flow
Directory of Open Access Journals (Sweden)
Daniel HIMR
2014-06-01
Full Text Available The paper is focused on a numerical simulation of unsteady flow in a pipeline. Special attention is paid to a numerical model of an air valve, which has to include all possible regimes: critical/subcritical inflow and critical/subcritical outflow of air. The thermodynamic equation of subcritical mass flow was simplified to obtain a friendlier form of the relevant equations, which enables an easier solution of the problem.
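The critical/subcritical switching described above follows from isentropic orifice flow of an ideal gas. A hedged sketch follows; the valve area and discharge coefficient are illustrative, and the paper's simplified equations are not reproduced:

```python
import math

# Isentropic mass flow of air through a valve opening, switching
# between choked (critical) and subcritical regimes at the critical
# pressure ratio (2/(gamma+1))^(gamma/(gamma-1)), about 0.528 for air.
# Area and discharge coefficient are illustrative values.
GAMMA, R = 1.4, 287.0   # air: heat-capacity ratio, gas constant [J/(kg K)]

def air_mass_flow(p0, T0, p_down, area, cd=0.65):
    r_crit = (2.0 / (GAMMA + 1.0)) ** (GAMMA / (GAMMA - 1.0))
    r = max(p_down / p0, r_crit)       # choked flow caps the ratio
    flux = math.sqrt(2.0 * GAMMA / ((GAMMA - 1.0) * R * T0)
                     * (r ** (2.0 / GAMMA) - r ** ((GAMMA + 1.0) / GAMMA)))
    return cd * area * p0 * flux       # kg/s

m_choked = air_mass_flow(2.0e5, 293.0, 0.5e5, area=1e-4)
m_sub = air_mass_flow(2.0e5, 293.0, 1.9e5, area=1e-4)
assert m_choked > m_sub > 0.0
```

Below the critical ratio the mass flow no longer depends on the downstream pressure, which is exactly the regime distinction an air valve model must capture.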
International Nuclear Information System (INIS)
Trent, D.S.; Eyler, L.L.; Budden, M.J.
1983-09-01
This document describes the numerical methods, current capabilities, and the use of the TEMPEST (Version L, MOD 2) computer program. TEMPEST is a transient, three-dimensional, hydrothermal computer program that is designed to analyze a broad range of coupled fluid dynamic and heat transfer systems of particular interest to the Fast Breeder Reactor thermal-hydraulic design community. The full three-dimensional, time-dependent equations of motion, continuity, and heat transport are solved for either laminar or turbulent fluid flow, including heat diffusion and generation in both solid and liquid materials. 10 refs., 22 figs., 2 tabs
Directory of Open Access Journals (Sweden)
Suheel Abdullah Malik
2014-01-01
Full Text Available We present a hybrid heuristic computing method for the numerical solution of nonlinear singular boundary value problems arising in physiology. The approximate solution is expressed as a linear combination of log-sigmoid basis functions. A fitness function representing the sum of the mean square error of the given nonlinear ordinary differential equation (ODE) and of its boundary conditions is formulated. The optimization of the unknown adjustable parameters contained in the fitness function is performed by a hybrid heuristic computation algorithm based on a genetic algorithm (GA), an interior point algorithm (IPA), and an active set algorithm (ASA). The efficiency and viability of the proposed method are confirmed by solving three examples from physiology. The approximate solutions obtained are found to be in excellent agreement with the exact solutions as well as with some conventional numerical solutions.
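A minimal sketch of such a fitness function follows, with a log-sigmoid trial solution and a finite-difference ODE residual; the ODE, basis size and parameters are illustrative, and the GA/IPA/ASA optimization of the paper is not reproduced:

```python
import numpy as np

# Fitness of a trial solution y(x) = sum_i a_i * logsig(b_i*x + c_i)
# for a boundary value problem y'' = f(x, y), y(0)=y0, y(1)=y1:
# mean squared ODE residual plus squared boundary-condition errors.
# The ODE and all parameters are illustrative.
def logsig(z):
    return 1.0 / (1.0 + np.exp(-z))

def trial(params, x):
    a, b, c = params.reshape(3, -1)
    return (a[:, None] * logsig(b[:, None] * x[None, :] + c[:, None])).sum(0)

def fitness(params, f, y0, y1, n=50, h=1e-4):
    x = np.linspace(0.0, 1.0, n)
    y = trial(params, x)
    ypp = (trial(params, x + h) - 2 * y + trial(params, x - h)) / h**2
    residual = np.mean((ypp - f(x, y)) ** 2)
    bc = (trial(params, np.array([0.0]))[0] - y0) ** 2 \
       + (trial(params, np.array([1.0]))[0] - y1) ** 2
    return residual + bc

p = np.random.default_rng(0).normal(size=9)   # 3 basis functions
val = fitness(p, lambda x, y: -y, 0.0, 1.0)
assert np.isfinite(val) and val >= 0.0
```

An optimizer (the paper uses GA followed by IPA and ASA) then drives this scalar fitness toward zero over the 3n adjustable parameters.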
Romani, Xiana; Nirschl, Hermann
2010-01-01
Centrifugal separation equipment, such as solid bowl centrifuges, is used to carry out an effective separation of fine particles from industrial fluids. Knowledge of the streams and sedimentation behavior inside solid bowl centrifuges is necessary to determine the geometry and the process parameters that lead to an optimal performance. Regarding a given industrial centrifuge geometry, a grid was built to calculate numerically the multiphase flow of water, air, and particles with a computation...
International Nuclear Information System (INIS)
Soubbaramayer
1977-01-01
A numerical code (CENTAURE) built up with 36000 cards and 343 subroutines to investigate the fully interconnected field of velocity, temperature, pressure and isotopic concentration in a gas centrifuge is presented. The complete set of Navier-Stokes equations, continuity equation, energy balance, isotopic diffusion equation and gas state law forms the basis of the model, with proper boundary conditions depending essentially upon the nature of the countercurrent and the thermal condition of the walls. Sources and sinks are located either inside the centrifuge or on the boundaries. The model includes not only the usual terms of Coriolis force, compressibility, viscosity and thermal diffusivity but also the nonlinear inertia terms in the momentum equations, and thermal convection and viscous dissipation in the energy equation. The computation is based on the finite element method and direct resolution instead of finite differences and an iterative process. The code is quite flexible and well adapted to compute many physical cases in one centrifuge: the computation time per case is then very small (we work with an IBM-360-91). The numerical results are exploited with the help of an IBM 2250 visualisation screen. The capabilities of the code are illustrated with numerical examples. Some results are discussed and compared with linear theories
International Nuclear Information System (INIS)
Zee, S.K.
1987-01-01
A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference method representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR with automated determination of group-dependent relaxation factors and of the iteration numbers required to achieve a specified inner iteration error tolerance is incorporated. The code employs a macroscopic depletion model with trace capability for selected fission product transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the DIFPAR3D code for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies
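The red-black (cyclic) ordering mentioned above is what makes SOR parallelizable: points of one color only neighbor points of the other color, so each half-sweep can update its whole color simultaneously. A minimal point-wise sketch for a model Poisson problem (not the neutron-diffusion system, and without the code's Chebyshev outer acceleration or automated relaxation factors):

```python
import numpy as np

def redblack_sor(b, h, omega=1.8, iters=500):
    """Red-black SOR for -laplace(u) = b on the unit square, u = 0 on the boundary.
    Each half-sweep updates one color of the checkerboard at once."""
    n = b.shape[0]
    u = np.zeros_like(b, dtype=float)
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    interior = np.zeros((n, n), dtype=bool)
    interior[1:-1, 1:-1] = True
    masks = [((ii + jj) % 2 == color) & interior for color in (0, 1)]
    for _ in range(iters):
        for mask in masks:              # red half-sweep, then black half-sweep
            gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                         + np.roll(u, 1, 1) + np.roll(u, -1, 1) + h * h * b)
            u[mask] = (1.0 - omega) * u[mask] + omega * gs[mask]
    return u

u = redblack_sor(np.ones((17, 17)), h=1.0 / 16)
```

Because a red point's Gauss-Seidel update reads only black neighbors (and vice versa), the vectorized masked update reproduces the sequential Gauss-Seidel sweep exactly.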
Sajben, Miklos; Freund, Donald D.
1998-01-01
The ability to predict the dynamics of integrated inlet/compressor systems is an important part of designing high-speed propulsion systems. The boundaries of the performance envelope are often defined by undesirable transient phenomena in the inlet (unstart, buzz, etc.) in response to disturbances originating either in the engine or in the atmosphere. Stability margins used to compensate for the inability to accurately predict such processes lead to weight and performance penalties, which translate into a reduction in vehicle range. The prediction of transients in an inlet/compressor system requires either the coupling of two complex, unsteady codes (one for the inlet and one for the engine) or else a reliable characterization of the inlet/compressor interface, by specifying a boundary condition. In the context of engineering development programs, only the second option is viable economically. Computations of unsteady inlet flows invariably rely on simple compressor-face boundary conditions (CFBCs). Currently, customary conditions include choked flow, constant static pressure, constant axial velocity, constant Mach number or constant mass flow per unit area. These conditions are straightforward extensions of practices that are valid for and work well with steady inlet flows. Unfortunately, it is not at all likely that any flow property would stay constant during a complex system transient. At the start of this effort, no experimental observation existed that could be used to formulate or verify any of the CFBCs. This lack of hard information represented a risk that had been recognized as unacceptably large for a development program. The goal of the present effort was to generate such data. Disturbances reaching the compressor face in flight may have complex spatial structures and temporal histories. Small amplitude disturbances may be decomposed into acoustic, vorticity and entropy contributions that are uncoupled if the undisturbed flow is uniform. This study
International Nuclear Information System (INIS)
Borcherds, P
2003-01-01
The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy the NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.
An analytically based numerical method for computing view factors in real urban environments
Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun
2018-01-01
A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For a realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of the methods provide only the sky-view factor at the ground level of a specific location or assume a simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces is presented for application to real urban morphology, derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against the analytical sky-view factor estimation for ideal street canyon geometries, showing solid accuracy with errors of less than 0.2%. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
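The kind of cross-check the abstract describes (numerical estimate vs. analytical value for an ideal street canyon) can be illustrated with a Monte Carlo sky-view-factor estimator. The sampling scheme below is an assumption for illustration, not the paper's method; for the mid-floor point of an infinitely long canyon of wall height H and width W, the analytical value is cos(arctan(H/(W/2))).

```python
import numpy as np

def svf_canyon_center(H, W, n=200_000, seed=0):
    """Monte Carlo sky-view factor at the floor center of an infinite street canyon.
    Cosine-weighted hemisphere directions; a ray contributes if it clears the
    wall tops at horizontal distance d = W/2 on either side."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n), rng.random(n)
    r = np.sqrt(u1)                      # cosine-weighted hemisphere sampling
    dx = r * np.cos(2.0 * np.pi * u2)    # horizontal component toward the walls
    dz = np.sqrt(1.0 - u1)               # vertical component
    d = W / 2.0
    escaped = dz * d > H * np.abs(dx)    # ray passes above the wall top
    return escaped.mean()

est = svf_canyon_center(H=10.0, W=20.0)
analytic = np.cos(np.arctan(10.0 / 10.0))   # canyon aspect H/(W/2) = 1
```

With 2×10^5 samples the statistical error is about 10^-3, comfortably inside the sub-percent accuracy the paper reports for its deterministic method.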
2007-01-01
Background At postgraduate level evidence based medicine (EBM) is currently taught through tutor based lectures. Computer based sessions fit around doctors' workloads, and standardise the quality of educational provision. There have been no randomized controlled trials comparing computer based sessions with traditional lectures at postgraduate level within medicine. Methods This was a randomised controlled trial involving six postgraduate education centres in the West Midlands, U.K. Fifty five newly qualified foundation year one doctors (U.S. internship equivalent) were randomised to either computer based sessions or an equivalent lecture in EBM and systematic reviews. The change from pre to post-intervention score was measured using a validated questionnaire assessing knowledge (primary outcome) and attitudes (secondary outcome). Results Both groups were similar at baseline. Participants' improvement in knowledge in the computer based group was equivalent to the lecture based group (gain in score: 2.1 [S.D = 2.0] versus 1.9 [S.D = 2.4]; ANCOVA p = 0.078). Attitudinal gains were similar in both groups. Conclusion On the basis of our findings we feel computer based teaching and learning is as effective as typical lecture based teaching sessions for educating postgraduates in EBM and systematic reviews. PMID:17659076
Numerical investigation of the High Temperature Reactor (VHTR) using computational fluid dynamics
International Nuclear Information System (INIS)
Pinto, Joao Pedro C.T.A.; Santos, Andre A. Campagnole dos; Mesquita, Amir Z.
2013-01-01
This work evaluates and continues the study being developed in the Laboratory of Thermo-Hydraulics of the CNEN/CDTN (Centro de Desenvolvimento da Tecnologia Nuclear), aiming to validate the methods and procedures used in the numerical calculations of fluid flow in fuel elements of the core of the VHTR
Directory of Open Access Journals (Sweden)
Ahmed M. Elsayed
2013-01-01
Full Text Available Film cooling is vital to gas turbine blades to protect them from high temperatures and hence high thermal stresses. In the current work, optimization of film cooling parameters on a flat plate is investigated numerically. The effect of film cooling parameters such as inlet velocity direction, lateral and forward diffusion angles, blowing ratio, and streamwise angle on the cooling effectiveness is studied, and optimum cooling parameters are selected. The numerical simulation of the coolant flow through the flat plate hole system is carried out using the "CFDRC package" coupled with the optimization algorithm "simplex" to maximize the overall film cooling effectiveness. An unstructured finite volume technique is used to solve the steady, three-dimensional and compressible Navier-Stokes equations. The results are compared with published numerical and experimental data for a cylindrical round simple hole, and show good agreement. In addition, the results indicate that the average overall film cooling effectiveness is enhanced by decreasing the streamwise angle for high blowing ratio and by increasing the lateral and forward diffusion angles. The optimum geometry of the cooling hole on a flat plate is determined. In addition, numerical simulations of film cooling on an actual turbine blade are performed using the flat plate optimal hole geometry.
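The coupling pattern above (a simplex search driving an expensive effectiveness evaluation) can be sketched with a derivative-free Nelder-Mead optimization. The surrogate effectiveness model below is entirely invented for illustration; in the paper each evaluation is a full CFD solve of the cooled flat plate.

```python
import numpy as np
from scipy.optimize import minimize

def effectiveness(p):
    """Hypothetical smooth surrogate for overall film cooling effectiveness
    as a function of (streamwise angle, lateral diffusion angle) in degrees."""
    streamwise_angle, lateral_diffusion = p
    return np.exp(-((streamwise_angle - 30.0) / 20.0) ** 2
                  - ((lateral_diffusion - 10.0) / 15.0) ** 2)

# Nelder-Mead ("simplex") maximizes effectiveness by minimizing its negative
res = minimize(lambda p: -effectiveness(p), x0=[45.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9})
best_angle, best_diffusion = res.x
```

Simplex search needs no gradients, which is why it pairs naturally with black-box CFD evaluations.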
National Research Council Canada - National Science Library
Harmon, Bruce N; Dobrovitski, Viatcheslav V
2007-01-01
...) have also been developed and applied. Most recently, specific strategies for quantum control have been investigated for realistic systems in order to extend the coherence times for spin-based quantum computing implementations...
SINCRO/CAR: An interactive numerical system for computer-aided control engineering and maintenance
International Nuclear Information System (INIS)
Zwingelstein, G.; Despujols, A.
1986-01-01
This presentation describes a dialogue-oriented software implemented on a portable computer for computer-aided engineering and training in control instrumentation and also for on-line verification of the performances of the analog controllers installed on power plants. The SINCRO/CAR software includes algorithms for controller design, simulation, identification, optimization, frequency response and real time data acquisition. Various results obtained on fossil-fired and nuclear plants are given to illustrate the efficiency of the SINCRO/CAR software
Yang, Tzuhsiung; Berry, John F
2018-06-04
The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and paves the way for future implementations.
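The core numerical idea, central differencing of an analytic gradient, can be sketched in a few lines. Each of the 2n displaced gradient evaluations (6N Cartesian components for N atoms) is independent of the others, which is exactly what makes the procedure embarrassingly parallel. The quartic test energy below is a made-up stand-in for a quantum chemical gradient.

```python
import numpy as np

def numerical_hessian(grad, x, h=1e-4):
    """Central-difference Hessian from an analytic gradient function.
    Requires 2n gradient evaluations; each (i, +/-h) pair is an independent
    job, so the loop body can be farmed out to a grid."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[i] = (grad(x + e) - grad(x - e)) / (2.0 * h)
    return 0.5 * (H + H.T)          # symmetrize to damp finite-difference noise

# toy energy E = sum(x^4) + 0.5 x.A.x, with gradient 4x^3 + A x
A = np.array([[2.0, 0.5], [0.5, 1.0]])
g = lambda x: 4.0 * x**3 + A @ x
H = numerical_hessian(g, np.array([0.3, -0.2]))
```

The exact Hessian here is diag(12 x_i^2) + A, so the finite-difference error is O(h^2).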
Rodriguez-Lorenzo, Andres; Audolfsson, Thorir; Wong, Corrine; Cheng, Angela; Arbique, Gary; Nowinski, Daniel; Rozen, Shai
2015-10-01
The aim of this study was to evaluate the contribution of a single unilateral facial vein in the venous outflow of total-face allograft using three-dimensional computed tomographic imaging techniques to further elucidate the mechanisms of venous complications following total-face transplant. Full-face soft-tissue flaps were harvested from fresh adult human cadavers. A single facial vein was identified and injected distally to the submandibular gland with a radiopaque contrast (barium sulfate/gelatin mixture) in every specimen. Following vascular injections, three-dimensional computed tomographic venographies of the faces were performed. Images were viewed using TeraRecon Software (Teracon, Inc., San Mateo, CA, USA) allowing analysis of the venous anatomy and perfusion in different facial subunits by observing radiopaque filling venous patterns. Three-dimensional computed tomographic venographies demonstrated a venous network with different degrees of perfusion in subunits of the face in relation to the facial vein injection side: 100% of ipsilateral and contralateral forehead units, 100% of ipsilateral and 75% of contralateral periorbital units, 100% of ipsilateral and 25% of contralateral cheek units, 100% of ipsilateral and 75% of contralateral nose units, 100% of ipsilateral and 75% of contralateral upper lip units, 100% of ipsilateral and 25% of contralateral lower lip units, and 50% of ipsilateral and 25% of contralateral chin units. Venographies of the full-face grafts revealed better perfusion in the ipsilateral hemifaces from the facial vein in comparison with the contralateral hemifaces. Reduced perfusion was observed mostly in the contralateral cheek unit and contralateral lower face including the lower lip and chin units.
Broecker, Peter; Trebst, Simon
2016-12-01
In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.
A numerical scheme using multi-shockpeakons to compute solutions of the Degasperis-Procesi equation
Directory of Open Access Journals (Sweden)
Hakon A. Hoel
2007-07-01
Full Text Available We consider a numerical scheme for entropy weak solutions of the DP (Degasperis-Procesi) equation $u_t - u_{xxt} + 4uu_x = 3u_{x}u_{xx} + uu_{xxx}$. Multi-shockpeakons, functions of the form $$u(x,t) = \sum_{i=1}^n \big(m_i(t) - \operatorname{sign}(x-x_i(t))\,s_i(t)\big)e^{-|x-x_i(t)|},$$ are solutions of the DP equation with a special property: their evolution in time is described by a dynamical system of ODEs. This property makes multi-shockpeakons relatively easy to simulate numerically. We prove that if we are given a non-negative initial function $u_0 \in L^1(\mathbb{R}) \cap BV(\mathbb{R})$ such that $u_{0} - u_{0,x}$ is a positive Radon measure, then one can construct a sequence of multi-shockpeakons which converges to the unique entropy weak solution in $\mathbb{R}\times[0,T]$ for any $T>0$. From this convergence result, we construct a multi-shockpeakon-based numerical scheme for solving the DP equation.
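The multi-shockpeakon ansatz quoted above, u(x) = Σ_i (m_i − sign(x−x_i) s_i) e^{−|x−x_i|}, is easy to evaluate directly, and the formula itself implies a jump of −2 s_i across each x_i (the sign factor flips from +s_i to −s_i). The sketch below only evaluates the ansatz and checks that jump; it does not reproduce the ODE system governing (x_i, m_i, s_i).

```python
import numpy as np

def multi_shockpeakon(x, xs, ms, ss):
    """Evaluate u(x) = sum_i (m_i - sign(x - x_i) * s_i) * exp(-|x - x_i|)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    dx = x[:, None] - np.asarray(xs)[None, :]
    terms = (np.asarray(ms) - np.sign(dx) * np.asarray(ss)) * np.exp(-np.abs(dx))
    return terms.sum(axis=1)

# two shockpeakons: positions, peakon strengths, shock strengths
xs, ms, ss = [0.0, 2.0], [1.0, 0.5], [0.3, 0.1]
eps = 1e-9
left = multi_shockpeakon(xs[0] - eps, xs, ms, ss)[0]
right = multi_shockpeakon(xs[0] + eps, xs, ms, ss)[0]
jump = right - left        # should be about -2 * ss[0]
```

Setting all s_i = 0 recovers the smooth multi-peakon profile; nonzero s_i introduce the downward jumps that give shockpeakons their name.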
Canadell, Marta; Haro, Àlex
2017-12-01
We present several algorithms for computing normally hyperbolic invariant tori carrying quasi-periodic motion of a fixed frequency in families of dynamical systems. The algorithms are based on a KAM scheme presented in Canadell and Haro (J Nonlinear Sci, 2016. doi: 10.1007/s00332-017-9389-y), to find the parameterization of the torus with prescribed dynamics by detuning parameters of the model. The algorithms use different hyperbolicity and reducibility properties and, in particular, also compute the invariant bundles and Floquet transformations. We implement these methods in several 2-parameter families of dynamical systems, to compute quasi-periodic arcs, that is, the parameters for which 1D normally hyperbolic invariant tori with a given fixed frequency do exist. The implementation lets us perform the continuations up to the tip of the quasi-periodic arcs, at which the invariant curves break down. Three different mechanisms of breakdown are analyzed, using several observables, leading to several conjectures.
DEFF Research Database (Denmark)
Wang, Weizhi; Wu, Minghao; Palm, Johannes
2018-01-01
The wave loads and the resulting motions of floating wave energy converters are traditionally computed using linear radiation–diffraction methods. Yet for certain cases such as survival conditions, phase control and wave energy converters operating in the resonance region, more complete...... dynamics simulations have largely been overlooked in the wave energy sector. In this article, we apply formal verification and validation techniques to computational fluid dynamics simulations of a passively controlled point absorber. The phase control causes the motion response to be highly nonlinear even...... for almost linear incident waves. First, we show that the computational fluid dynamics simulations have acceptable agreement to experimental data. We then present a verification and validation study focusing on the solution verification covering spatial and temporal discretization, iterative and domain......
Unified algorithm for partial differential equations and examples of numerical computation
International Nuclear Information System (INIS)
Watanabe, Tsuguhiro
1999-01-01
A new unified algorithm is proposed to solve partial differential equations which describe nonlinear boundary value problems, eigenvalue problems and time-developing boundary value problems. The algorithm is composed of an implicit difference scheme and a multiple shooting scheme and is named HIDM (Higher order Implicit Difference Method). A new prototype computer program for 2-dimensional partial differential equations is constructed and tested successfully on several problems. Extension of the computer program to 3 or more dimensions will be easy due to the direct product type difference scheme. (author)
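HIDM combines an implicit difference scheme with multiple shooting; as a much simpler illustration of the shooting idea alone, the sketch below solves the linear two-point boundary value problem y'' = −y, y(0) = 0, y(1) = 1 (exact solution sin(x)/sin(1)) by single shooting: integrate with RK4 and adjust the unknown initial slope by a secant iteration on the boundary defect.

```python
import numpy as np

def integrate(slope, h=0.01):
    """RK4 for the first-order system (y, v)' = (v, -y); returns y(1)."""
    def f(state):
        return np.array([state[1], -state[0]])
    s = np.array([0.0, slope])
    for _ in range(round(1.0 / h)):
        k1 = f(s)
        k2 = f(s + 0.5 * h * k1)
        k3 = f(s + 0.5 * h * k2)
        k4 = f(s + h * k3)
        s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[0]

def shoot(target=1.0, s0=0.0, s1=1.0, tol=1e-12):
    """Secant iteration on the boundary defect y(1) - target."""
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(50):
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
        s0, f0, s1, f1 = s1, f1, s2, integrate(s2) - target
        if abs(f1) < tol:
            break
    return s1

slope = shoot()   # should approximate 1/sin(1)
```

Because the test problem is linear, the secant step lands on the answer in one update; nonlinear problems need the full iteration, and stiff ones are what motivate multiple shooting.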
Energy Technology Data Exchange (ETDEWEB)
Faydide, B. [Commissariat a l`Energie Atomique, Grenoble (France)
1997-07-01
This paper presents the current and planned numerical development for improving computing performance in case of Cathare applications needing real time, like simulator applications. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the general characteristics of the code are presented, dealing with physical models, numerical topics, and validation strategy. Then, the current and planned applications of Cathare in the field of simulators are discussed. Some of these applications were made in the past, using a simplified and fast-running version of Cathare (Cathare-Simu); the status of the numerical improvements obtained with Cathare-Simu is presented. The planned developments concern mainly the Simulator Cathare Release (SCAR) project which deals with the use of the most recent version of Cathare inside simulators. In this frame, the numerical developments are related with the speed up of the calculation process, using parallel processing and improvement of code reliability on a large set of NPP transients.
International Nuclear Information System (INIS)
Yudov, Y.V.
2001-01-01
The functional part of the KORSAR computer code is based on the computational unit for the reactor system thermal-hydraulics and other thermal power systems with water cooling. The two-phase flow dynamics of the thermal-hydraulic network is modelled by KORSAR in a one-dimensional two-fluid (non-equilibrium and nonhomogeneous) approximation with the same pressure for both phases. Each phase is characterized by parameters averaged over the channel sections and described by the conservation equations for mass, energy and momentum. The KORSAR computer code relies upon a novel approach to mathematical modelling of two-phase dispersed-annular flows. This approach allows a two-fluid model to differentiate the effects of the liquid film and droplets in the gas core on the flow characteristics. A semi-implicit numerical scheme has been chosen for deriving discrete analogs of the conservation equations in KORSAR. In the semi-implicit numerical scheme, solution of the finite-difference equations is reduced to the problem of determining the pressure field at a new time level. For the one-channel case, the pressure field is found from the solution of a system of linear algebraic equations by using the tri-diagonal matrix method. In the branched network calculation, the matrix of coefficients in the equations describing the pressure field is no longer tri-diagonal but has a sparse structure. In this case, the system of linear equations for the pressure field can be solved with any of the known classical methods. Such an approach is implemented in the existing best-estimate thermal-hydraulic computer codes (TRAC, RELAP5, etc.). For the KORSAR computer code, we have developed a new non-iterative method for calculating the pressure field in a network of any topology. This method is based on the tri-diagonal matrix method and performs well when solving thermal-hydraulic network problems. (author)
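The tri-diagonal matrix method mentioned above (often called the Thomas algorithm) solves a tri-diagonal linear system in O(n) by one forward elimination and one back substitution. A generic sketch, independent of KORSAR's pressure equations:

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tri-diagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        den = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / den if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / den
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve on a diagonally dominant system
rng = np.random.default_rng(0)
sub, main, sup = rng.random(6), 4.0 + rng.random(6), rng.random(6)
rhs = rng.random(6)
x = thomas(sub, main, sup, rhs)
```

The algorithm is stable without pivoting for diagonally dominant matrices, which pressure-field discretizations of this kind typically produce.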
International Nuclear Information System (INIS)
Ko, Soon Heum; Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel; Jha, Shantenu
2014-01-01
Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than the traditional continuum-based simulation techniques. It also involves less computational cost than pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce the sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.
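The two noise-reduction ideas named above, averaging over independent replicas and linear regression over temporal samples, can be sketched on synthetic data. The drift and noise levels below are invented for illustration; in the hybrid method the "noise" would be thermal fluctuations in a bin-averaged MD velocity.

```python
import numpy as np

rng = np.random.default_rng(1)
true_slope, true_v0 = 0.02, 0.5           # hypothetical slow drift in a sampled velocity
t = np.arange(200.0)

# 8 independent replicas: the same signal buried in strong thermal noise
replicas = true_v0 + true_slope * t + rng.normal(0.0, 1.0, size=(8, t.size))

mean_signal = replicas.mean(axis=0)       # replica (ensemble) averaging: noise / sqrt(8)
slope, v0 = np.polyfit(t, mean_signal, 1) # linear regression over temporal samples
```

Replica averaging cuts the noise by the square root of the replica count, and the regression then exploits all temporal samples at once instead of a single time average, which is why the combination recovers small drifts that a single noisy replica would hide.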
Farkas, Árpád; Balásházy, Imre
2015-04-01
A more exact determination of dose conversion factors associated with radon progeny inhalation has become possible due to advancements in epidemiological health risk estimates in recent years. The enhancement of computational power and the development of numerical techniques allow computing dose conversion factors with increasing reliability. The objective of this study was to develop an integrated model and software based on a self-developed airway deposition code, an own bronchial dosimetry model and the computational methods accepted by the International Commission on Radiological Protection (ICRP) to calculate dose conversion coefficients for different exposure conditions. The model was tested by its application for exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses.
Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G
Gaski, J. D.; Lewis, D. R.; Thompson, L. R.
1972-01-01
A new and versatile method has been developed to supplement or replace use of the original CINDA thermal analyzer program in order to take advantage of the improved systems software and machine speeds of third generation computers. The CINDA-3G program options offer a variety of methods for the solution of thermal analog models presented in network format.
International Nuclear Information System (INIS)
Courageot, Estelle
2010-01-01
After a description of the context of radiological accidents (definition, history, context, exposure types, associated clinical symptoms of irradiation and contamination, medical treatment, return on experience) and a presentation of dose assessment in the case of external exposure (clinical, biological and physical dosimetry), this research thesis describes the principles of numerical reconstruction of a radiological accident, presents some computation codes (Monte Carlo codes, the MCNPX code) and the SESAME tool, and reports an application to an actual case (an accident which occurred in Ecuador in April 2009). The next part reports the developments performed to modify the posture of voxelized phantoms, together with their experimental and numerical validations. The last part reports a feasibility study for the reconstruction of radiological accidents occurring in external radiotherapy. This work is based on a Monte Carlo simulation of a linear accelerator, with the aim of identifying the most relevant parameters to be implemented in SESAME in the case of external radiotherapy
International Nuclear Information System (INIS)
Sabchevski, S; Zhelyazkov, I; Benova, E; Atanassov, V; Dankov, P; Thumm, M; Arnold, A; Jin, J; Rzesnicki, T
2006-01-01
Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes may provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they are treated, and the basic features of the numerical schemes used. We then discuss the applicability of several commercially available and free software packages, with their advantages and drawbacks, for solving QO-related problems.
Bao, Weizhu
2013-01-01
We propose a simple, efficient, and accurate numerical method for simulating the dynamics of rotating Bose-Einstein condensates (BECs) in a rotational frame with or without long-range dipole-dipole interaction (DDI). We begin with the three-dimensional (3D) Gross-Pitaevskii equation (GPE) with an angular momentum rotation term and/or long-range DDI, state the two-dimensional (2D) GPE obtained from the 3D GPE via dimension reduction under anisotropic external potential, and review some dynamical laws related to the 2D and 3D GPEs. By introducing a rotating Lagrangian coordinate system, the original GPEs are reformulated to GPEs without the angular momentum rotation, which is replaced by a time-dependent potential in the new coordinate system. We then cast the conserved quantities and dynamical laws in the new rotating Lagrangian coordinates. Based on the new formulation of the GPE for rotating BECs in the rotating Lagrangian coordinates, a time-splitting spectral method is presented for computing the dynamics of rotating BECs. The new numerical method is explicit, simple to implement, unconditionally stable, and very efficient in computation. It is spectral-order accurate in space and second-order accurate in time and conserves the mass on the discrete level. We compare our method with some representative methods in the literature to demonstrate its efficiency and accuracy. In addition, the numerical method is applied to test the dynamical laws of rotating BECs such as the dynamics of condensate width, angular momentum expectation, and center of mass, and to investigate numerically the dynamics and interaction of quantized vortex lattices in rotating BECs without or with the long-range DDI.
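The time-splitting idea can be illustrated in a much-reduced setting than the paper's 2D/3D rotating-frame scheme: a 1D GPE without rotation, where a Strang splitting alternates an exact Fourier-space solve of the kinetic term with a pointwise solve of the potential/nonlinear term. Grid size, interaction strength g and the harmonic trap below are illustrative assumptions.

```python
import numpy as np

# 1D GPE: i psi_t = -1/2 psi_xx + V(x) psi + g |psi|^2 psi, Strang splitting
n, L = 256, 16.0
x = (np.arange(n) - n // 2) * (L / n)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x**2                        # harmonic trap (assumed)
g = 1.0                               # interaction strength (assumed)
dt = 1e-3

psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / n))     # normalize mass to 1

half_kinetic = np.exp(-0.5j * dt * 0.5 * k**2)       # exp(-i (dt/2) k^2/2)
for _ in range(1000):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))    # half kinetic step
    psi *= np.exp(-1j * dt * (V + g * np.abs(psi)**2))   # potential + nonlinearity
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))    # half kinetic step

mass = np.sum(np.abs(psi)**2) * (L / n)
print(mass)  # the splitting conserves the discrete mass: stays at 1 to roundoff
```

Each sub-step multiplies the wave function by unit-modulus factors in either real or Fourier space, which is why the discrete mass is conserved exactly, as the abstract emphasizes.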
On the numerical computation of Q.C.D. mass spectrum: an introduction
International Nuclear Information System (INIS)
Marinari, E.
1983-09-01
Exploiting MC techniques for analyzing a lattice gauge theory coupled to fermionic matter fields, we quote here the three main difficulties. The first is the anticommuting character of the fermionic fields, which implies a strong non-locality of the effective action obtained by integrating out the fermionic fields. The second point is that the bare quark mass is not allowed, on a finite lattice, to be arbitrarily small, since it controls the correlation length of the fermionic sector of the theory; the possible way out consists in working with unphysically large quark masses and eventually trying to extrapolate. Last, we have to face the so-called doubling problem: the discretization of the continuum theory causes a number of unwanted fermionic species to appear, and/or explicitly breaks the chiral invariance of the theory.
Rigoni, Daniele; Morganti, Francesca; Braibanti, Paride
2017-01-01
Facing a stressor involves a cardiac vagal tone response and a feedback effect produced by social interaction in visceral regulation. This study evaluated the contribution of baseline vagal tone and of social engagement system (SES) functioning to the ability to deal with a stressor. Participants (n = 70) were grouped into a minimized social interaction condition (procedure administered through a PC) and a social interaction condition (procedure administered by an experimenter). The State Trait Anxiety Inventory, the Social Interaction Anxiety Scale, the Emotion Regulation Questionnaire and a debriefing questionnaire were completed by the subjects. Cardiac vagal tone was registered during the baseline, stressor and recovery phases. The collected results highlighted a significant effect of the baseline vagal tone on vagal suppression. No effect of the minimized vs. social interaction conditions on cardiac vagal tone during the stressor and recovery phases was detected. Cardiac vagal tone and the results of the questionnaires appear not to be correlated. The study highlighted the main role of baseline vagal tone in visceral regulation. Some remarks on the SES to be deepened in further research were raised.
Physical models and numerical methods of the reactor dynamic computer program RETRAN
International Nuclear Information System (INIS)
Kamelander, G.; Woloch, F.; Sdouz, G.; Koinig, H.
1984-03-01
This report describes the physical models and the numerical methods of the reactor dynamics code RETRAN, which simulates reactivity transients in light-water reactors. The neutron-physics part of RETRAN is based on the two-group diffusion equations, which are solved by a discretization similar to the TWIGL method. An exponential transformation is applied, and the inner iterations are accelerated by a coarse-mesh rebalancing procedure. The thermo-hydraulic model approximates the equation of state by a built-in steam-water table and provides options for the calculation of heat-conduction and heat-transfer coefficients. (Author) [de
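As a much-simplified illustration of the neutronics side of such codes (one energy group and plain power iteration on a bare slab, rather than RETRAN's two-group TWIGL-type scheme with coarse-mesh rebalancing), the sketch below computes the multiplication factor k of a slab reactor; all cross-sections are assumed values.

```python
import numpy as np

# One-group slab toy problem: -D phi'' + Sa phi = (1/k) nuSf phi, phi = 0 at edges
n, a = 100, 100.0                        # interior mesh points, slab width (cm)
h = a / (n + 1)
D, sig_a, nu_sig_f = 1.0, 0.10, 0.12     # assumed one-group constants

# Loss operator A = -D d2/dx2 + Sa on the mesh (tridiagonal)
A = (np.diag(np.full(n, 2 * D / h**2 + sig_a))
     + np.diag(np.full(n - 1, -D / h**2), 1)
     + np.diag(np.full(n - 1, -D / h**2), -1))

# Power iteration on the fission source: phi <- A^-1 (nuSf phi), k <- source ratio
phi, k = np.ones(n), 1.0
for _ in range(400):
    y = np.linalg.solve(A, nu_sig_f * phi)
    k = y.sum() / phi.sum()
    phi = y / k

# k should match the analytic bare-slab result nuSf / (Sa + D (pi/a)^2)
print(k)
```

The outer (source) iteration shown here is what acceleration schemes such as coarse-mesh rebalancing are designed to speed up in production codes.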
Computation of Nonlinear Backscattering Using a High-Order Numerical Method
Fibich, G.; Ilan, B.; Tsynkov, S.
2001-01-01
The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.
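The fourth-order accuracy referred to above comes from wider finite-difference stencils. As a standalone illustration (unrelated to the paper's actual NLH solver and two-way ABCs), the sketch below applies the standard five-point fourth-order stencil for the second derivative and checks the convergence rate on a smooth function.

```python
import numpy as np

def d2_4th(u, h):
    """Fourth-order central difference for u'' at interior points."""
    return (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2] + 16 * u[3:-1] - u[4:]) / (12 * h**2)

errs = []
for n in (64, 128):
    x = np.linspace(0.0, 2 * np.pi, n + 1)
    h = x[1] - x[0]
    approx = d2_4th(np.sin(x), h)
    errs.append(np.max(np.abs(approx + np.sin(x[2:-2]))))   # exact u'' = -sin(x)

order = np.log2(errs[0] / errs[1])   # observed convergence order
print(order)
```

Halving the mesh reduces the error by roughly a factor of 16, confirming fourth-order convergence; the paper's scheme achieves this accuracy for the full Helmholtz boundary value problem.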
International Nuclear Information System (INIS)
Iooss, B.
2009-01-01
The present document constitutes my Habilitation thesis report. It reviews my scientific activity over the last twelve years, from my PhD thesis to the work completed as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both related to the treatment of uncertainty in engineering problems. The first chapter is a synthesis of my work on high-frequency wave propagation in random media. It relates more specifically to the study of the statistical fluctuations of acoustic wave travel-times in random and/or turbulent media. The new results mainly concern the introduction of the statistical anisotropy of the velocity field into the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily driven by requirements in geophysics (oil exploration and seismology). The second chapter is concerned with probabilistic techniques for studying the effect of input-variable uncertainties in numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out on the statistical methods of sensitivity analysis and global exploration of numerical models. The construction and use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally addressed: the estimation of high quantiles of a computer code output, and the analysis of stochastic computer codes. We conclude this memoir with some perspectives on numerical simulation and the use of predictive models in industry. This context is extremely positive for future research and application developments. (author)
International Nuclear Information System (INIS)
Geroyannis, V.S.
1990-01-01
In this paper, a numerical method called the complex-plane strategy is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model within the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are only slightly distorted. For an accurate simulation of differential rotation, a versatile method called the multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs
Directory of Open Access Journals (Sweden)
Tanja eKäser
2013-08-01
Full Text Available This article presents the design and a first pilot evaluation of the computer-based training program Calcularis for children with developmental dyscalculia (DD) or difficulties in learning mathematics. The program has been designed according to insights into the typical and atypical development of mathematical abilities. The learning process is supported through multimodal cues, which encode different properties of numbers. To offer optimal learning conditions, a user model completes the program and allows flexible adaptation to a child's individual learning and knowledge profile. 32 children with difficulties in learning mathematics completed the 6- to 12-week computer training. The children played the game for 20 minutes per day, 5 days a week. The training effects were evaluated using neuropsychological tests. In general, children benefited significantly from the training regarding number representation and arithmetic operations. Furthermore, children liked playing with the program and reported that the training improved their mathematical abilities.
International Nuclear Information System (INIS)
Abe, H.; Okuda, H.
1994-06-01
We study linear and nonlinear properties of a new computer simulation model developed to study the propagation of electromagnetic waves in a dielectric medium in the linear and nonlinear regimes. The model is constructed by combining a microscopic model used in the semi-classical approximation for the dielectric media and the particle model developed for the plasma simulations. It is shown that the model may be useful for studying linear and nonlinear wave propagation in the dielectric media
An efficient and general numerical method to compute steady uniform vortices
Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.
2011-07-01
Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.
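The abstract above builds on Newton iteration for the steady vorticity equation. The bare Newton machinery, stripped of the paper's inverse-velocity discretization and Casimir constraints, can be sketched on a two-unknown system (the equations and starting guess here are illustrative, not taken from the paper):

```python
import numpy as np

def F(v):
    """Residual of a 2x2 nonlinear system: a circle intersected with a line."""
    x, y = v
    return np.array([x**2 + y**2 - 4.0,   # circle of radius 2
                     x - y])              # line x = y

def J(v):
    """Analytic Jacobian of F."""
    x, y = v
    return np.array([[2 * x, 2 * y],
                     [1.0,  -1.0]])

v = np.array([1.0, 0.5])                  # initial guess
for _ in range(20):
    step = np.linalg.solve(J(v), F(v))    # Newton step: J dv = F
    v -= step
    if np.linalg.norm(step) < 1e-12:
        break

print(v)  # converges to (sqrt(2), sqrt(2))
```

Quadratic convergence makes Newton iteration attractive, but, as the paper notes, its cost and robustness hinge on the discretization and on removing degeneracies of the underlying equations.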
Energy Technology Data Exchange (ETDEWEB)
Li, W.; Wan, Z.; Jiang, F.; Jia, P. [Beijing Science and Technology University, Beijing (China)
2008-07-01
Stability control of the top-coal at the head face is one of the key techniques for realising high production and high efficiency in fully mechanized top-coal caving faces. The characteristics of the stress in the overlying strata of the fully mechanized top-coal caving face and in the top coal were analysed using FLAC3D. The results show that the tip-to-face top-coal undergoes a large deformation while it is in the stress-relaxed area. The top-coal in front of the wall becomes a failure area under the effect of the abutment pressure that spreads over the coal seam. The surrounding rock of the upper face end is the key region for strengthening the control of rib spalling. The first and most frequent instability phenomenon of the powered supports is slipping of the rear base of the hydraulic supports at the top of the face. Increasing the quality of support, among other measures, can maintain the stability of the surrounding rock. 4 refs., 7 figs., 1 tab.
1989-01-01
... from which we deduce: ||u^(n+1)|| <= ||u^n|| + 2M Δt Δx, and indeed the expected estimate: ||u^(n+1)|| <= ||u^0|| + (2MT) Δx, since nΔt <= T ... the propagation of a planar premixed flame with one-step chemistry. In this case, diffusive and reactive terms are added to the energy and species ... to use exceedingly fine computational scales, to resolve the chemistry and internal fluid layers fully (which would normally be prohibitive in a large
International Nuclear Information System (INIS)
Abe, H.; Okuda, H.
1994-06-01
Soliton propagation in the dielectric media has been simulated by using the nonlinear Lorentz computational model, which was recently developed to study the propagation of electromagnetic waves in a linear and a nonlinear dielectric. The model is constructed by combining a microscopic model used in the semi-classical approximation for dielectric media and the particle model developed for the plasma simulations. The carrier wave frequency is retained in the simulation so that not only the envelope of the soliton but also its phase can be followed in time. It is shown that the model may be useful for studying pulse propagation in the dielectric media
International Nuclear Information System (INIS)
Corsi, F.
1985-01-01
In connection with the design of nuclear reactor components operating at elevated temperature, design criteria require a degree of realism in the prediction of inelastic structural behaviour. This leads to the necessity of developing nonlinear computer programmes and, as a consequence, to the problems of verification and qualification of these tools. Benchmark calculations make it possible to carry out these two actions, bringing at the same time an increased level of confidence in the analysis of complex phenomena and in inelastic design calculations. With the financial and programmatic support of the Commission of the European Communities (CEE), a programme of elasto-plastic benchmark calculations relevant to the design of structural components for LMFBR has been undertaken by those Member States which are developing a fast reactor project. Four principal progressive aims were initially identified, which led to the decision to subdivide the benchmark effort into a series of four sequential calculation steps: Steps 1 to 4. The present document summarizes Step 1 of the benchmark exercise, derives some conclusions on Step 1 by comparing the results obtained with the various codes, and offers some concluding comments on this first action. It should be pointed out that even though the work was designed to test the capabilities of the computer codes, another aim was to increase the skill of the users concerned
Dhawan, Anuj; Norton, Stephen J; Gerhold, Michael D; Vo-Dinh, Tuan
2009-06-08
This paper describes a comparative study of finite-difference time-domain (FDTD) and analytical evaluations of electromagnetic fields in the vicinity of dimers of metallic nanospheres of plasmonics-active metals. The results of these two computational methods, to determine electromagnetic field enhancement in the region often referred to as "hot spots" between the two nanospheres forming the dimer, were compared and a strong correlation observed for gold dimers. The analytical evaluation involved the use of the spherical-harmonic addition theorem to relate the multipole expansion coefficients between the two nanospheres. In these evaluations, the spacing between two nanospheres forming the dimer was varied to obtain the effect of nanoparticle spacing on the electromagnetic fields in the regions between the nanostructures. Gold and silver were the metals investigated in our work as they exhibit substantial plasmon resonance properties in the ultraviolet, visible, and near-infrared spectral regimes. The results indicate excellent correlation between the two computational methods, especially for gold nanosphere dimers with only a 5-10% difference between the two methods. The effect of varying the diameters of the nanospheres forming the dimer, on the electromagnetic field enhancement, was also studied.
M. Kasemann
Overview During the past three months, activities focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
Numerical computation of underwater explosions due to fuel-coolant interactions
International Nuclear Information System (INIS)
Lee, J.H.S.; Frost, D.L.; Knystautas, R.; Teodorczyk, A.; Ciccarelli, G.; Thibault, P.; Penrose, J.
1989-03-01
If coarse molten material is released into a coolant the possibility exists for a violent steam explosion. A detailed quantitative description of the processes involved in steam explosions is currently beyond the capabilities of the scientific community. However, a conservative estimate of the pressure transients resulting from a steam explosion can be obtained by studying the dynamics of the shock associated with the expansion of a high-pressure vapour bubble. In this study, the hydrodynamic equations governing the shock propagation of an expanding bubble were integrated numerically using the Flux Corrected Transport code. Simpler acoustic models based on experience with underwater explosions were also developed and used to estimate pressure transients and to calculate the peak pressures for benchmark cases. The results were found to be an order of magnitude higher than the corresponding pressures obtained using a complex model developed by Henry. A simplified version of the Henry model was developed by neglecting the complex description of the two-phase flow inside the ruptured tube and the arbitrarily assumed heat transfer and condensation rates. Results from the simplified model were found to be generally similar to, but had higher peak pressures than those obtained using the Henry model. It is concluded that the results produced by simple acoustic models, or by a simplified Henry model, are more conservative than the corresponding results obtained with the original Henry model
Directory of Open Access Journals (Sweden)
Nikesh S. Dattani
2012-03-01
Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, one that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited; therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.
Directory of Open Access Journals (Sweden)
Jiang Lei
2015-01-01
Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction, multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU-based LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set to 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures of the CRVP, shear-layer vortices and horseshoe vortices are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of the Reynolds stress are also displayed. Coherent structures are revealed in a very fine resolution based on the second invariant of the velocity gradients.
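The DNS above uses a full multi-dimensional LBM on GPUs; the CPU sketch below shrinks the method to its essentials. A D1Q2 lattice (two populations streaming left and right) relaxes toward a zero-velocity equilibrium via a BGK collision, which solves a 1D diffusion equation and conserves mass exactly; the lattice size, relaxation rate and initial bump are assumed values.

```python
import numpy as np

n, omega = 200, 1.2                 # lattice sites, BGK relaxation rate (assumed)
rho = np.ones(n)
rho[90:110] = 2.0                   # initial density bump
f_plus = 0.5 * rho                  # population moving right
f_minus = 0.5 * rho                 # population moving left

mass0 = rho.sum()
for _ in range(500):
    rho = f_plus + f_minus
    feq = 0.5 * rho                 # D1Q2 equilibrium (zero net flow)
    f_plus += omega * (feq - f_plus)      # BGK collision step
    f_minus += omega * (feq - f_minus)
    f_plus = np.roll(f_plus, 1)           # streaming step (periodic boundaries)
    f_minus = np.roll(f_minus, -1)

rho = f_plus + f_minus
print(rho.max(), rho.sum() - mass0)   # bump has diffused; mass unchanged
```

The stream-collide structure is fully local, which is exactly the property that maps so well onto the SIMT parallelism of GPUs in the work described above.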
Computational Analysis of Igbo Numerals in a Number-to-text Conversion System
Directory of Open Access Journals (Sweden)
Olufemi Deborah NINAN
2017-12-01
Full Text Available A system for converting Arabic numerals to their textual equivalents is an important tool in Natural Language Processing (NLP), especially in high-level speech processing and machine translation. Such systems are scarcely available for most African languages, including the Igbo language. This translation system is essential, as Igbo is one of the three major Nigerian languages and is feared to be among the endangered African languages. The system was designed using sequence diagrams as well as activity diagrams, and implemented using the Python programming language and PyQt. The qualitative evaluation was done by administering questionnaires to selected native Igbo speakers and experts, asking them to provide the preferred representation of some random numbers. The responses were compared with the output of the system. The result of the qualitative evaluation showed that the system was able to generate correct and accurate representations for numbers between 1 and 1000 in the Igbo language, this being the scope of the study. The resulting system can serve as an effective tool for teaching and learning the Igbo language.
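A number-to-text converter of this kind typically decomposes a numeral recursively over the language's scale words. The sketch below shows only the algorithmic skeleton, with English placeholder words and a purely regular "scale word + remainder" grammar; the actual system would substitute the Igbo lexicon and its composition rules, which are not reproduced here.

```python
# Placeholder lexicon; a real system would plug in the target language's words
UNITS = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
         6: "six", 7: "seven", 8: "eight", 9: "nine"}
SCALES = ((1000, "thousand"), (100, "hundred"), (10, "ten"))

def number_to_text(n):
    """Schematic recursive number-to-text conversion for 0 <= n <= 1000."""
    if n == 0:
        return "zero"
    parts = []
    for base, word in SCALES:
        q, n = divmod(n, base)
        if q:
            prefix = number_to_text(q) + " " if q > 1 else ""
            parts.append(prefix + word)       # e.g. "three hundred"
    if n:
        parts.append(UNITS[n])                # trailing units digit
    return " ".join(parts)

print(number_to_text(345))   # -> "three hundred four ten five"
```

The recursion bottoms out in the unit words, so extending the scope beyond 1000 only requires adding larger scale entries to `SCALES`.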
Computer programs for the numerical modelling of water flow in rock masses
International Nuclear Information System (INIS)
Croney, P.; Richards, L.R.
1985-08-01
Water flow in rock joints provides a very important possible route for the migration of radionuclides from radioactive waste within a repository back to the biosphere. Two computer programs, DAPHNE and FPM, have been developed to model two-dimensional fluid flow in jointed rock masses. They have been developed to run on microcomputer systems suitable for field locations. The fluid flows in a number of jointed rock systems have been examined and certain controlling functions identified. A methodology has been developed for assessing the anisotropic permeability of jointed rock. A number of examples of unconfined flow into surface and underground openings have been analysed, and groundwater lowering, pore water pressures and flow quantities predicted. (author)
Stochastic processes, multiscale modeling, and numerical methods for computational cellular biology
2017-01-01
This book focuses on the modeling and mathematical analysis of stochastic dynamical systems along with their simulations. The collected chapters will review fundamental and current topics and approaches to dynamical systems in cellular biology. This text aims to develop improved mathematical and computational methods with which to study biological processes. At the scale of a single cell, stochasticity becomes important due to low copy numbers of biological molecules, such as mRNA and proteins, that take part in the biochemical reactions driving cellular processes. When trying to describe such biological processes, the traditional deterministic models are often inadequate, precisely because of these low copy numbers. This book presents stochastic models, which are necessary to account for small particle numbers and extrinsic noise sources. The complexity of these models depends upon whether the biochemical reactions are diffusion-limited or reaction-limited. In the former case, one needs to adopt the framework of s...
Numerical computation of the linear stability of the diffusion model for crystal growth simulation
Energy Technology Data Exchange (ETDEWEB)
Yang, C.; Sorensen, D.C. [Rice Univ., Houston, TX (United States); Meiron, D.I.; Wedeman, B. [California Institute of Technology, Pasadena, CA (United States)
1996-12-31
We consider a computational scheme for determining the linear stability of a diffusion model arising from the simulation of crystal growth. The process of a needle crystal solidifying into some undercooled liquid can be described by the dual diffusion equations with appropriate initial and boundary conditions. Here U_t and U_a denote the temperature of the liquid and solid, respectively, and α represents the thermal diffusivity. At the solid-liquid interface, the motion of the interface, denoted by r, and the temperature field are related by a conservation relation, where n is the unit outward-pointing normal to the interface. A basic stationary solution to this free boundary problem can be obtained by writing the equations of motion in a moving frame and transforming the problem to parabolic coordinates. This is known as the Ivantsov parabola solution. Linear stability theory applied to this stationary solution gives rise to an eigenvalue problem of the form.
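After discretization, a linear stability analysis of this kind reduces to an eigenvalue problem: the sign of the rightmost eigenvalue decides stability. The sketch below illustrates this on a simple self-adjoint stand-in (a 1D diffusion operator with Dirichlet conditions, not the crystal-growth operator), whose rightmost eigenvalue is known to be close to -π².

```python
import numpy as np

# Discretize u_t = u_xx on (0,1) with u = 0 at both ends: u_t = A u
n = 100
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

eigvals = np.linalg.eigvalsh(A)     # symmetric matrix -> real spectrum
growth_rate = eigvals.max()         # rightmost eigenvalue governs stability

# Discrete analogue of -pi^2, the slowest-decaying diffusion mode: stable
print(growth_rate)
```

For the nonsymmetric operators that arise in crystal-growth stability, the same question is posed but requires more care (and typically iterative eigensolvers rather than a dense `eigvalsh`).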
Dudding-Byth, Tracy; Baxter, Anne; Holliday, Elizabeth G; Hackett, Anna; O'Donnell, Sheridan; White, Susan M; Attia, John; Brunner, Han; de Vries, Bert; Koolen, David; Kleefstra, Tjitske; Ratwatte, Seshika; Riveros, Carlos; Brain, Steve; Lovell, Brian C
2017-12-19
Massively parallel genetic sequencing allows rapid testing of known intellectual disability (ID) genes. However, the discovery of novel syndromic ID genes requires molecular confirmation in at least a second individual, or a cluster of individuals, with an overlapping phenotype or similar facial gestalt. Using computer face-matching technology we report an automated approach to matching the faces of non-identical individuals with the same genetic syndrome within a database of 3681 images [1600 images of one of 10 genetic syndrome subgroups together with 2081 control images]. Using the leave-one-out method, two research questions were specified: 1) Using two-dimensional (2D) photographs of individuals with one of 10 genetic syndromes within a database of images, did the technology correctly identify more than expected by chance: i) a top match? ii) at least one match within the top five matches? or iii) at least one in the top 10 with an individual from the same syndrome subgroup? 2) Was there concordance between correct technology-based matches and whether two out of three clinical geneticists would have considered the diagnosis based on the image alone? The computer face-matching technology correctly identified a top match, at least one correct match in the top five, and at least one in the top 10 more often than expected by chance for the syndromes studied, with the exception of Kabuki syndrome. Although the accuracy of the computer face-matching technology was tested on images of individuals with known syndromic forms of intellectual disability, the results of this pilot study illustrate the potential utility of face-matching technology within deep phenotyping platforms to facilitate the interpretation of DNA sequencing data for individuals who remain undiagnosed despite testing of the known developmental disorder genes.
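The leave-one-out matching protocol can be mimicked with synthetic feature vectors: each "image" is a point near one of several syndrome-group centroids, and a query counts as a hit if any of its k nearest neighbours (excluding itself) shares its group. The feature model here is entirely made up; real systems extract facial embeddings from photographs.

```python
import numpy as np

rng = np.random.default_rng(0)
groups, per_group, dim = 5, 20, 16
centroids = rng.normal(size=(groups, dim)) * 3.0            # well-separated groups
feats = np.concatenate([c + 0.5 * rng.normal(size=(per_group, dim))
                        for c in centroids])                # 100 synthetic "images"
labels = np.repeat(np.arange(groups), per_group)

def topk_hit(i, k):
    """Leave-one-out: does any of the k nearest neighbours share image i's group?"""
    d = np.linalg.norm(feats - feats[i], axis=1)
    d[i] = np.inf                                           # exclude the query itself
    return labels[i] in labels[np.argsort(d)[:k]]

top1 = np.mean([topk_hit(i, 1) for i in range(len(feats))])
top5 = np.mean([topk_hit(i, 5) for i in range(len(feats))])
print(top1, top5)
```

With tight clusters both rates sit far above the roughly 19% leave-one-out chance level for five equal groups; comparing against that chance level is the statistical test the study describes.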
Scott, L Ridgway
2011-01-01
Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from most textbooks. Using an inquiry-based learning approach, Numerical Analysis is written in a narrative style, provides historical background, and includes many of the proofs and technical details in exercises. Students will be able to go beyond an elementary understanding of numerical simulation and develop deep insights into the foundations of the subject. They will no longer have to accept the mathematical gaps that ex...
Directory of Open Access Journals (Sweden)
Miroslav Kališnik
2011-05-01
In the introduction, the evolution of methods for the numerical density estimation of particles is briefly presented. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods, and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations, a model of randomly distributed equal spheres with maximal contrast against their surroundings was used. According to our computer simulations, all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as the histotechnical and counting procedures necessary for performing the individual counting methods. However, it is evident that not all practical problems can be solved efficiently with models.
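The unbiasedness of disector-style counting that the simulations above examine can itself be illustrated in a few lines. The sketch below is our illustration, not the authors' simulation (sphere radius, specimen depth, and slab thickness are arbitrary): each sphere is counted in exactly one slab, by its topmost point, so the true total is recovered exactly.

```python
# Illustrative sketch of an unbiased counting rule resembling the physical
# disector: a particle is counted only in the slab that contains its
# topmost point, so every sphere is counted exactly once.

import random

random.seed(1)
R = 0.5                      # sphere radius (arbitrary units)
DEPTH = 100.0                # extent of the specimen along z
N_TRUE = 1000                # true number of spheres
# z-coordinate of the topmost point of each randomly placed sphere.
tops = [random.uniform(0.0, DEPTH) + R for _ in range(N_TRUE)]

slab_thickness = 5.0
# Partition the z-axis into disector slabs and count tops per slab.
counts = [0] * (int((DEPTH + 2 * R) // slab_thickness) + 1)
for z in tops:
    counts[int(z // slab_thickness)] += 1

estimated_total = sum(counts)
```

Because each top falls in exactly one slab, `estimated_total` equals the true count regardless of slab thickness; in practice, thickness affects efficiency and robustness, as the abstract notes, not the unbiasedness of the rule.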
Numerical and Computational Analysis of a New Vertical Axis Wind Turbine, Named KIONAS
Directory of Open Access Journals (Sweden)
Eleni Douvi
2017-01-01
This paper concentrates on a new configuration for a wind turbine, named KIONAS. The main purpose is to determine the performance and aerodynamic behavior of KIONAS, a vertical axis wind turbine with a stator over the rotor and the special feature that it can consist of several stages. Notably, the stator is shaped in such a way that it increases the velocity of the air impacting the rotor blades, and the performance of each stage increases with the total number of stages. The effects of wind velocity, the number of inclined rotor blades, the rotor diameter, the stator's shape and the number of stages on the performance of KIONAS were studied. A FORTRAN code was developed to predict the power in several cases by solving the equations of continuity and momentum. Further knowledge of the flow field was then obtained using a commercial Computational Fluid Dynamics code. Based on the results, it can be concluded that higher wind velocities and a greater number of blades produce more power. Furthermore, higher performance was found for a stator with curved guide vanes and for a KIONAS configuration with more stages.
Bringing numerous methods for expression and promoter analysis to a public cloud computing service.
Polanski, Krzysztof; Gao, Bo; Mason, Sam A; Brown, Paul; Ott, Sascha; Denby, Katherine J; Wild, David L
2018-03-01
Every year, a large number of novel algorithms are introduced to the scientific community for a myriad of applications, but using them across different research groups is often troublesome, due to suboptimal implementations and specific dependency requirements. This does not have to be the case, as public cloud computing services can easily house tractable implementations within self-contained dependency environments, making the methods easily accessible to a wider public. We have taken 14 popular methods, the majority related to expression data or promoter analysis, developed them to a good implementation standard, and housed the tools in isolated Docker containers, which we integrated into the CyVerse Discovery Environment, making them easily usable for a wide community as part of the CyVerse UK project. The integrated apps can be found at http://www.cyverse.org/discovery-environment, while the raw code is available at https://github.com/cyversewarwick and the corresponding Docker images are housed at https://hub.docker.com/r/cyversewarwick/. Contact: info@cyverse.warwick.ac.uk or D.L.Wild@warwick.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing
Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline
2017-11-01
Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
International Nuclear Information System (INIS)
Lee, Soon-Hwan; Chino, Masamichi
2000-01-01
The coupling of atmosphere and ocean models presents physical and computational difficulties for short-term forecasting of weather and ocean currents. In this research, a combined system of a high-resolution meso-scale atmospheric model and an ocean model has been constructed using a new message-passing library, called Stampi (Seamless Thinking Aid Message Passing Interface), for the prediction of particle dispersion during a nuclear accident emergency. Stampi, which is based on the MPI (Message Passing Interface) 2 specification, allows parallel calculations with the combined system without adding parallelization code to the models. It realizes dynamic process creation on different machines and communication between spawned processes within the scope of MPI semantics. The models included in this combined system are PHYSIC as the atmosphere model and POM (Princeton Ocean Model) as the ocean model. We applied this system to predict the sea surface current in the Sea of Japan in the winter season. Simulation results indicate that the wind stress near the sea surface tends to be the predominant factor determining surface ocean currents and the dispersion of radioactive contamination in the ocean. The surface ocean current corresponds well with the wind direction induced by the high mountains of North Korea. The satellite data of NSCAT (NASA SCATterometer), an image of the sea surface current, also agree well with the results of this system. (author)
Fukushima, Toshio
2017-06-01
Reviewed are recently developed methods for the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely-thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely-thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridder's algorithm of numerical differentiation (Ridder 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as Φ(\vec{x}) = -G \int_0^∞ ( \int_{-1}^1 ( \int_0^{2π} ρ(\vec{x}+\vec{q}) dψ ) dγ ) q dq, where \vec{q} = q (√{1-γ^2} cos ψ, √{1-γ^2} sin ψ, γ) is the relative position vector referred to \vec{x}, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential with 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation of the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface
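The double exponential (tanh-sinh) rule of Takahashi and Mori cited in (a) can be sketched in a few lines: the substitution x = tanh((π/2) sinh t) turns the trapezoidal rule in t into a rapidly converging quadrature on (-1, 1). The step size `h` and truncation `n` below are our illustrative choices, not values from the papers above.

```python
# Minimal tanh-sinh (double exponential) quadrature sketch over (-1, 1).
import math

def tanh_sinh_integrate(f, h=0.05, n=60):
    """Trapezoidal sum in t for x = tanh((pi/2) sinh(t))."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        s = math.sinh(t)
        x = math.tanh(0.5 * math.pi * s)
        # Weight = dx/dt, which decays double-exponentially in |t|.
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(0.5 * math.pi * s) ** 2
        total += f(x) * w
    return h * total

# Integral of exp(x) over (-1, 1) equals e - 1/e.
approx = tanh_sinh_integrate(math.exp)
```

A smooth integrand like e^x is already reproduced to high accuracy with this modest node count; the rule's real strength, exploited in the split quadrature method, is that the same sum also converges rapidly for integrands with endpoint singularities.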
Quantum Computing's Classical Problem, Classical Computing's Quantum Problem
Van Meter, Rodney
2013-01-01
Tasked with the challenge to build better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classica...
Directory of Open Access Journals (Sweden)
Nancy Noemí Terroni
2009-04-01
This work reports the results of reticular analysis of communication, and the discourse-assertiveness scores of participants in small groups solving a memory recall task (The War of the Ghosts, Bartlett, 1932). The design is quasi-experimental; the 90 participants, students from the Universidad Nacional de Mar del Plata, had to reconstruct the story collaboratively in groups. The subjects were randomly assigned to the groups and to the two conditions (face-to-face and computer-mediated groups). The interactions of the face-to-face groups were recorded on video, and the electronic ones were stored in the chat channel. In general, discourse assertiveness and communication presented significant associations, with some differences according to the communication channel used. These results are discussed in relation to the type of task and the restrictions of the electronic media.
Salomons, E.M.
1999-01-01
Downwind sound propagation over a noise screen is investigated by numerical computations and scale model experiments in a wind tunnel. For the computations, the parabolic equation method is used, with a range-dependent sound-speed profile based on wind-speed profiles measured in the wind tunnel and
International Nuclear Information System (INIS)
Rizzo, S.; Tomarchio, E.
2008-01-01
The analytical relations used to compute coincidence-summing effects on the spectral response of Ge semiconductor detectors are quite complex and involve full-energy-peak and total efficiencies. For point sources, a general method for calculating the correction factors for gamma-ray coincidences has been formulated by Andreev et al. and used by Schima and Hoppes to obtain γ-X K coincidence correction expressions for 17 nuclides. However, because the higher-order terms are neglected, the expressions supplied do not give reliable results in the case of short sample-detector distances. Using the formulae given by Morel et al. [3] and Lepy et al. [4], we have developed a computer program able to derive numerical expressions to compute γ-γ and γ-X K coincidence-summing corrections for point sources. Only full-energy-peak and total efficiencies are needed; alternatively, values of the peak-to-total ratio can be introduced. For extended sources, the same expressions can still be used with the introduction of 'effective efficiencies' as defined by Arnold and Sima, i.e. an average over the source volume of the spatial distribution of the elementary photon-source total efficiency, weighted by the corresponding peak efficiency. We have considered the most used calibration radioisotopes as well as fission products, activation products and environmental isotopes. All decay data were taken from the most recent volumes of the 'Table of Radionuclides', CEA Monographie BIPM-5, and a suitable matrix representation of each decay scheme was adopted. For the sake of brevity, we provide for each nuclide a set of expressions for the more intense gamma emissions, considered sufficient for most applications. However, numerical expressions are available for all the stored gamma transitions and can be obtained on request. As examples of the use of the expressions, the evaluation of correction values for point sources and a particulate sample reduced to a 6x6x0.7 cm packet - with reference
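To illustrate why such corrections matter at close geometries, consider the lowest-order "summing-out" correction for a simple two-step cascade (gamma1 promptly followed by gamma2). This textbook-level formula is our simplification for illustration, not the program's full matrix treatment: counts leave the gamma1 full-energy peak whenever gamma2 is also detected, so the measured area is divided by (1 - eps_t2).

```python
# First-order summing-out correction for a two-gamma cascade (illustrative).

def corrected_peak_area(measured_area, total_eff_gamma2):
    """Correct the gamma1 peak area for summing-out with gamma2.

    total_eff_gamma2 is the TOTAL detection efficiency for gamma2; it grows
    at short sample-detector distances, which is exactly the regime where
    low-order point-source expressions break down.
    """
    return measured_area / (1.0 - total_eff_gamma2)

# Example: at a close geometry with eps_t2 = 0.2, a measured area of 8000
# counts corresponds to 10000 true full-energy events for gamma1.
area = corrected_peak_area(8000.0, 0.2)
```

At large distances eps_t2 is small and the correction is negligible, which is why coincidence summing is primarily a close-geometry problem.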
Yamada, Takemine; Ichimura, Tsuyoshi; Hori, Muneo; Dobashi, Hiroshi; Ohbo, Naoto
Quasi non-linear 3D FEM earthquake response analyses under a level-2 earthquake are conducted for a ramp tunnel structure of the Tokyo metropolitan expressway central circular line, the Yamate tunnel. Large-scale numerical computation with solid elements is required to examine the seismic response of a large tunnel under a level-2 earthquake. The results are as follows: i) Under a level-2 earthquake, stress concentration in the ramp tunnel becomes great near the geological interface between two layers of high impedance contrast. ii) The response cannot be obtained as a superposition of two-dimensional responses, which is an assumption in conventional design methods, because the displacements along the tunnel axis at cross-sections of the ramp tunnel near the geological interface do not distribute linearly. iii) Evaluation of stress, in addition to section force, is desirable for correct evaluation of the three-dimensional response of the tunnel structure.
Directory of Open Access Journals (Sweden)
Alexander Lopato
2018-01-01
The work is dedicated to the numerical study of detonation wave initiation and propagation in a variable cross-section axisymmetric channel filled with a model hydrogen-air mixture. The channel models a large-scale device for the utilization of worn-out tires. The mathematical model is based on the two-dimensional axisymmetric Euler equations supplemented by a global chemical kinetics model. A finite-volume computational algorithm of second approximation order for the calculation of two-dimensional flows with detonation waves on fully unstructured grids with triangular cells is developed. Three geometrical configurations of the channel are investigated, each with a different degree of divergence of the conical part of the channel, compared in terms of the pressure exerted by the detonation wave on the end wall of the channel. The problem under consideration relates to the problem of waste recycling in devices based on detonation combustion of fuel.
Numerical methods in software and analysis
Rice, John R
1992-01-01
Numerical Methods, Software, and Analysis, Second Edition introduces science and engineering students to the methods, tools, and ideas of numerical computation. Introductory courses in numerical methods face a fundamental problem-there is too little time to learn too much. This text solves that problem by using high-quality mathematical software. In fact, the objective of the text is to present scientific problem solving using standard mathematical software. This book discusses numerous programs and software packages focusing on the IMSL library (including the PROTRAN system) and ACM Algorithm
Energy Technology Data Exchange (ETDEWEB)
Doroszko, M., E-mail: m.doroszko@pb.edu.pl; Seweryn, A., E-mail: a.seweryn@pb.edu.pl
2017-03-24
Microtomographic devices have limited imaging accuracy and are often insufficient for proper mapping of small details of real objects (e.g. elements of material mesostructures). This paper describes a new method developed to compensate for the inaccuracy of X-ray computed microtomography (micro-CT) in numerical modelling of the deformation process of porous sintered 316 L steel. The method involves modification of the microtomographic images in which the pore shapes are separated. The modification consists of the reconstruction of fissures and small pores omitted by micro-CT scanning due to the limited accuracy of the measuring device. It enables proper modelling of the tensile deformation process of porous materials. In addition, the proposed approach is compared to methods described in the available literature. As a result of the numerical calculations, stress and strain distributions were obtained in deformed sintered 316 L steel, from which macroscopic stress-strain curves were derived. The maximum principal stress distributions obtained by the proposed calculation model indicated the specific locations where the stress reached a critical value and fracture initiation occurred: bridges with small cross-sections and notches in the shape of pores. Based on the calculation results, the influence of the deformation mechanism of porous material mesostructures on their macroscale properties is described.
Energy Technology Data Exchange (ETDEWEB)
Pebay, Philippe [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Terriberry, Timothy B. [Xiph.Org Foundation, Arlington, VA (United States); Kolla, Hemanth [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Bennett, Janine [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)
2016-03-29
Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Formulas such as these, are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
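The flavor of these one-pass and pairwise formulas can be sketched for the second central moment (the names and structure below are our illustration, not the paper's code): a streaming update in the style of Welford's algorithm, plus a merge step for combining partial results from parallel processing elements.

```python
# Numerically stable one-pass (streaming) mean and second central moment M2,
# with a pairwise merge for parallel/distributed accumulation.

class StreamingMoments:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def push(self, x):
        """One-pass update with a single new observation."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def merge(self, other):
        """Pairwise update: combine two partial results."""
        combined = StreamingMoments()
        combined.n = self.n + other.n
        if combined.n == 0:
            return combined
        delta = other.mean - self.mean
        combined.mean = self.mean + delta * other.n / combined.n
        combined.m2 = (self.m2 + other.m2
                       + delta ** 2 * self.n * other.n / combined.n)
        return combined

    def variance(self):
        """Population variance M2/n."""
        return self.m2 / self.n if self.n else float("nan")

# Example: population variance of a small sample (true value is 4.0).
acc = StreamingMoments()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    acc.push(x)
```

The merge step is what makes this style attractive in distributed settings: each processing element accumulates only (n, mean, M2) locally, and just those three numbers are communicated, which is the scenario with high communication costs that the survey targets.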
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article shows the possibility, in principle, of using the computer program Univem MS for Mössbauer spectrum fitting as demonstration material when teaching disciplines such as atomic and nuclear physics and numerical methods. The program works with nuclear-physics parameters such as the isomer (or chemical) shift of the nuclear energy level, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least squares method. The deviation of the experimental points of the spectrum from the theoretical dependence is determined for concrete examples; in numerical methods this quantity is characterized as the mean square deviation. The shapes of the theoretical lines in the program are defined by Gaussian and Lorentzian distributions. The visualization of the studied material in atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis or X-ray diffraction analysis.
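A minimal sketch of the underlying numerical idea (synthetic data and parameters; this is not the Univem MS code): evaluate the mean square deviation between a Lorentzian absorption line and the data, and pick the parameter value that minimizes it.

```python
# Least-squares fitting of a single Lorentzian absorption line, the basic
# operation behind Mössbauer spectrum processing. All values are synthetic.

def lorentzian(v, center, width, depth, baseline):
    # Absorption line: baseline minus a Lorentzian dip at `center`.
    half = width / 2.0
    return baseline - depth * half ** 2 / ((v - center) ** 2 + half ** 2)

def mean_square_deviation(params, velocities, counts):
    c, w, d, b = params
    return sum((counts[i] - lorentzian(v, c, w, d, b)) ** 2
               for i, v in enumerate(velocities)) / len(velocities)

# Synthetic spectrum with known parameters (center = isomer shift -0.1).
true_params = (-0.1, 0.25, 500.0, 10000.0)
velocities = [-2.0 + 0.02 * i for i in range(201)]
counts = [lorentzian(v, *true_params) for v in velocities]

# Crude least-squares fit: scan the center (isomer shift) on a grid,
# holding the other parameters fixed at their true values.
best_center = min((-0.5 + 0.01 * k for k in range(101)),
                  key=lambda c: mean_square_deviation(
                      (c, 0.25, 500.0, 10000.0), velocities, counts))
```

Real fitting programs minimize over all parameters simultaneously (e.g. by Gauss-Newton or Levenberg-Marquardt) rather than scanning a grid, but the objective function — the mean square deviation — is the same.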
Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H
2008-01-01
A paradigm shift in the management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid-prototyping three-dimensional "printers". Laser Doppler imaging may be the sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish the feasibility of detecting perfusion through transparent face masks using the Laser Doppler Imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose-fitting mask with and without a silicone liner, and then with a tight-fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified, with significant changes in mean cutaneous blood flow beneath the face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.
Directory of Open Access Journals (Sweden)
Bin Xu
2014-01-01
After the Wenchuan earthquake in 2008, the Zipingpu concrete-faced rockfill dam (CFRD) was found to have dislocation between slabs of different construction stages, with a maximum value of 17 cm. This is a new damage pattern that had not occurred in previous seismic damage investigations. Slab dislocation gravely affects the seepage control system of the CFRD and even the safety of the dam. Therefore, investigation of the mechanism and development of slab dislocation is meaningful for the engineering design of CFRDs. In this study, based on previous studies by the authors, the slab dislocation phenomenon of the Zipingpu CFRD was investigated. The procedure and the constitutive models of the materials used for the finite element analysis are consistent. The water elevation, the angle, and the strength of the construction joints were the major variables of the investigation. The results indicated that a finite element procedure based on a modified generalized plasticity model and a perfect elastoplastic interface model can be used to evaluate the dislocation damage of face slabs of a concrete-faced rockfill dam during an earthquake. The effects of the water elevation, the angle, and the strength of the construction joints are issues of major design concern under seismic loading.
Robert Leckey
2013-01-01
This paper uses Queer theory, specifically literature on Bowers v. Hardwick, to analyze debates over legislation proposed in Quebec regarding covered faces. Queer theory sheds light on legal responses to the veil. Parliamentary debates in Quebec reconstitute the polity, notably as secular and united. The paper highlights the contradictory and unstable character of four binaries: legislative text versus social practice, act versus status, majority versus minority, and knowable versus unknowabl...
Yeganeh, N; Dillavou, C; Simon, M; Gorbach, P; Santos, B; Fonseca, R; Saraiva, J; Melo, M; Nielsen-Saines, K
2013-04-01
Audio computer-assisted survey instrument (ACASI) has been shown to decrease under-reporting of socially undesirable behaviours, but has not been evaluated in pregnant women at risk of HIV acquisition in Brazil. We assigned HIV-negative pregnant women receiving routine antenatal care in Porto Alegre, Brazil, and their partners to receive a survey regarding high-risk sexual behaviours and drug use via ACASI (n = 372) or face-to-face (FTF) (n = 283) interviews. Logistic regression showed that, compared with FTF, pregnant women interviewed via ACASI were significantly more likely to report themselves as single (14% versus 6%), having >5 sexual partners (35% versus 29%), having oral sex (42% versus 35%), using intravenous drugs (5% versus 0%), smoking cigarettes (23% versus 16%), drinking alcohol (13% versus 8%) and using condoms during pregnancy (32% versus 17%). Therefore, ACASI may be a useful method for assessing risk behaviours in pregnant women, especially in relation to drug and alcohol use.
International Nuclear Information System (INIS)
Kim, Jungkwun; Allen, Mark G; Yoon, Yong-Kyu
2016-01-01
This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array. (paper)
Energy Technology Data Exchange (ETDEWEB)
Spoerl, Andreas
2008-06-05
Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high-quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation, yielding solutions that are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study investigated whether there exists a sufficient set of criteria to identify systems whose dynamics are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
International Nuclear Information System (INIS)
Runchal, A.K.; Sagar, B.; Baca, R.G.; Kline, N.W.
1985-09-01
Postclosure performance assessment of the proposed high-level nuclear waste repository in flood basalts at Hanford requires that the processes of fluid flow, heat transfer, and mass transport be numerically modeled at appropriate space and time scales. A suite of computer models has been developed to meet this objective. The theory of one of these models, named PORFLO, is described in this report. Also presented are a discussion of the numerical techniques in the PORFLO computer code and a few computational test cases. Three two-dimensional equations, one each for fluid flow, heat transfer, and mass transport, are numerically solved in PORFLO. The governing equations are derived from the principle of conservation of mass, momentum, and energy in a stationary control volume that is assumed to contain a heterogeneous, anisotropic porous medium. Broad discrete features can be accommodated by specifying zones with distinct properties, or these can be included by defining an equivalent porous medium. The governing equations are parabolic differential equations that are coupled through time-varying parameters. Computational tests of the model are done by comparisons of simulation results with analytic solutions, with results from other independently developed numerical models, and with available laboratory and/or field data. In this report, in addition to the theory of the model, results from three test cases are discussed. A users' manual for the computer code resulting from this model has been prepared and is available as a separate document. 37 refs., 20 figs., 15 tabs
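The verification strategy described above, comparing simulation results against analytic solutions, can be illustrated on the simplest parabolic relative of PORFLO's governing equations. The sketch below is not PORFLO itself: it is a minimal explicit finite-difference (FTCS) solve of 1D transient diffusion, with grid and step sizes chosen arbitrarily, checked against the known analytic solution of the test case.

```python
import math

def ftcs_diffusion(nx=51, alpha=1.0, dt=2e-5, steps=500):
    """Explicit finite-difference (FTCS) solve of u_t = alpha * u_xx
    on [0, 1] with u(0, t) = u(1, t) = 0 and u(x, 0) = sin(pi x)."""
    dx = 1.0 / (nx - 1)
    assert alpha * dt / dx**2 <= 0.5, "explicit-scheme stability limit"
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(steps):
        un = u[:]
        for i in range(1, nx - 1):
            u[i] = un[i] + alpha * dt / dx**2 * (un[i-1] - 2.0 * un[i] + un[i+1])
    return u, steps * dt

u, t = ftcs_diffusion()
# Analytic solution of this test case: u(x, t) = exp(-alpha pi^2 t) sin(pi x)
exact_mid = math.exp(-math.pi**2 * t) * math.sin(math.pi * 0.5)
```

The midpoint of the numerical solution agrees with the analytic value to well below the discretization error, the same kind of check the report applies to its test cases.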
Enquist, Magnus; Ghirlanda, Stefano
1998-01-01
This is a comment on an article by Perrett et al., in the same issue of Nature, investigating face perception. With computer graphics, Perrett and colleagues have produced exaggerated male and female faces, and asked people to rate them with respect to femininity or masculinity, and personality traits such as intelligence, emotionality and so on. The key question is: what information do faces (and sexual signals in general) convey? One view, supported by Perrett and colleagues, is that all a...
Dodig, H.
2017-11-01
This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far field transformation (NTFFT), which is a common step in RCS computations. By the end of the paper it is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method demonstrated accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.
Directory of Open Access Journals (Sweden)
Robert Leckey
2013-12-01
Full Text Available This paper uses Queer theory, specifically literature on Bowers v. Hardwick, to analyze debates over legislation proposed in Quebec regarding covered faces. Queer theory sheds light on legal responses to the veil. Parliamentary debates in Quebec reconstitute the polity, notably as secular and united. The paper highlights the contradictory and unstable character of four binaries: legislative text versus social practice, act versus status, majority versus minority, and knowable versus unknowable. As with contradictory propositions about homosexuality, contradiction does not undermine discourse but makes it stronger and more agile.
Global communication schemes for the numerical solution of high-dimensional PDEs
DEFF Research Database (Denmark)
Hupp, Philipp; Heene, Mario; Jacob, Riko
2016-01-01
The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...
International Nuclear Information System (INIS)
Bertelli, Felipe; Cheung, Noé; Ferreira, Ivaldo L.; Garcia, Amauri
2016-01-01
Highlights: • A numerical routine coupled to a computational thermodynamics software is proposed to calculate thermophysical properties. • The approach encompasses numerical and experimental simulation of solidification. • Al–Sn–Si alloys thermophysical properties are validated by experimental/numerical cooling rate results. - Abstract: Modelling of manufacturing processes of multicomponent Al-based alloy products, such as casting, requires thermophysical properties that are rarely found in the literature. It is extremely important to use reliable values of such properties, as they can critically influence simulated results. In the present study, a numerical routine is developed and connected in real runtime execution to a computational thermodynamics software, permitting thermophysical properties of Al–Sn–Si alloys, such as latent heats, specific heats, temperatures and heats of transformation, phase fractions, composition and density, to be determined as a function of temperature. A numerical solidification model is used to run solidification simulations of ternary Al-based alloys using the appropriate calculated thermophysical properties. Directional solidification experiments are carried out with two Al–Sn–Si alloy compositions to provide experimental cooling rate profiles along the length of the castings, which are compared with numerical simulations in order to validate the calculated thermophysical data. For both cases a good agreement can be observed, indicating the relevance and applicability of the proposed approach.
Daly, Keith R; Mooney, Sacha J; Bennett, Malcolm J; Crout, Neil M J; Roose, Tiina; Tracy, Saoirse R
2015-04-01
Understanding the dynamics of water distribution in soil is crucial for enhancing our knowledge of managing soil and water resources. The application of X-ray computed tomography (CT) to the plant and soil sciences is now well established. However, few studies have utilized the technique for visualizing water in soil pore spaces. Here this method is utilized to visualize the water in soil in situ and in three-dimensions at successive reductive matric potentials in bulk and rhizosphere soil. The measurements are combined with numerical modelling to determine the unsaturated hydraulic conductivity, providing a complete picture of the hydraulic properties of the soil. The technique was performed on soil cores that were sampled adjacent to established roots (rhizosphere soil) and from soil that had not been influenced by roots (bulk soil). A water release curve was obtained for the different soil types using measurements of their pore geometries derived from CT imaging and verified using conventional methods, such as pressure plates. The water, soil, and air phases from the images were segmented and quantified using image analysis. The water release characteristics obtained for the contrasting soils showed clear differences in hydraulic properties between rhizosphere and bulk soil, especially in clay soil. The data suggest that soils influenced by roots (rhizosphere soil) are less porous due to increased aggregation when compared with bulk soil. The information and insights obtained on the hydraulic properties of rhizosphere and bulk soil will enhance our understanding of rhizosphere biophysics and improve current water uptake models. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology.
Raghuwanshi, Sanjeev Kumar; Palodiya, Vikram
2017-08-01
Waveguide dispersion can be tailored, but material dispersion cannot. Hence, the total dispersion can be shifted to any desired band by adjusting the waveguide dispersion. Waveguide dispersion is proportional to d²β/dk² and needs to be computed numerically. In this paper, we have tried to compute an analytical expression for d²β/dk² in terms of β, accurate to ≈ 10⁻⁵ with respect to the numerical technique. This constraint sometimes generates error in the calculation of waveguide dispersion. To formulate the problem we use a graphical method. Our study reveals that the waveguide dispersion can be computed accurately enough for various modes by knowing β only.
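Since β(k) is typically available only pointwise from a mode solver, the quantity d²β/dk² can be estimated with a central-difference stencil. A minimal sketch follows; the dispersion relation `beta` and the step size `h` are illustrative assumptions, not the paper's waveguide model.

```python
import math

def second_derivative(f, k, h=1e-3):
    """O(h^2) central-difference estimate of d^2 f / dk^2 at k."""
    return (f(k + h) - 2.0 * f(k) + f(k - h)) / h**2

# Hypothetical smooth "dispersion relation" used only to exercise the stencil.
beta = lambda k: math.sqrt(1.0 + k * k)

d2beta_dk2 = second_derivative(beta, 2.0)
# For this beta the exact second derivative is (1 + k^2)**-1.5.
```

The step size trades truncation error (shrinks as h²) against floating-point cancellation in the three nearly equal function values, which is precisely the accuracy constraint the abstract alludes to.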
Hashemi, Sepehr; Armand, Mehran; Gordon, Chad R
2016-10-01
To describe the development and refinement of the computer-assisted planning and execution (CAPE) system for use in face-jaw-teeth transplants (FJTTs). Although successful, some maxillofacial transplants result in suboptimal hybrid occlusion and may require subsequent surgical orthognathic revisions. Unfortunately, the use of traditional dental casts and splints poses several compromising shortcomings in the context of FJTT and hybrid occlusion. Computer-assisted surgery may overcome these challenges. Therefore, the use of computer-assisted orthognathic techniques and functional planning may prevent the need for such revisions and improve facial-skeletal outcomes. A comprehensive CAPE system for use in FJTT was developed through a multicenter collaboration and refined using plastic models, live miniature swine surgery, and human cadaver models. The system marries preoperative surgical planning and intraoperative execution by allowing on-table navigation of the donor fragment relative to the recipient cranium, and real-time reporting of the patient's cephalometric measurements relative to a desired dental-skeletal outcome. FJTTs using live-animal and cadaveric models demonstrate the CAPE system to be accurate in navigation and beneficial in improving hybrid occlusion and other craniofacial outcomes. Future refinement of the CAPE system includes integration of more commonly performed orthognathic/maxillofacial procedures.
Glowinski, R; Kuznetsov, Y A; Periaux, Jacques; Neittaanmaki, Pekka; Pironneau, Olivier
2010-01-01
Standing at the intersection of mathematics and scientific computing, this collection of state-of-the-art papers in nonlinear PDEs examines their applications to subjects as diverse as dynamical systems, computational mechanics, and the mathematics of finance.
Directory of Open Access Journals (Sweden)
N. Calonne
2012-09-01
Full Text Available We used three-dimensional (3-D) images of snow microstructure to carry out numerical estimations of the full tensor of the intrinsic permeability of snow (K). This study was performed on 35 snow samples, spanning a wide range of seasonal snow types. For several snow samples, a significant anisotropy of permeability was detected and is consistent with that observed for the effective thermal conductivity obtained from the same samples. The anisotropy coefficient, defined as the ratio of the vertical over the horizontal components of K, ranges from 0.74 for a sample of decomposing precipitation particles collected in the field to 1.66 for a depth hoar specimen. Because the permeability is related to a characteristic length, we introduced a dimensionless tensor K* = K/r_es², where the equivalent sphere radius of ice grains (r_es) is computed from the specific surface area of snow (SSA) and the ice density (ρ_i) as follows: r_es = 3/(SSA × ρ_i). We define K and K* as the averages of the diagonal components of K and K*, respectively. The 35 values of K* were fitted to snow density (ρ_s) and provide the following regression: K = (3.0 ± 0.3) r_es² exp((−0.0130 ± 0.0003) ρ_s). We noted that the anisotropy of permeability does not significantly affect the proposed equation. This regression curve was applied to several independent datasets from the literature and compared to other existing regression curves or analytical models. The results show that it is probably the best currently available simple relationship linking the average value of permeability, K, to snow density and specific surface area.
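The regression above is straightforward to evaluate. A small sketch, assuming SI units (SSA in m²/kg, densities in kg/m³) and using only the central regression coefficients without their uncertainty bands; the sample input values are illustrative:

```python
import math

def snow_permeability(ssa, rho_s, rho_ice=917.0):
    """Average intrinsic permeability K [m^2] from the regression
    K = 3.0 * r_es^2 * exp(-0.0130 * rho_s), with equivalent sphere
    radius r_es = 3 / (SSA * rho_ice).
    ssa: specific surface area [m^2/kg]; rho_s: snow density [kg/m^3]."""
    r_es = 3.0 / (ssa * rho_ice)
    return 3.0 * r_es**2 * math.exp(-0.0130 * rho_s)

# Illustrative values: SSA = 20 m^2/kg, snow density 200 kg/m^3.
k_example = snow_permeability(ssa=20.0, rho_s=200.0)
```

Permeability falls with density through the exponential factor and with decreasing grain size through r_es², matching the qualitative behaviour described in the abstract.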
Directory of Open Access Journals (Sweden)
Sylvia Adebajo
Full Text Available INTRODUCTION: Face-to-face (FTF interviews are the most frequently used means of obtaining information on sexual and drug injecting behaviours from men who have sex with men (MSM and men who inject drugs (MWID. However, accurate information on these behaviours may be difficult to elicit because of sociocultural hostility towards these populations and the criminalization associated with these behaviours. Audio computer assisted self-interview (ACASI is an interviewing technique that may mitigate social desirability bias in this context. METHODS: This study evaluated differences in the reporting of HIV-related risky behaviours by MSM and MWID using ACASI and FTF interviews. Between August and September 2010, 712 MSM and 328 MWID in Nigeria were randomized to either ACASI or FTF interview for completion of a behavioural survey that included questions on sensitive sexual and injecting risk behaviours. Data were analyzed separately for MSM and MWID. Logistic regression was run for each behaviour as a dependent variable to determine differences in reporting methods. RESULTS: MSM interviewed via ACASI reported significantly higher risky behaviours with both women (multiple female sexual partners 51% vs. 43%, p = 0.04; had unprotected anal sex with women 72% vs. 57%, p = 0.05 and men (multiple male sex partners 70% vs. 54%, p≤0.001 than through FTF. Additionally, they were more likely to self-identify as homosexual (AOR: 3.3, 95%CI:2.4-4.6 and report drug use in the past 12 months (AOR:40.0, 95%CI: 9.6-166.0. MWID interviewed with ACASI were more likely to report needle sharing (AOR:3.3, 95%CI:1.2-8.9 and re-use (AOR:2.2, 95%CI:1.2-3.9 in the past month and prior HIV testing (AOR:1.6, 95%CI 1.02-2.5. CONCLUSION: The feasibility of using ACASI in studies and clinics targeting key populations in Nigeria must be explored to increase the likelihood of obtaining more accurate data on high risk behaviours to inform improved risk reduction strategies
Adebajo, Sylvia; Obianwu, Otibho; Eluwa, George; Vu, Lung; Oginni, Ayo; Tun, Waimar; Sheehy, Meredith; Ahonsi, Babatunde; Bashorun, Adebobola; Idogho, Omokhudu; Karlyn, Andrew
2014-01-01
Face-to-face (FTF) interviews are the most frequently used means of obtaining information on sexual and drug injecting behaviours from men who have sex with men (MSM) and men who inject drugs (MWID). However, accurate information on these behaviours may be difficult to elicit because of sociocultural hostility towards these populations and the criminalization associated with these behaviours. Audio computer assisted self-interview (ACASI) is an interviewing technique that may mitigate social desirability bias in this context. This study evaluated differences in the reporting of HIV-related risky behaviours by MSM and MWID using ACASI and FTF interviews. Between August and September 2010, 712 MSM and 328 MWID in Nigeria were randomized to either ACASI or FTF interview for completion of a behavioural survey that included questions on sensitive sexual and injecting risk behaviours. Data were analyzed separately for MSM and MWID. Logistic regression was run for each behaviour as a dependent variable to determine differences in reporting methods. MSM interviewed via ACASI reported significantly higher risky behaviours with both women (multiple female sexual partners 51% vs. 43%, p = 0.04; had unprotected anal sex with women 72% vs. 57%, p = 0.05) and men (multiple male sex partners 70% vs. 54%, p≤0.001) than through FTF. Additionally, they were more likely to self-identify as homosexual (AOR: 3.3, 95%CI:2.4-4.6) and report drug use in the past 12 months (AOR:40.0, 95%CI: 9.6-166.0). MWID interviewed with ACASI were more likely to report needle sharing (AOR:3.3, 95%CI:1.2-8.9) and re-use (AOR:2.2, 95%CI:1.2-3.9) in the past month and prior HIV testing (AOR:1.6, 95%CI 1.02-2.5). The feasibility of using ACASI in studies and clinics targeting key populations in Nigeria must be explored to increase the likelihood of obtaining more accurate data on high risk behaviours to inform improved risk reduction strategies that reduce HIV transmission.
International Nuclear Information System (INIS)
Gupta, S.C.; Sikka, S.K.; Chidambaram, R.
1979-01-01
An account is given of a one-dimensional spherically symmetric computer code for the numerical simulation of the effects of peaceful underground nuclear explosions in rocks (OCENER). In the code, the nature of the stress field and the response of the medium to this field are modelled numerically by finite-difference forms of the laws of continuum mechanics and the constitutive relations of the rock medium in which the detonation occurs. It approximates well the cavity growth and fracturing of the surrounding rock for contained explosions, and the events up to the time spherical symmetry remains valid for cratering-type explosions. (auth.)
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by Fluid Dynamists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
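The Student-t ingredient of such an uncertainty estimate can be sketched as follows. This is not the paper's full input-perturbation procedure, only the standard t-interval on a set of repeated outputs; the sample values and the critical value for 9 degrees of freedom are illustrative assumptions.

```python
import math
import statistics

def t_interval(samples, t_crit):
    """Two-sided confidence interval for the mean of `samples`, given a
    Student-t critical value for n - 1 degrees of freedom."""
    n = len(samples)
    mean = statistics.fmean(samples)
    half = t_crit * statistics.stdev(samples) / math.sqrt(n)
    return mean - half, mean + half

# Hypothetical repeated CFD evaluations of one output quantity [m/s].
runs = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00, 1.01, 0.99]
low, high = t_interval(runs, t_crit=2.262)  # 95% level, 9 degrees of freedom
```

The t distribution is used instead of the normal because the standard deviation is itself estimated from the small sample of runs.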
Bao, Weizhu; Marahrens, Daniel; Tang, Qinglin; Zhang, Yanzhi
2013-01-01
We propose a simple, efficient, and accurate numerical method for simulating the dynamics of rotating Bose-Einstein condensates (BECs) in a rotational frame with or without longrange dipole-dipole interaction (DDI). We begin with the three
Embedded Face Detection and Recognition
Directory of Open Access Journals (Sweden)
Göksel Günlü
2012-10-01
Full Text Available The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on-site and on-time. At this point, the use of smart cameras – whose popularity has been increasing – is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image-processing algorithms. Since the images are not transmitted to a distant processing unit but rather are processed inside the camera, it does not necessitate high-bandwidth networks or high processor powered systems; it can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image-processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is for face detection and recognition. A number of face detection and recognition methods have been proposed recently and many of these methods have been tested on general-purpose processors. In smart cameras – which are real-life applications of such methods – the widest use is on DSPs. In the present study, the Viola-Jones face detection method – which was reported to run faster on PCs – was optimized for DSPs; the face recognition method was combined with the developed sub-region and mask-based DCT (Discrete Cosine Transform). As the employed DSP is a fixed-point processor, the processes were performed with integers insofar as it was possible. To enable face recognition, the image was divided into sub-regions and from each sub-region the coefficients robust against disruptive elements – like face expression, illumination, etc. – were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its
Brezinski, C
2012-01-01
Numerical analysis has witnessed many significant developments in the 20th century. This book brings together 16 papers dealing with historical developments, survey papers and papers on recent trends in selected areas of numerical analysis, such as: approximation and interpolation, solution of linear systems and eigenvalue problems, iterative methods, quadrature rules, solution of ordinary, partial and integral equations. The papers are reprinted from the 7-volume project of the Journal of Computational and Applied Mathematics ('/homepage/sac/cam/na2000/index.html').
International Nuclear Information System (INIS)
Jacome, Paulo A.D.; Landim, Mariana C.; Garcia, Amauri; Furtado, Alexandre F.; Ferreira, Ivaldo L.
2011-01-01
Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.
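The core of the numerical scheme, a root finder driven by a finite-difference Jacobian approximation, can be sketched in simplified form. The sketch below uses a plain Newton iteration rather than Powell's hybrid method (which adds a trust-region safeguard on top of this core), and a hypothetical 2-equation system in place of the Gibbs-energy equations.

```python
def fd_jacobian(F, x, eps=1e-7):
    """Forward-difference approximation of the Jacobian of F at x."""
    f0 = F(x)
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x)
        xp[j] += eps
        fj = F(xp)
        for i in range(n):
            J[i][j] = (fj[i] - f0[i]) / eps
    return J

def newton_2d(F, x, iters=20):
    """Newton iteration for a 2-equation system; the 2x2 linear step is
    solved by Cramer's rule using the finite-difference Jacobian."""
    for _ in range(iters):
        f = F(x)
        (a, b), (c, d) = fd_jacobian(F, x)
        det = a * d - b * c
        x = [x[0] - (d * f[0] - b * f[1]) / det,
             x[1] - (a * f[1] - c * f[0]) / det]
    return x

# Hypothetical system with a root at (1, 2): x^2 + y^2 = 5 and x*y = 2.
root = newton_2d(lambda v: [v[0]**2 + v[1]**2 - 5.0, v[0] * v[1] - 2.0],
                 [0.8, 2.2])
```

An inexact Jacobian only slows convergence slightly; the fixed point is still the exact root of F, which is why finite-difference Jacobians suffice when analytic derivatives of the thermodynamic model are unavailable.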
Energy Technology Data Exchange (ETDEWEB)
Jacome, Paulo A.D.; Landim, Mariana C. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Garcia, Amauri, E-mail: amaurig@fem.unicamp.br [Department of Materials Engineering, University of Campinas, UNICAMP, PO Box 6122, 13083-970 Campinas, SP (Brazil); Furtado, Alexandre F.; Ferreira, Ivaldo L. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil)
2011-08-20
Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.
International Nuclear Information System (INIS)
Corge, Ch.
1969-01-01
Numerical analysis of transmission resonances induced by s-wave neutrons in time-of-flight experiments can be achieved in a fairly automatic way on an IBM 7094/II computer. The computations involved are carried out following a four-step scheme: 1 - experimental raw data are processed to obtain the resonant transmissions; 2 - values of experimental quantities for each resonance are derived from the above transmissions; 3 - resonance parameters are determined using a least-squares method to solve the overdetermined system obtained by equating theoretical functions to the corresponding experimental values (four analysis methods are gathered in the same code); 4 - graphical control of the results is performed. (author) [fr]
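Step 3 above, solving an overdetermined system in the least-squares sense, can be sketched for a hypothetical two-parameter linear model via the normal equations; the data points are illustrative, not resonance data.

```python
def lstsq_2param(rows, rhs):
    """Solve the overdetermined system A x = b (A is m x 2) in the
    least-squares sense via the normal equations A^T A x = A^T b."""
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * y for r, y in zip(rows, rhs))
    b2 = sum(r[1] * y for r, y in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Fit y = c0 + c1 * x to four points lying exactly on y = 1 + 2x.
rows = [(1.0, x) for x in (0.0, 1.0, 2.0, 3.0)]
c0, c1 = lstsq_2param(rows, [1.0, 3.0, 5.0, 7.0])
```

With more equations (measurements) than unknowns (resonance parameters), the normal equations pick the parameter vector minimizing the sum of squared residuals, which is exactly the role of the fit in the analysis chain.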
T.M. Dunster (Mark); A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)
2017-01-01
Conical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module Conical (Gil et al., 2012) for the computation of conical functions. Specifically, the module now includes a routine for computing the function
Numerical computation of space-charge fields of electron bunches in a beam pipe of elliptical shape
International Nuclear Information System (INIS)
Markovik, A.
2005-01-01
This work deals in particular with 3D numerical simulations of space-charge fields of electron bunches in a beam pipe with elliptical cross-section. To obtain the space-charge fields it is necessary to solve the Poisson equation with the given boundary condition and space-charge distribution. The discretization of the Poisson equation by the method of finite differences on a Cartesian grid, as well as the setup of the coefficient matrix A for the elliptical domain, are explained in Section 2. In Section 3 the properties of the coefficient matrix and possible numerical algorithms suitable for solving non-symmetric linear systems of equations are introduced. In Section 4, the applied solver algorithms are investigated by numerical tests with a right-hand-side function for which the analytical solution is known. (orig.)
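The finite-difference discretization described above can be sketched on a simpler geometry. The example below assumes a unit square rather than the elliptical domain, and a plain Gauss-Seidel sweep rather than the non-symmetric-system solvers the work discusses; the manufactured right-hand side makes the exact solution known for checking, mirroring the work's own verification tests.

```python
import math

def solve_poisson(n=21, sweeps=500):
    """Gauss-Seidel iteration for the 5-point finite-difference Poisson
    problem u_xx + u_yy = f on the unit square with u = 0 on the boundary.
    The right-hand side is manufactured so that the exact solution is
    u(x, y) = sin(pi x) sin(pi y)."""
    h = 1.0 / (n - 1)
    f = [[-2.0 * math.pi**2 * math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
          for j in range(n)] for i in range(n)]
    u = [[0.0] * n for _ in range(n)]
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]
                                  - h * h * f[i][j])
    return u

u = solve_poisson()
center = u[10][10]  # grid point (0.5, 0.5); exact solution is 1.0 there
```

For an elliptical cross-section the same 5-point stencil applies, but boundary cells cut by the ellipse make the coefficient matrix non-symmetric, which is what motivates the solver discussion in the original work.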
Numerical computation of space-charge fields of electron bunches in a beam pipe of elliptical shape
Energy Technology Data Exchange (ETDEWEB)
Markovik, A.
2005-09-28
This work deals in particular with 3D numerical simulations of space-charge fields of electron bunches in a beam pipe with elliptical cross-section. To obtain the space-charge fields it is necessary to solve the Poisson equation with the given boundary condition and space-charge distribution. The discretization of the Poisson equation by the method of finite differences on a Cartesian grid, as well as the setup of the coefficient matrix A for the elliptical domain, are explained in Section 2. In Section 3 the properties of the coefficient matrix and possible numerical algorithms suitable for solving non-symmetric linear systems of equations are introduced. In Section 4, the applied solver algorithms are investigated by numerical tests with a right-hand-side function for which the analytical solution is known. (orig.)
The ideal flip-through impact: experimental and numerical investigation
DEFF Research Database (Denmark)
Bredmose, Henrik; Hunt-Raby, A.; Jayaratne, R.
2010-01-01
Results from a physical experiment and a numerical computation are compared for a flip-through type wave impact on a vertical face, typical of a seawall or breakwater. The physical wave was generated by application of the focused-wave group technique to the amplitudes of a JONSWAP spectrum, with ...
National Research Council Canada - National Science Library
Arroyo, Jose
2004-01-01
... of the barge train, the approach velocity, the approach angle, the barge train moment of inertia, damage sustained by the barge structure, and friction between the barge and the wall. computation...
Introduction to numerical analysis
Hildebrand, F B
1987-01-01
Well-known, respected introduction, updated to integrate concepts and procedures associated with computers. Computation, approximation, interpolation, numerical differentiation and integration, smoothing of data, other topics in lucid presentation. Includes 150 additional problems in this edition. Bibliography.
DEFF Research Database (Denmark)
Sørensen, Mette-Marie Zacher
2016-01-01
artist Marnix de Nijs' Physiognomic Scrutinizer is an interactive installation whereby the viewer's face is scanned and identified with historical figures. The American artist Zach Blas' project Fag Face Mask consists of three-dimensional portraits that blend biometric facial data from 30 gay men's faces...... and critically examine bias in surveillance technologies, as well as scientific investigations, regarding the stereotyping mode of the human gaze. The American artist Heather Dewey-Hagborg creates three-dimensional portraits of persons she has “identified” from their garbage. Her project from 2013 entitled...
International Nuclear Information System (INIS)
Lobanov, Yu.Yu.; Shahbagian, R.R.; Zhidkov, E.P.
1991-01-01
A new method for the numerical solution of the boundary problem for Schroedinger-like partial differential equations in R n is elaborated. The method is based on the representation of the multidimensional Green function in the form of a multiple functional integral and on the use of approximation formulas constructed for such integrals. The convergence of the approximations to the exact value is proved, and the remainder of the formulas is estimated. The method reduces the initial differential problem to quadratures. 16 refs.; 7 tabs
Reading faces and Facing words
DEFF Research Database (Denmark)
Robotham, Julia Emma; Lindegaard, Martin Weis; Delfi, Tzvetelina Shentova
It has long been argued that perceptual processing of faces and words is largely independent, highly specialised and strongly lateralised. Studies of patients with either pure alexia or prosopagnosia have strongly contributed to this view. The aim of our study was to investigate how visual perception of faces and words is affected by unilateral posterior stroke. Two patients with lesions in their dominant hemisphere and two with lesions in their non-dominant hemisphere were tested on sensitive tests of face and word perception during the stable phase of recovery. Despite all patients having unilateral lesions, we found no patient with a selective deficit in either reading or face processing. Rather, the patients showing a deficit in processing either words or faces were also impaired with the other category. One patient performed within the normal range on all tasks. In addition, all patients...
Giovannozzi, M; Høimyr, N; Jones, PL; Karneyeu, A; Marquina, MA; McIntosh, E; Segal, B; Skands, P; Grey, F; Lombraña González, D; Rivkin, L; Zacharov, I
2012-01-01
Recently, the LHC@home system has been revived at CERN. It is a volunteer computing system based on BOINC which boosts the available CPU power in institutional computer centres with the help of individuals who donate the CPU time of their PCs. Currently two projects are hosted on the system, namely SixTrack and Test4Theory. The first is aimed at performing beam dynamics simulations, while the latter deals with the simulation of high-energy events. In this paper the details of the global system, as well as a discussion of the capabilities of each project, will be presented.
Full Text Available ...not feeling better, you may have PTSD (posttraumatic stress disorder). This is AboutFace: in these videos, Veterans, family members, and clinicians share their experiences with PTSD.
Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi
2010-01-01
The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.
Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.
2014-01-01
A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify (a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and (b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/epoxy composites at the constituent level.
Zhang, H.-m.; Chen, X.-f.; Chang, S.
It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close or equal depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome this difficulty. Compared with the semi-analytic method, PTAM is not only mathematically simpler and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
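The repeated-averaging principle that PTAM builds on can be illustrated on a toy oscillatory integral, ∫₀^∞ sin(x)/x dx = π/2: partial integrals taken between successive zeros of the integrand oscillate around the limit, and repeated pairwise averaging of the partial sums accelerates convergence. This is a sketch of the principle only, not the authors' seismogram code; all function names are illustrative.

```python
import numpy as np

def repeated_averaging(seq, levels):
    """Repeatedly average adjacent terms of an oscillating sequence of
    partial sums -- the averaging principle that PTAM builds on."""
    s = np.asarray(seq, dtype=float)
    for _ in range(levels):
        s = 0.5 * (s[:-1] + s[1:])
    return s[-1]

def partial_sums_sinc(n_arcs, pts_per_arc=2000):
    """Partial integrals of sin(x)/x accumulated between successive
    zeros x = k*pi; the sequence oscillates around the limit pi/2."""
    sums, total = [], 0.0
    for k in range(n_arcs):
        x = np.linspace(k * np.pi, (k + 1) * np.pi, pts_per_arc)
        y = np.where(x == 0.0, 1.0, np.sin(x) / np.where(x == 0.0, 1.0, x))
        total += np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(x))  # trapezoid rule
        sums.append(total)
    return sums

sums = partial_sums_sinc(12)
accelerated = repeated_averaging(sums, levels=8)
```

The raw partial sum after twelve half-periods is still off by a few percent, whereas the averaged value is accurate to several more digits, mirroring how PTAM tames slowly convergent wavenumber integrals.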
Trauth, N.; Schmidt, C.; Munz, M.
2016-12-01
Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and subsequently evaluated using a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, that cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as the extraction of the diurnal temperature variation. We developed a software tool to estimate water FLUXes Based On Temperatures: FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB that calculates vertical water fluxes in saturated sediments based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
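The forward problem FLUX-BOT inverts — a Crank-Nicolson discretization of the 1D heat advection-conduction equation — can be sketched as follows. This is a minimal illustration in Python rather than MATLAB, with fixed Dirichlet boundary temperatures; the parameter names are assumptions, not FLUX-BOT's API.

```python
import numpy as np

def crank_nicolson_step(T, dt, dz, kappa, v, T_top, T_bot):
    """One Crank-Nicolson step of dT/dt = kappa*d2T/dz2 - v*dT/dz on the
    interior nodes, with fixed (Dirichlet) boundary temperatures."""
    n = len(T)
    a = kappa * dt / (2.0 * dz ** 2)   # half-step diffusion number
    c = v * dt / (4.0 * dz)            # half-step advection number (centered)
    A = np.zeros((n, n))               # implicit (new time level) operator
    B = np.zeros((n, n))               # explicit (old time level) operator
    for i in range(n):
        A[i, i], B[i, i] = 1.0 + 2.0 * a, 1.0 - 2.0 * a
        if i > 0:
            A[i, i - 1], B[i, i - 1] = -a - c, a + c
        if i < n - 1:
            A[i, i + 1], B[i, i + 1] = -a + c, a - c
    rhs = B @ T
    rhs[0] += 2.0 * (a + c) * T_top    # boundary value, both time levels
    rhs[-1] += 2.0 * (a - c) * T_bot
    return np.linalg.solve(A, rhs)

# Relax toward the steady conductive profile between fixed boundaries.
z = np.linspace(0.1, 0.9, 9)           # interior nodes of a 1 m column
T = np.zeros(9)
for _ in range(2000):
    T = crank_nicolson_step(T, dt=0.5, dz=0.1, kappa=0.01, v=0.0,
                            T_top=1.0, T_bot=0.0)
```

With zero advection the scheme relaxes to the linear conductive profile, a standard sanity check; the inversion in FLUX-BOT then fits the advection velocity (and hence the water flux) to observed temperatures.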
International Nuclear Information System (INIS)
Marrel, A.
2008-01-01
In studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutants. These computer codes can depend on a high number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and can often be too expensive in computing time. To conduct uncertainty propagation studies and to measure the importance of each input on the response variability, the computer code has to be approximated by a metamodel which is built on an acceptable number of simulations of the code and requires a negligible calculation time. We focused our research work on the use of a Gaussian process metamodel to carry out the sensitivity analysis of the code. We proposed a methodology with estimation and input selection procedures in order to build the metamodel in the case of a high number of inputs and few available simulations. Then, we compared two approaches to compute the sensitivity indices with the metamodel and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process metamodel. Finally, we extended our statistical tools to a functional output of a computer code. We combined a decomposition on a wavelet basis with Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code, simulating radionuclide transport in groundwater. (author) [fr
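The core metamodel idea — conditioning a Gaussian process on a handful of runs of the expensive code so that predictions elsewhere are nearly free — can be sketched in a few lines. This is a simplified illustration with a fixed RBF kernel and a cheap stand-in function for the simulator; it does not reproduce the thesis's estimation, input-selection, or sensitivity-index procedures.

```python
import numpy as np

def rbf(X1, X2, length=0.2):
    """Squared-exponential (RBF) covariance between two 1D input sets."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, jitter=1e-6):
    """Condition a zero-mean GP prior on the available code runs and
    return the predictive mean and standard deviation."""
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.clip(var, 0.0, None))

code = lambda x: np.sin(2.0 * np.pi * x)   # cheap stand-in for the simulator
x_tr = np.linspace(0.0, 1.0, 8)            # eight "expensive" runs
xs = np.linspace(0.0, 1.0, 50)
mean, std = gp_predict(x_tr, code(x_tr), xs)
```

Once such a surrogate is in hand, Monte Carlo estimation of sensitivity indices is run on the metamodel instead of the code itself, which is what makes variance-based sensitivity analysis affordable.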
Bayliss, A.
1978-01-01
The scattering of the sound of a jet engine by an airplane fuselage is modeled by solving the axially symmetric Helmholtz equation exterior to a long thin ellipsoid. The integral equation method based on the single layer potential formulation is used. A family of coordinate systems on the body is introduced and an algorithm is presented to determine the optimal coordinate system. Numerical results verify that the optimal choice enables the solution to be computed with a grid that is coarse relative to the wavelength.
Murga, Alicia; Sano, Yusuke; Kawamoto, Yoichi; Ito, Kazuhide
2017-10-01
Mechanical and passive ventilation strategies directly impact indoor air quality. Passive ventilation has recently become widespread owing to its ability to reduce energy demand in buildings, such as the case of natural or cross ventilation. To understand the effect of natural ventilation on indoor environmental quality, outdoor-indoor flow paths need to be analyzed as functions of urban atmospheric conditions, topology of the built environment, and indoor conditions. Wind-driven natural ventilation (e.g., cross ventilation) can be calculated through the wind pressure coefficient distributions of outdoor wall surfaces and openings of a building, allowing the study of indoor air parameters and airborne contaminant concentrations. Variations in outside parameters will directly impact indoor air quality and residents' health. Numerical modeling can contribute to comprehend these various parameters because it allows full control of boundary conditions and sampling points. In this study, numerical weather prediction modeling was used to calculate wind profiles/distributions at the atmospheric scale, and computational fluid dynamics was used to model detailed urban and indoor flows, which were then integrated into a dynamic downscaling analysis to predict specific urban wind parameters from the atmospheric to built-environment scale. Wind velocity and contaminant concentration distributions inside a factory building were analyzed to assess the quality of the human working environment by using a computer simulated person. The impact of cross ventilation flows and its variations on local average contaminant concentration around a factory worker, and inhaled contaminant dose, were then discussed.
van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro
2017-08-01
This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.
International Nuclear Information System (INIS)
Sarma, J.; Robson, P.N.
1979-01-01
The two-dimensional transmission line matrix (TLM) numerical method has been adapted to compute electromagnetic field distributions in cylindrical co-ordinates, and it is applied to evaluate the radiation loss from a charge bunch passing through a 'pill-box' resonator. The computer program has been developed to calculate not only the total energy loss to the resonator but also the component of it which exists in the TM010 mode. The numerically computed results are shown to agree very well with the analytically derived values found in the literature, which establishes the degree of accuracy obtained with the TLM method. The particular features of computational simplicity, numerical stability and the inherently time-domain solutions produced by the TLM method are cited as additional attractive reasons for using this numerical procedure in solving such problems. (Auth.)
Directory of Open Access Journals (Sweden)
Suheel Abdullah Malik
Full Text Available In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature inspired algorithm. The given nonlinear partial differential equation (NPDE) through substitution is converted into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem, and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), show that the suggested scheme is fairly accurate and viable for solving such problems.
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature inspired algorithm. The given nonlinear partial differential equation (NPDE) through substitution is converted into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem, and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), show that the suggested scheme is fairly accurate and viable for solving such problems.
Energy conservation using face detection
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces and image database management; digital cameras use face detection for autofocus and for selecting regions of interest in photo slideshows that use pan-and-scale effects. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based and color-based methods. Basically, the video of the subject is converted into images, which are then selected manually for processing. However, several factors such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions and compression artifacts make face detection difficult. This paper reports an algorithm for the conservation of energy using face detection for various devices. The present paper suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
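The final step the abstract describes — dimming the whole frame and histogram-equalizing only the detected face region — can be sketched as follows. The box coordinates, the synthetic test frame, and the 50% dimming factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

def equalize_region(frame, box):
    """Dim the whole frame, then histogram-equalize only the pixels in
    box = (r0, r1, c0, c1), e.g. a detected face region."""
    out = (frame * 0.5).astype(np.uint8)       # crude 50% brightness cut
    r0, r1, c0, c1 = box
    face = frame[r0:r1, c0:c1]
    hist = np.bincount(face.ravel(), minlength=256)
    cdf = hist.cumsum()
    span = max(cdf.max() - cdf.min(), 1)
    lut = (cdf - cdf.min()) * 255 // span      # equalization lookup table
    out[r0:r1, c0:c1] = lut[face].astype(np.uint8)
    return out

frame = np.tile(np.arange(256, dtype=np.uint8), (64, 1))  # synthetic gradient
result = equalize_region(frame, (10, 50, 100, 200))
```

The lookup-table form of equalization keeps the per-frame cost low, which matters when the goal is saving display energy rather than image quality.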
International Nuclear Information System (INIS)
Press, D.E.; Halliday, S.M.; Gale, J.E.
1990-12-01
Existing CAD/CADD systems have been reviewed and the micro-computer compatible solids modelling CADD software SilverScreen was selected for use in constructing a CADD model of the Stripa site. Maps of the Stripa mine drifts, shafts, raises and stopes were digitized and used to create three-dimensional images of the north-eastern part of the mine and the SCV site. In addition, the use of CADD sub-programs to display variation in fracture geometry and hydraulic heads have been demonstrated. The database developed in this study is available as either raw digitized files, processed data files, SilverScreen script files or in DXF or IGES formats; all of which are described in this report. (au)
Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio
2018-03-01
We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as the log-normal. This study could therefore facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.
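The qualitative dependence of the filtrate distribution on the feed and pore distributions can be illustrated with a deliberately simplified sieving model: each particle encounters one randomly drawn pore and passes only if it is smaller. This geometric toy ignores the hydrodynamics and multi-plane structure of the paper's Brownian dynamics treatment; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def filtrate_sizes(feed_mu, feed_sigma, pore_mu, pore_sigma, n=100_000):
    """Each particle encounters one randomly drawn pore and passes only
    if its diameter is smaller than the pore diameter."""
    particles = rng.normal(feed_mu, feed_sigma, n)
    pores = rng.normal(pore_mu, pore_sigma, n)
    return particles[particles < pores]

passed = filtrate_sizes(feed_mu=1.0, feed_sigma=0.2,
                        pore_mu=1.1, pore_sigma=0.1)
```

Even this crude model reproduces the expected trend: the filtrate's mean size is shifted below the feed's mean because large particles are preferentially rejected.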
Directory of Open Access Journals (Sweden)
Najme Meimani
2017-09-01
Full Text Available In the application of photoacoustic human infant brain imaging, debubbled ultrasound gel or water is commonly used as a couplant for ultrasonic transducers due to their acoustic properties. The main challenge in using such a couplant is its discomfort for the patient. In this study, we explore the feasibility of a semi-dry coupling configuration to be used in photoacoustic computed tomography (PACT) systems. The coupling system includes an inflatable container consisting of a thin layer of Aqualene with ultrasound gel or water inside of it. The finite element method (FEM) is used for static and dynamic structural analysis of the proposed configuration for use in PACT for infant brain imaging. The outcome of the analysis is an optimum thickness of Aqualene that meets the weight tolerance requirement with the least attenuation and the best impedance match, to recommend for an experimental setting.
International Nuclear Information System (INIS)
Batou, A.; Soize, C.; Brie, N.
2013-01-01
Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the use of the classical modal analysis method not suitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with a good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with possibility of collisions between grids and which is submitted to a seismic loading
Energy Technology Data Exchange (ETDEWEB)
Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)
2013-09-15
Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the use of the classical modal analysis method not suitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with a good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with possibility of collisions between grids and which is submitted to a seismic loading.
Directory of Open Access Journals (Sweden)
Elder M. Mendoza Orbegoso
2017-06-01
Full Text Available Mango is one of the most popular and best-paid tropical fruits in worldwide markets; its exportation is regulated by phytosanitary quality controls that require killing the “fruit fly”. Thus, mangoes must undergo a hot-water treatment process that involves their immersion in hot water over a period of time. In this work, field measurements and analytical and simulation studies are carried out on available hot-water treatment equipment, called “Original”, that only complies with United States phytosanitary protocols. These approaches are used to characterize the fluid-dynamic and thermal behaviour that occurs during the mangoes' hot-water treatment process. An analytical model and computational fluid dynamics simulations are then developed for designing new hot-water treatment equipment, called “Hybrid”, that simultaneously meets both United States and Japan phytosanitary certifications. Comparison of the analytical results with the field measurements demonstrates that the “Hybrid” equipment offers better fluid-dynamic and thermal performance than the “Original” one.
Yu, Jen-Shiang K; Yu, Chin-Hui
2002-01-01
One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.
Similarity measures for face recognition
Vezzetti, Enrico
2015-01-01
Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures, including Minkowski distances, Mahalanobis distances, Hausdorff distances and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
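Three of the distance families the book surveys can be sketched in a few lines of generic NumPy; these are textbook definitions, not the book's own notation or implementations.

```python
import numpy as np

def minkowski(u, v, p=2):
    """Minkowski distance: p=1 is city-block, p=2 is Euclidean."""
    return float(np.sum(np.abs(u - v) ** p) ** (1.0 / p))

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

In a face-matching pipeline these operate on feature vectors (or landmark point sets) extracted from two images; the choice of measure trades off sensitivity to scale, outliers, and geometric alignment.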
International Nuclear Information System (INIS)
Marinescu, D.C.; Radulescu, T.G.
1977-06-01
The Integral Fourier Transform has a large range of applications in such areas as communication theory, circuit theory, physics, etc. In order to perform a discrete Fourier transform, the Finite Fourier Transform is defined; it operates upon N samples of a uniformly sampled continuous function. All the properties known in the continuous case can be found in the discrete case also. The first part of the paper presents the relationship between the Finite Fourier Transform and the Integral one. Computing a Finite Fourier Transform is a problem in itself, since transforming a set of N data requires N^2 "operations" if the transformation relations are used directly. An algorithm known as the Fast Fourier Transform (FFT) reduces this figure from N^2 to a more reasonable N log2 N when N is a power of two. The original Cooley-Tukey algorithm for the FFT can be further improved when higher radices are used; the price to be paid in this case is the increased complexity of such algorithms. The recurrence relations and a comparison among such algorithms are presented. The key point in understanding the application of the FFT resides in the convolution theorem, which states that the convolution (an N^2-type procedure) of the primitive functions is equivalent to the ordinary multiplication of their transforms. Since filtering is actually a convolution process, we present several procedures to perform digital filtering by means of the FFT; the best is the one using segmentation of records and transformation of pairs of records. In the digital processing of signals, besides digital filtering, special attention is paid to the estimation of various statistical characteristics of a signal, such as autocorrelation and correlation functions, periodograms, power spectral density, etc. We give several algorithms for the consistent and unbiased estimation of such functions by means of the FFT. (author)
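The convolution theorem the abstract leans on — filtering as multiplication of spectra rather than an O(N^2) direct sum — can be illustrated with NumPy's FFT. This is a generic sketch of the technique, not the paper's original implementation; zero-padding to the next power of two prevents circular wrap-around and keeps the radix-2 FFT applicable.

```python
import numpy as np

def fft_filter(signal, kernel):
    """Linear convolution via the FFT: multiply spectra instead of the
    O(N^2) direct sum; zero-padding prevents circular wrap-around."""
    n = len(signal) + len(kernel) - 1
    nfft = 1 << (n - 1).bit_length()           # next power of two
    spec = np.fft.rfft(signal, nfft) * np.fft.rfft(kernel, nfft)
    return np.fft.irfft(spec, nfft)[:n]

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0])                      # first-difference filter
y = fft_filter(x, h)
```

The same spectrum-multiplication trick underlies the segmented (overlap-based) filtering of long records that the paper recommends, and FFT-based estimation of autocorrelations and power spectra.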
Directory of Open Access Journals (Sweden)
Ion Stiharu
2010-08-01
Full Text Available Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications, complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node.
Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene
2010-01-01
Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications, complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2015-01-01
Within the last 20 years, there has been increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis, or face tracking. However, free and easy-to-use software that i...
Litvin, Faydor L.; Fuentes, Alfonso; Hawkins, J. M.; Handschuh, Robert F.
2001-01-01
A new type of face gear drive for application in transmissions, particularly in helicopters, has been developed. The new geometry differs from the existing geometry by the application of asymmetric profiles and a double-crowned pinion of the face gear mesh. The paper describes the computerized design, simulation of meshing and contact, and stress analysis by the finite element method. Special-purpose computer codes have been developed to conduct the analysis. The analysis of this new type of face gear is illustrated with a numerical example.
Multithread Face Recognition in Cloud
Directory of Open Access Journals (Sweden)
Dakshina Ranjan Kisku
2016-01-01
Full Text Available Faces are highly challenging and dynamic objects that are employed as biometric evidence in identity verification. Recently, biometric systems have proven to be essential security tools, in which bulk matching of enrolled people against watch lists is performed every day. To facilitate this process, organizations need to maintain large computing facilities. To minimize the burden of maintaining these costly facilities for enrollment and recognition, multinational companies can transfer this responsibility to third-party vendors who maintain cloud computing infrastructures for recognition. In this paper, we showcase cloud-computing-enabled face recognition, which utilizes PCA-characterized face instances and reduces the number of invariant SIFT points that are extracted from each face. To achieve high interclass and low intraclass variances, a set of six PCA-characterized face instances is computed on the columns of each face image by varying the number of principal components. Extracted SIFT keypoints are fused using sum and max fusion rules. A novel cohort selection technique is applied to increase overall performance. The proposed prototype is tested on the BioID and FEI face databases, and the efficacy of the system is demonstrated by the obtained results. We also compare the proposed method with other well-known methods.
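The PCA step the abstract describes can be sketched roughly as follows (a hedged illustration on synthetic data; the function name and the component counts are our assumptions, not the authors' pipeline): face samples are projected onto a varying number of principal components to produce several characterizations of the same faces.

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T          # one k-dimensional score vector per sample

rng = np.random.default_rng(1)
faces = rng.standard_normal((20, 64))   # 20 synthetic 64-pixel "faces"
# Several PCA-characterized instances, varying the number of components.
instances = [pca_project(faces, k) for k in (2, 4, 8)]
```

Varying `k` trades compactness against retained variance, which is the knob the paper turns to build its set of face instances.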
Energy Technology Data Exchange (ETDEWEB)
Nordrik, R.
1993-12-01
The processes in the combustion chamber of internal combustion engines have received increased attention in recent years because their efficiencies are important both economically and environmentally. This doctoral thesis studies the ignition phenomena by means of numerical simulation methods. The fundamental physical relations include flow field conservation equations, thermodynamics, chemical reaction kinetics, transport properties and spark modelling. Special attention is given to the inclusion of chemical kinetics in the flow field equations. Using his No Transport of Radicals Concept method, the author reduces the computational efforts by neglecting the transport of selected intermediate species. The method is validated by comparison with flame propagation data. A computational method is described and used to simulate spark ignition in laminar premixed methane-air mixtures and the autoignition process of a methane bubble surrounded by hot air. The spark ignition simulation agrees well with experimental results from the literature. The autoignition simulation identifies the importance of diffusive and chemical processes acting together. The ignition delay times exceed the experimental values found in the literature for premixed ignition delay, presumably because of the mixing process and lack of information on low temperature reactions in the skeletal kinetic mechanism. Transient turbulent methane jet autoignition is simulated by means of the KIVA-II code. Turbulent combustion is modelled by the Eddy Dissipation Concept. 90 refs., 81 figs., 3 tabs.
Ruiz-Blanco, Yasser B; Paz, Waldo; Green, James; Marrero-Ponce, Yovani
2015-05-16
The exponential growth of protein structural and sequence databases is enabling multifaceted approaches to understanding the long-sought sequence-structure-function relationship. Advances in computation now make it possible to apply well-established data mining and pattern recognition techniques to these data to learn models that effectively relate structure and function. However, extracting meaningful numerical descriptors of protein sequence and structure is a key issue that requires an efficient and widely available solution. We here introduce ProtDCal, a new computational software suite capable of generating tens of thousands of features considering both sequence-based and 3D-structural descriptors. We demonstrate, by means of principal component analysis and Shannon entropy tests, how ProtDCal's sequence-based descriptors provide new and more relevant information not encoded by currently available servers for sequence-based protein feature generation. The wide diversity of the 3D-structure-based features generated by ProtDCal is shown to provide additional complementary information and effectively completes its general protein encoding capability. As a demonstration of the utility of ProtDCal's features, prediction models of N-linked glycosylation sites are trained and evaluated. Classification performance compares favourably with that of contemporary predictors of N-linked glycosylation sites, in spite of not using domain-specific features as input information. ProtDCal provides a friendly and cross-platform graphical user interface, developed in the Java programming language, and is freely available at: http://bioinf.sce.carleton.ca/ProtDCal/ . ProtDCal introduces local and group-based encoding, which enhances the diversity of the information captured by the computed features. Furthermore, we have shown that adding structure-based descriptors contributes non-redundant additional information to the feature-based characterization of polypeptide systems. This
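A Shannon entropy test of the kind mentioned above can be sketched as follows (a hedged illustration; the binning scheme, names, and synthetic data are our own assumptions): a descriptor that varies across a dataset has high histogram entropy, while a constant descriptor carries no information.

```python
import numpy as np

def descriptor_entropy(values, bins=10):
    """Shannon entropy (in bits) of a descriptor's histogram over a dataset."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()     # empirical bin probabilities
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
informative = rng.uniform(0, 1, 1000)   # spread-out descriptor values
degenerate = np.full(1000, 0.5)         # constant descriptor, no information
# The informative descriptor scores strictly higher.
assert descriptor_entropy(informative) > descriptor_entropy(degenerate)
```

Ranking descriptors by such an entropy score is one simple way to argue that a new feature set captures information existing servers do not.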
Murphy, Ryan J; Basafa, Ehsan; Hashemi, Sepehr; Grant, Gerald T; Liacouras, Peter; Susarla, Srinivas M; Otake, Yoshito; Santiago, Gabriel; Armand, Mehran; Gordon, Chad R
2015-08-01
The aesthetic and functional outcomes surrounding Le Fort-based face-jaw-teeth transplantation have been suboptimal, often leading to posttransplant class II/III skeletal profiles, palatal defects, and "hybrid malocclusion." Therefore, a novel technology, real-time cephalometry, was developed to provide the surgical team instantaneous, intraoperative knowledge of three-dimensional dentoskeletal parameters. Mock face-jaw-teeth transplantation operations were performed on plastic and cadaveric human donor/recipient pairs (n = 2). Preoperatively, cephalometric landmarks were identified on donor/recipient skeletons using segmented computed tomographic scans. The computer-assisted planning and execution workstation tracked the position of the donor face-jaw-teeth segment in real time during placement/inset onto the recipient, reporting pertinent hybrid cephalometric parameters from any movement of donor tissue. The intraoperative data measured through real-time cephalometry were compared to posttransplant measurements for accuracy assessment. In addition, posttransplant cephalometric relationships were compared to planned outcomes to determine face-jaw-teeth transplantation success. Compared with postoperative data, the real-time cephalometry-calculated intraoperative measurement errors were 1.37 ± 1.11 mm and 0.45 ± 0.28 degrees for the plastic skull and 2.99 ± 2.24 mm and 2.63 ± 1.33 degrees for the human cadaver experiments. These results were comparable to the posttransplant relations to planned outcome (human cadaver experiment, 1.39 ± 1.81 mm and 2.18 ± 1.88 degrees; plastic skull experiment, 1.06 ± 0.63 mm and 0.53 ± 0.39 degrees). Based on this preliminary testing, real-time cephalometry may be a valuable adjunct for adjusting and measuring "hybrid occlusion" in face-jaw-teeth transplantation and other orthognathic surgical procedures.
Oliveira-Santos, Thiago; Baumberger, Christian; Constantinescu, Mihai; Olariu, Radu; Nolte, Lutz-Peter; Alaraibi, Salman; Reyes, Mauricio
2013-05-01
The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understanding the patient's wishes and achieving the desired results. To date, most plastic surgeons rely on either "free hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses 3 standard 2D pictures to create a 3D representation of the patient's face on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods together in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations to be used in clinics (with an average error of 2 mm) in less than 5 min.
Mastorakis, Nikos E
2009-01-01
Features contributions that are focused on significant aspects of current numerical methods and computational mathematics. This book carries chapters that present advanced methods and various variations on known techniques that can solve difficult scientific problems efficiently.
Human Face as human single identity
Warnars, Spits
2014-01-01
The human face, as a physical characteristic, can be used as a unique identity enabling a computer to recognize a human by transforming the face, through a face-recognition algorithm, into a simple text number that can serve as a primary key for that human. Using the human face as a single identity for humans would be accomplished by building a huge, worldwide central human-face database, in which human faces around the world are recorded from time to time and from generation to generation. The database architecture will be divided into human face image ...
M. Kasemann
Overview: In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations: Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
Numerical analysis of bifurcations
International Nuclear Information System (INIS)
Guckenheimer, J.
1996-01-01
This paper is a brief survey of numerical methods for computing bifurcations of generic families of dynamical systems. Emphasis is placed upon algorithms that reflect the structure of the underlying mathematical theory while retaining numerical efficiency. Significant improvements in the computational analysis of dynamical systems are to be expected from greater reliance on geometric insight coming from dynamical systems theory. copyright 1996 American Institute of Physics
Alternative face models for 3D face registration
Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale
2007-01-01
3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a couple of well-selected AFMs can trade off computation time against accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
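The rigid-alignment step at the heart of ICP-style surface registration can be sketched with the Kabsch/Procrustes solution (a hedged, self-contained illustration: point correspondences are assumed known here, which is exactly what full ICP iterates to estimate; all names and the synthetic point cloud are our own).

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t that best map the rows of P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(3)
P = rng.standard_normal((30, 3))               # synthetic facial surface points
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])  # rotated and translated copy
R, t = kabsch(P, Q)
assert np.allclose(P @ R.T + t, Q, atol=1e-8)  # exact recovery, noiseless case
```

Real ICP alternates this closed-form alignment with nearest-neighbour correspondence search, which is why registering against a single average face model rather than the whole gallery saves so much computation.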
Numerically- analysed Multiwavelet Transform computations ...
African Journals Online (AJOL)
pc
2018-03-05
Numerical Computation of Detonation Stability
Kabanov, Dmitry
2018-01-01
Then we investigate Fickett's detonation analogue coupled with a particular reaction-rate expression. In addition to the linear stability analysis of this model, we demonstrate that it exhibits rich nonlinear dynamics with multiple bifurcations and chaotic behavior.
I. Fisk
2011-01-01
Introduction: The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme: The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations: Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
P. McBride
The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...
Numerical methods for differential equations and applications
International Nuclear Information System (INIS)
Ixaru, L.G.
1984-01-01
This book is addressed to persons who, without being professionals in applied mathematics, are often faced with the problem of numerically solving differential equations. In each of the first three chapters a definite class of methods is discussed for the solution of the initial value problem for ordinary differential equations: multistep methods; one-step methods; and piecewise perturbation methods. The fourth chapter is mainly focussed on the boundary value problems for linear second-order equations, with a section devoted to the Schroedinger equation. In the fifth chapter the eigenvalue problem for the radial Schroedinger equation is solved in several ways, with computer programs included. (Auth.)
International Nuclear Information System (INIS)
Benkenida, Adlene
1999-01-01
This work is devoted to the development and the use of a numerical code aimed at computing complex two-phase flows in which the topology of the interfaces evolves in time. The solution strategy makes use of a fixed grid on which interfaces evolve freely. The governing equations of the model (one-fluid model) are obtained by adding the local, instantaneous Navier-Stokes equations of each phase after a spatial filtering. The use of an Eulerian approach yields difficulties in estimating several of the two-phase quantities, especially the viscous stress tensor. This problem is overcome by deriving and validating an expression of the stress tensor valid for any Eulerian treatment and whatever the orientation of the interfaces with respect to the grid. To simplify the governing equations of the model, it is assumed that no phase change occurs, that no local slip exists between the phases, and that no small-scale turbulence is present. The possibility of removing some of these hypotheses is discussed, especially with the future aim of developing a large-eddy simulation approach of two-phase flows in which the motion and the effects of small-scale two-phase structures could be taken into account. Interface transport is performed by using an FCT front-capturing method without any interface reconstruction procedure. It is shown through several tests that the version of Zalesak's (1979) algorithm in which each direction is treated independently yields the best results, even though a tendency for interfacial regions to thicken artificially is observed in regions with high stretching rates. The code is validated by performing simulations of some simple two-phase flows and by comparing numerical results with available analytical solutions, experiments, or previous computations. Among the results of these tests, those concerning the bouncing of a bubble on a rigid wall are the most original and shed new light on this phenomenon, especially by revealing the time evolution of the
Famous face recognition, face matching, and extraversion.
Lander, Karen; Poyarekar, Siddhi
2015-01-01
It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1990-01-01
Spurious stable as well as unstable steady-state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DEs) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical scheme are the determining factors. In addition, the occurrence of spurious steady states is not restricted to time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they were all assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability are not only highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and are not limited to time steps that are beyond the linearized stability limit.
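The phenomenon can be illustrated with a standard, hedged example (our own choice, not taken from the paper): explicit Euler applied to the logistic ODE u' = u(1 - u), whose only attracting steady state is u = 1, produces a spurious period-2 asymptote once the step size exceeds the scheme's linearized stability limit of delta_t = 2.

```python
def euler_orbit(u0, delta_t, n):
    """Iterate explicit Euler for u' = u(1 - u): u <- u + delta_t*u*(1 - u)."""
    u = u0
    for _ in range(n):
        u = u + delta_t * u * (1.0 - u)
    return u

# Below the stability limit: the iterates converge to the true steady state.
assert abs(euler_orbit(0.3, 1.0, 200) - 1.0) < 1e-9

# Above it (delta_t = 2.3): consecutive iterates settle into a period-2
# cycle whose two values are not steady states of the ODE at all.
a = euler_orbit(0.3, 2.3, 2000)
b = euler_orbit(0.3, 2.3, 2001)
assert abs(a - b) > 0.1                              # the cycle has width
assert abs(euler_orbit(0.3, 2.3, 2002) - a) < 1e-6   # and period two
```

With this step size the Euler map is conjugate to the logistic map at parameter 1 + delta_t = 3.3, which is exactly the scheme-dependent bifurcation behavior the abstract warns about.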
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...
International Nuclear Information System (INIS)
Piran, T.
1982-01-01
There are many recent developments in numerical relativity, but there remain important unsolved theoretical and practical problems. The author reviews existing numerical approaches to solution of the exact Einstein equations. A framework for classification and comparison of different numerical schemes is presented. Recent numerical codes are compared using this framework. The discussion focuses on new developments and on currently open questions, excluding a review of numerical techniques. (Auth.)
Kabilan, Senthil; Jung, Hun Bok; Kuprat, Andrew P; Beck, Anthon N; Varga, Tamas; Fernandez, Carlos A; Um, Wooyong
2016-06-21
X-ray microtomography (XMT) imaging combined with three-dimensional (3D) computational fluid dynamics (CFD) modeling technique was used to study the effect of geochemical and geomechanical processes on fracture permeability in composite Portland cement-basalt caprock core samples. The effect of fluid density and viscosity and two different pressure gradient conditions on fracture permeability was numerically studied by using fluids with varying density and viscosity and simulating two different pressure gradient conditions. After the application of geomechanical stress but before CO2-reaction, CFD revealed fluid flow increase, which resulted in increased fracture permeability. After CO2-reaction, XMT images displayed preferential precipitation of calcium carbonate within the fractures in the cement matrix and less precipitation in fractures located at the cement-basalt interface. CFD estimated changes in flow profile and differences in absolute values of flow velocity due to different pressure gradients. CFD was able to highlight the profound effect of fluid viscosity on velocity profile and fracture permeability. This study demonstrates the applicability of XMT imaging and CFD as powerful tools for characterizing the hydraulic properties of fractures in a number of applications like geologic carbon sequestration and storage, hydraulic fracturing for shale gas production, and enhanced geothermal systems.
Kong, Xiangxue; Tang, Lei; Ye, Qiang; Huang, Wenhua; Li, Jianyi
2017-11-01
Accurate and safe posterior thoracic pedicle insertion (PTPI) remains a challenge. Patient-specific drill templates (PDTs) created by rapid prototyping (RP) can assist in posterior thoracic pedicle insertion, but pose biocompatibility risks. The aims of this study were to develop alternative PDTs with computer numerical control (CNC) and assess their feasibility and accuracy in assisting PTPI. Preoperative CT images of 31 cadaveric thoracic vertebrae were obtained and the optimal pedicle screw trajectories were planned. The PDTs with optimal screw trajectories were randomly assigned to be designed and manufactured by CNC or RP in each vertebra. With the guidance of the CNC- or RP-manufactured PDTs, the appropriate screws were inserted into the pedicles. Postoperative CT scans were performed to analyze any deviations at the entry point and midpoint of the pedicles. The CNC group showed significantly shorter manufacturing time and lower cost compared with the RP group (P < 0.05). The screw positions were grade 0 in 90.3% and grade 1 in 9.7% of the cases in the CNC group, and grade 0 in 93.5% and grade 1 in 6.5% of the cases in the RP group (P = 0.641). CNC-manufactured PDTs are viable for assisting in PTPI with good feasibility and accuracy.
I. Fisk
2010-01-01
Introduction: It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion: An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...
M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley
Introduction: More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th, to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
The Functional Neuroanatomy of Human Face Perception.
Grill-Spector, Kalanit; Weiner, Kevin S; Kay, Kendrick; Gomez, Jesse
2017-09-15
Face perception is critical for normal social functioning and is mediated by a network of regions in the ventral visual stream. In this review, we describe recent neuroimaging findings regarding the macro- and microscopic anatomical features of the ventral face network, the characteristics of white matter connections, and basic computations performed by population receptive fields within face-selective regions composing this network. We emphasize the importance of the neural tissue properties and white matter connections of each region, as these anatomical properties may be tightly linked to the functional characteristics of the ventral face network. We end by considering how empirical investigations of the neural architecture of the face network may inform the development of computational models and shed light on how computations in the face network enable efficient face perception.
Virtual & Real Face to Face Teaching
Teneqexhi, Romeo; Kuneshka, Loreta
2016-01-01
In traditional "face to face" lessons, while the teacher writes on a blackboard or whiteboard, the students are always behind the teacher. Sometimes this happens even in recorded video lessons. Most of the time during the lesson, the teacher shows the students his back, not his face. We do not think the term "face to…
A robust human face detection algorithm
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications, such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin-color histogram, morphological processing, and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
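The skin-colour filtering stage such a pipeline relies on can be sketched with a classic explicit RGB rule (a hedged illustration only: the thresholds below are a well-known heuristic, not the authors' histogram-based method, and the image data is synthetic).

```python
import numpy as np

def skin_mask(img):
    """Boolean mask of pixels passing a simple RGB skin-colour rule."""
    # Cast to int so channel differences cannot overflow uint8 arithmetic.
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r - g > 15) & (r > b)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)     # skin-like pixel
img[1, 1] = (30, 80, 200)      # blue background pixel
mask = skin_mask(img)
assert mask[0, 0] and not mask[1, 1]
```

In a full detector, connected regions of this mask would then be cleaned by morphological opening/closing and screened by the geometric tests (aspect ratio, eye/mouth presence) the abstract describes.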
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs run by users each day, which was already meeting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenge (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier-0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness of the Tier-0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for tape-to-buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successes prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
I. Fisk
2010-01-01
Introduction The first data-taking period of November produced a first scientific paper, a very satisfying step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and to make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all Computing activities, in parallel with the recruitment of new Cat A operators. The teams are being completed, and by mid-year the new tasks will have been assigned. CRB (Computing Resource Board) The Board has met twice since the last CMS week. In December it reviewed the experience of the November data-taking period and noted the positive improvements made in site readiness. It also reviewed the policy under which Tier-2s are associated with Physics Groups. Such associations are decided twice per ye...
M. Kasemann
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...
Gasparini, N. M.; Hobley, D. E. J.; Tucker, G. E.; Istanbulluoglu, E.; Adams, J. M.; Nudurupati, S. S.; Hutton, E. W. H.
2014-12-01
Computational models are important tools that can be used to quantitatively understand the evolution of real landscapes. Commonalities exist among most landscape evolution models, although they are also idiosyncratic, in that they are coded in different languages, require different input values, and are designed to tackle a unique set of questions. These differences can make applying a landscape evolution model challenging, especially for novice programmers. In this study, we compare and contrast two landscape evolution models that are designed to tackle similar questions, but whose actual designs are quite different. The first model, CHILD, is over a decade old and is relatively well-tested, well-developed and well-used. It is coded in C++, operates on an irregular grid and was designed more with function than user experience in mind. In contrast, the second model, Landlab, is relatively new and was designed to be accessible to a wide range of scientists, including those who have not previously used or developed a numerical model. Landlab is coded in Python, a relatively easy language for the non-proficient programmer, and has the ability to model landscapes described on both regular and irregular grids. We present landscape simulations from both modeling platforms. Our goal is to illustrate best practices for implementing a new process module in a landscape evolution model, and therefore the simulations are applicable regardless of the modeling platform. We contrast differences and highlight similarities between the use of the two models, including setting up the model and input file for different evolutionary scenarios, computational time, and model output. Whenever possible, we compare model output with analytical solutions and illustrate the effects, or lack thereof, of a uniform vs. non-uniform grid. Our simulations focus on implementing a single process, including detachment-limited or transport-limited fluvial bedrock incision and linear or non
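The detachment-limited fluvial incision mentioned above is commonly governed by the stream power law, dz/dt = U - K A^m S^n. The following is a minimal 1-D Python sketch of that process (illustrative parameters and a Hack's-law drainage-area proxy; this is not CHILD or Landlab code):

```python
def evolve_profile(z, dx, dt, steps, U=1e-3, K=1e-5, m=0.5, n=1.0,
                   hack_k=1.0, hack_h=1.67):
    """Explicit 1-D detachment-limited stream power model,
        dz/dt = U - K * A**m * S**n,
    with drainage area A taken from Hack's law, A = hack_k * L**hack_h,
    where L is distance from the divide (assumed at the upstream end).
    Node 0 is a fixed base-level outlet; flow is toward node 0.
    All parameter values here are illustrative assumptions."""
    z = list(z)
    N = len(z)
    for _ in range(steps):
        new_z = list(z)
        for i in range(1, N):
            L = (N - i) * dx                    # distance from the divide
            A = hack_k * L ** hack_h            # drainage area proxy
            S = max((z[i] - z[i - 1]) / dx, 0)  # downstream slope (upwind)
            new_z[i] = z[i] + dt * (U - K * A**m * S**n)
        z = new_z
    return z
```

Starting from a flat surface, uniform uplift U builds relief while the stream power term erodes it back, and the profile relaxes toward the steady-state slope S = (U / (K A^m))^(1/n), which is one of the analytical solutions such models are typically checked against.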
Shibata, Masaru
2016-01-01
This book is composed of two parts. The first part describes the basics of numerical relativity: the formulations and methods for solving Einstein's equation and the general relativistic matter field equations. This part will be helpful for beginners who would like to understand the content of numerical relativity and its background. The second part focuses on applications of numerical relativity. A wide variety of scientific numerical results are introduced, focusing in particular on mergers of binary neutron stars and black holes.
Directory of Open Access Journals (Sweden)
Brown, Andrew
2014-08-01
Full Text Available This paper presents a prototype Stereolithography (STL) file format slicing and tool-path generation algorithm, which serves as a data front-end for a Rapid Prototyping (RP) entry-level three-dimensional (3-D) printer. Used mainly in Additive Manufacturing (AM), 3-D printers are devices that apply plastic, ceramic, and metal, layer by layer, in all three dimensions on a flat surface (X, Y, and Z axes). 3-D printers, unfortunately, cannot print an object without a special algorithm that creates the Computer Numerical Control (CNC) instructions for printing. An STL algorithm therefore forms a critical component for Layered Manufacturing (LM), also referred to as RP. The purpose of this study was to develop an algorithm capable of processing and slicing an STL file or multiple files, resulting in a tool-path, and finally compiling a CNC file for an entry-level 3-D printer. The prototype algorithm was implemented for an entry-level 3-D printer that utilises the Fused Deposition Modelling (FDM) process, or Solid Freeform Fabrication (SFF) process, an AM technology. Following an experimental method, the full data flow path for the prototype algorithm was developed, starting with STL data files, and then processing the STL data into a G-code file format by slicing the model and creating a tool-path. This layering method is used by most 3-D printers to turn a 2-D object into a 3-D object. The STL algorithm developed in this study presents innovative opportunities for LM, since it allows engineers and architects to transform their ideas easily into a solid model in a fast, simple, and cheap way. This is accomplished by allowing STL models to be sliced rapidly, effectively, and without error, and finally to be processed and prepared into a G-code print file.
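The core slicing step such an algorithm performs, intersecting each STL facet with a horizontal plane, can be sketched in a few lines of Python (a simplified illustration under stated assumptions, not the prototype's actual implementation):

```python
def slice_triangle(tri, z0, eps=1e-9):
    """Intersect one STL facet (three (x, y, z) vertices) with the plane
    z = z0, returning the 2-D cut segment or None.  Collecting these
    segments over all facets of the mesh yields one slice contour, which
    a tool-path generator would then order and convert to G-code."""
    pts = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        da, db = a[2] - z0, b[2] - z0
        if abs(da) < eps and abs(db) < eps:   # edge lies in the plane
            return ((a[0], a[1]), (b[0], b[1]))
        if da * db < 0:                       # edge crosses the plane
            t = da / (da - db)                # interpolation factor
            pts.append((a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1])))
    return tuple(pts) if len(pts) == 2 else None
```

Repeating this for each layer height z0 (stepping by the layer thickness) produces the stack of 2-D contours that the layering method described in the abstract builds the object from.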
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
M. Kasemann
Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier-1 and Tier-2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts, monitoring the services and infrastructure as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
P. McBride
The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data were then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...
I. Fisk
2013-01-01
Computing operations have been at a lower level as the Run 1 samples are completed and smaller samples for upgrades and preparations ramp up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently, transferring on average close to 520 TB per week with peaks close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months. Tape utilisation was a focus for the operations teams, with frequent deletion campaigns of deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There is still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. GlideInWMS components are now also deployed at CERN, adding to the GlideInWMS factory located in the US. There is new operational collaboration between the CERN team and the UCSD GlideIn factory operators, who cover each other's time zones by monitoring and debugging pilot jobs sent from the facto...
European cinema: face to face with Hollywood
Elsaesser, T.
2005-01-01
In the face of renewed competition from Hollywood since the early 1980s and the challenges posed to Europe's national cinemas by the fall of the Wall in 1989, independent filmmaking in Europe has begun to re-invent itself. European Cinema: Face to Face with Hollywood re-assesses the different
M. Kasemann
CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and Job Robot submissions have been instrumental in site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been developed further, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a fourfold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...
Velocity field calculation for non-orthogonal numerical grids
Energy Technology Data Exchange (ETDEWEB)
Flach, G. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-03-01
Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non
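One accepted way to recover a cell's velocity vector from the normal fluxes on a non-orthogonal cell (not necessarily the approach taken in this study) is a least-squares fit over the cell's faces: find the vector v that best reproduces the reported normal flux on every face, then divide by water content to obtain the pore velocity used for particle tracking. A minimal 2-D Python sketch:

```python
def darcy_velocity_2d(faces, water_content):
    """Least-squares reconstruction of the 2-D Darcy velocity of one cell
    from the normal fluxes reported on its (possibly non-orthogonal) faces.
    `faces` is a list of ((nx, ny), q) pairs: the unit outward normal and
    the normal Darcy flux across that face.  Returns the pore velocity,
    v / theta, suitable for particle tracking.  Illustrative sketch only."""
    # Normal equations for min over v of sum_f (n_f . v - q_f)**2
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (nx, ny), q in faces:
        a11 += nx * nx
        a12 += nx * ny
        a22 += ny * ny
        b1 += nx * q
        b2 += ny * q
    det = a11 * a22 - a12 * a12   # singular only if all normals are parallel
    vx = (a22 * b1 - a12 * b2) / det
    vy = (a11 * b2 - a12 * b1) / det
    return vx / water_content, vy / water_content
```

When the faces are orthogonal to the coordinate axes this reduces to the usual practice of taking the normal fluxes directly as Darcy velocity components; for tilted faces it avoids the approximation, noted in the abstract, of treating a non-axis-aligned normal flux as a true velocity component.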
Face Recognition and Tracking in Videos
Directory of Open Access Journals (Sweden)
Swapnil Vitthal Tathe
2017-07-01
Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance under changes due to illumination, environmental factors, scale, pose and orientation.
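The Kalman tracking stage can be illustrated with a minimal 1-D constant-velocity filter of the kind typically used to smooth a tracked face coordinate between detections (one filter per axis). This is an independent sketch with assumed noise levels, not the paper's implementation:

```python
class ConstantVelocityKalman1D:
    """Minimal 1-D constant-velocity Kalman filter.  The process and
    measurement noise levels q and r are illustrative assumptions."""

    def __init__(self, x0, q=1e-3, r=1.0):
        self.x = [x0, 0.0]                    # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]     # state covariance
        self.q, self.r = q, r

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + dt * v, v]
        P = self.P
        # P <- F P F^T + Q with F = [[1, dt], [0, 1]], Q = diag(q, q)
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        # Measurement is position only: H = [1, 0]
        s = self.P[0][0] + self.r             # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]                     # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0][0], self.P[0][1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]
        return self.x[0]
```

In a tracker, `predict` propagates the face position to the current frame (bridging missed detections) and `update` folds in a new detection, which is what gives the framework its robustness to momentary detection failures.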
Siegler, Robert S.; Braithwaite, David W.
2016-01-01
In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…
Bright, William
In most languages encountered by linguists, the numerals, considered as a paradigmatic set, constitute a morpho-syntactic problem of only moderate complexity. The Indo-Aryan language family of North India, however, presents a curious contrast. The relatively regular numeral system of Sanskrit, as it has developed historically into the modern…
Rao, G Shanker
2006-01-01
About the Book: This book provides an introduction to Numerical Analysis for students of Mathematics and Engineering. The book is designed in accordance with the common core syllabus of Numerical Analysis of the universities of Andhra Pradesh, and also the syllabus prescribed in most Indian universities. Salient features: approximate and numerical solutions of algebraic and transcendental equations; interpolation of functions; numerical differentiation and integration; and numerical solution of ordinary differential equations. The last three chapters deal with Curve Fitting, Eigenvalues and Eigenvectors of a Matrix, and Regression Analysis. Each chapter is supplemented with a number of worked-out examples as well as a number of problems to be solved by the students, which helps in a better understanding of the subject. Contents: Errors; Solution of Algebraic and Transcendental Equations; Finite Differences; Interpolation with Equal Intervals; Interpolation with Unequal Int...
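As an illustration of the kind of method such a course covers for transcendental equations (a generic sketch, not code from the book), here is a Newton-Raphson iteration in Python applied to x = cos x:

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration for f(x) = 0: repeatedly replace x with
    x - f(x)/f'(x) until the residual is below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# Example: the transcendental equation x = cos(x), i.e. f(x) = x - cos(x)
root = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 1.0)
```

Near a simple root the iteration converges quadratically, which is why only a handful of steps are needed from a reasonable starting guess.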
National Research Council Canada - National Science Library
Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto
2000-01-01
... for their qualitative and quantitative analysis. This role is also emphasized by the continual development of computers and algorithms, which make it possible nowadays, using scientific computing, to tackle problems of such a large size that real-life phenomena can be simulated providing accurate responses at affordable computational cost. The corresp...
Handbook of numerical analysis
Ciarlet, Philippe G
Mathematical finance is a prolific scientific domain in which there exists a particular characteristic of developing both advanced theories and practical techniques simultaneously. Mathematical Modelling and Numerical Methods in Finance addresses the three most important aspects in the field: mathematical models, computational methods, and applications, and provides a solid overview of major new ideas and results in the three domains. Coverage of all aspects of quantitative finance, including models, computational methods and applications. Provides an overview of new ideas an
Chung, C-W.; Lee, C-C.; Liu, C-C.
2013-01-01
Mobile computers are now increasingly applied to facilitate face-to-face collaborative learning. However, the factors affecting face-to-face peer interactions are complex as they involve rich communication media. In particular, non-verbal interactions are necessary to convey critical communication messages in face-to-face communication. Through…
Lift-and-fill face lift: integrating the fat compartments.
Rohrich, Rod J; Ghavami, Ashkan; Constantine, Fadi C; Unger, Jacob; Mojallal, Ali
2014-06-01
Recent discovery of the numerous fat compartments of the face has improved our ability to more precisely restore facial volume while rejuvenating it through differential superficial musculoaponeurotic system treatment. Incorporation of selective fat compartment volume restoration along with superficial musculoaponeurotic system manipulation allows for improved control in recontouring while addressing one of the key problems in facial aging, namely, volume deflation. This theory was evaluated by assessing the contour changes from simultaneous face "lifting" and "filling" through fat compartment-guided facial fat transfer. A review of 100 face-lift patients was performed. All patients had an individualized component face lift with fat grafting to the nasolabial fold, deep malar, and high/lateral malar fat compartment locations. Photographic analysis using a computer program was conducted on oblique facial views preoperatively and postoperatively, to obtain the most projected malar contour point. Two independent observers visually evaluated the malar prominence and nasolabial fold improvements based on standardized photographs. Nasolabial fold improved by at least one grade in 81 percent and by over one grade in 11 percent. Malar prominence average projection increase was 13.47 percent and the average amount of lift was 12.24 percent. The malar prominence score improved by at least one grade in 62 percent of the patients postoperatively, and 9 percent had a greater than one grade improvement. Twenty-eight percent of the patients had a convex malar prominence postoperatively compared with 6 percent preoperatively. Malar prominence improved by at least one grade in 63 percent and by over one grade in 10 percent. The lift-and-fill face lift merges two key concepts in facial rejuvenation: (1) effective tissue manipulation by means of lifting and tightening in differential vectors according to original facial asymmetry and shape; and (2) selective fat compartment filling
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2015-05-01
Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.
A survey of real face modeling methods
Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng
2017-09-01
The face model has always been a research challenge in computer graphics, since it involves the coordination of multiple organs of the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control, analyzes their content and background, summarizes their advantages and disadvantages, and concludes that the muscle model, which is based on anatomical principles, has higher accuracy and is easier to drive.
Langton, Stephen R. H.; Law, Anna S.; Burton, A. Mike; Schweinberger, Stefan R.
2008-01-01
We report three experiments that investigate whether faces are capable of capturing attention when in competition with other non-face objects. In Experiment 1a participants took longer to decide that an array of objects contained a butterfly target when a face appeared as one of the distracting items than when the face did not appear in the array.…
Introduction to precise numerical methods
Aberth, Oliver
2007-01-01
Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed, and provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. It offers clearer, simpler descriptions and explanations of the various numerical methods, and two new types of numerical problems: accurately solving partial differential equations with the included software, and computing line integrals in the complex plane.
The hierarchical brain network for face recognition.
Zhen, Zonglei; Fang, Huizhen; Liu, Jia
2013-01-01
Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.
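The connectivity-based clustering described above can be sketched under strong simplifying assumptions: plain Pearson correlation between region time series, plus connected components of a thresholded correlation matrix. This is an illustrative stand-in, not the study's actual analysis pipeline.

```python
import numpy as np

def connectivity_clusters(ts, threshold=0.5):
    """Group regions whose time series correlate above `threshold`,
    via connected components of the thresholded correlation matrix."""
    corr = np.corrcoef(ts)                      # regions x regions
    n = corr.shape[0]
    adj = (np.abs(corr) >= threshold) & ~np.eye(n, dtype=bool)
    seen, clusters = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()            # depth-first component search
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(np.flatnonzero(adj[i]))
        seen |= comp
        clusters.append(sorted(comp))
    return clusters

# Synthetic demo: two independent "sub-networks" of three regions each.
rng = np.random.default_rng(1)
s1, s2 = rng.normal(size=300), rng.normal(size=300)
ts = np.vstack([s1 + 0.1 * rng.normal(size=300) for _ in range(3)] +
               [s2 + 0.1 * rng.normal(size=300) for _ in range(3)])
clusters = connectivity_clusters(ts, threshold=0.5)
```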
Numerical semigroups and applications
Assi, Abdallah
2016-01-01
This work presents applications of numerical semigroups in Algebraic Geometry, Number Theory, and Coding Theory. Background on numerical semigroups is presented in the first two chapters, which introduce basic notation, fundamental concepts, and irreducible numerical semigroups. The focus is in particular on free semigroups, which are irreducible; semigroups associated with planar curves are of this kind. The authors also introduce semigroups associated with irreducible meromorphic series, and show how these are used in order to present the properties of planar curves. Invariants of non-unique factorizations for numerical semigroups are also studied. These invariants are computationally accessible in this setting, and thus this monograph can be used as an introduction to Factorization Theory. Since factorizations and divisibility are strongly connected, the authors show some applications to AG Codes in the final section. The book will be of value for undergraduate students (especially those at a higher leve...
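A small computational illustration (not taken from the book): the gaps, genus, and Frobenius number of a numerical semigroup can be found by marking representable integers until min(generators)-many consecutive hits occur, after which every larger integer is representable.

```python
from math import gcd
from functools import reduce

def gaps(gens):
    """Gaps (non-representable nonnegative integers) of the numerical
    semigroup generated by `gens`; requires gcd(gens) == 1 so the set
    of gaps is finite."""
    assert reduce(gcd, gens) == 1, "generators must have gcd 1"
    m = min(gens)
    reachable = [True]               # 0 is always in the semigroup
    run, n = 1, 0
    while run < m:                   # m consecutive hits => all larger n reachable
        n += 1
        hit = any(n >= g and reachable[n - g] for g in gens)
        reachable.append(hit)
        run = run + 1 if hit else 0
    return [i for i, ok in enumerate(reachable) if not ok]

g35 = gaps([3, 5])                   # semigroup <3, 5>
frobenius = max(g35)                 # largest gap
genus = len(g35)                     # number of gaps
```

For two coprime generators a, b the Frobenius number is known in closed form, ab - a - b, which the sketch reproduces for <3, 5>.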
Mareels, Guy; Poyck, Paul P. C.; Eloot, Sunny; Chamuleau, Robert A. F. M.; Verdonck, Pascal R.
2006-01-01
A numerical model to investigate fluid flow and oxygen (O(2)) transport and consumption in the AMC-Bioartificial Liver (AMC-BAL) was developed and applied to two representative micro models of the AMC-BAL with two different gas capillary patterns, each combined with two proposed hepatocyte
Energy Technology Data Exchange (ETDEWEB)
Pinto, Joao Pedro C.T.A.; Santos, Andre A. Campagnole dos; Mesquita, Amir Z., E-mail: jpctap@cdtn.br, E-mail: aacs@cdtn.br, E-mail: amir@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG),Belo Horizonte, MG (Brazil). Lab. de Termo-Hidraulica
2013-07-01
This work evaluates and continues the study under development at the Thermo-Hydraulics Laboratory of CNEN/CDTN (Centro de Desenvolvimento da Tecnologia Nuclear), aiming to validate the methods and procedures used in numerical calculations of fluid flow in fuel elements of the VHTR core.
Directory of Open Access Journals (Sweden)
Yoshi-Taka Matsuda
2016-08-01
Highly social animals possess a well-developed ability to distinguish the faces of familiar conspecifics from novel ones, inducing distinct behaviors that maintain their society. However, how animals behave when they encounter ambiguous faces of familiar yet novel conspecifics, e.g., strangers whose faces resemble known individuals, has not been well characterised. Using a morphing technique and a preferential-looking paradigm, we address this question via the chimpanzee's face-recognition abilities. We presented eight subjects with three types of stimuli: (1) familiar faces, (2) novel faces, and (3) intermediate morphed faces that were 50% familiar and 50% novel conspecific faces. We found that chimpanzees spent more time looking at novel faces and scanned them more extensively than familiar or intermediate faces. Interestingly, chimpanzees looked at intermediate faces in a manner similar to familiar faces with regard to fixation duration, fixation count, and saccade length during facial scanning, even though they were encountering the intermediate faces for the first time. We excluded the possibility that subjects merely detected and avoided traces of morphing in the intermediate faces. These findings suggest a feeling-of-familiarity bias: chimpanzees perceive familiarity in an intermediate face by detecting traces of a known individual, as a 50% alteration is sufficient to evoke familiarity.
National Aeronautics and Space Administration — The design and qualification of entry systems for planetary exploration largely rely on computational simulations. However, state-of-the-art modeling capabilities...
Discrimination between smiling faces: Human observers vs. automated face analysis.
Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo
2018-05-11
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.
Baker, John G.
2009-01-01
Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.
International Nuclear Information System (INIS)
Aoufi, A.; Damamme, G.
2011-01-01
The aim of this work is to study, by numerical simulation, a mathematical modelling technique describing charge trapping during initial charge injection in an insulator subjected to electron beam irradiation. A two-flux method described by a set of two stationary transport equations is used to split the electron current j_e(z) into coupled forward j_e+(z) and backward j_e-(z) currents such that j_e(z) = j_e+(z) - j_e-(z). The sparse linear algebraic system resulting from the vertex-centered finite-volume discretization scheme is solved by an iterative decoupled fixed-point method that involves the direct inversion of a bidiagonal matrix. The sensitivity of the initial secondary electron emission yield with respect to the energy of the incident primary electron beam (that is, the penetration depth of the incident beam) and to the electron cross sections (absorption and diffusion) is investigated by numerical simulations. (authors)
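The "direct inversion of a bidiagonal matrix" mentioned above amounts to forward substitution, which costs O(n) with no factorization; a generic sketch (not the authors' code) is:

```python
import numpy as np

def solve_lower_bidiagonal(diag, sub, rhs):
    """Solve L x = rhs where L is lower bidiagonal with main diagonal
    `diag` and first subdiagonal `sub`, by forward substitution (O(n))."""
    n = len(diag)
    x = np.empty(n)
    x[0] = rhs[0] / diag[0]
    for i in range(1, n):
        x[i] = (rhs[i] - sub[i - 1] * x[i - 1]) / diag[i]
    return x

# Verify against a dense representation of the same system.
rng = np.random.default_rng(3)
n = 6
diag = rng.uniform(1.0, 2.0, n)      # well-conditioned diagonal
sub = rng.uniform(-1.0, 1.0, n - 1)
rhs = rng.normal(size=n)
L = np.diag(diag) + np.diag(sub, -1)
x = solve_lower_bidiagonal(diag, sub, rhs)
```

In a decoupled fixed-point scheme like the one described, each iteration would solve one such system for the forward flux and one (upper bidiagonal, by backward substitution) for the backward flux, alternating until convergence.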
Amberg, Brian
2011-01-01
Editing faces in movies is of interest in the special effects industry. We aim at producing effects such as the addition of accessories interacting correctly with the face or replacing the face of a stuntman with the face of the main actor. The system introduced in this thesis is based on a 3D generative face model. Using a 3D model makes it possible to edit the face in the semantic space of pose, expression, and identity instead of pixel space, and due to its 3D nature allows...
Dahlquist, Germund
1974-01-01
""Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served."" - MathSciNet (Mathematical Reviews on the Web), American Mathematical Society.Practical text strikes fine balance between students' requirements for theoretical treatment and needs of practitioners, with best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). Text includes many worked examples, problems, and an extensive bibliography.
International Nuclear Information System (INIS)
Uchibori, Akihiro; Ohshima, Hiroyuki
2004-04-01
Survey research on numerical methods for melting/solidification and dissolution/precipitation phenomena was performed to determine the policy for development of a simulation program. Melting/solidification and dissolution/precipitation have been key issues in the feasibility evaluation of several techniques applied in nuclear fuel cycle processes. Physical models for single-component melting/solidification, two-component solution solidification or precipitation by cooling, and precipitation by electrolysis (all moving boundary problems) were clarified from the literature survey. Transport equations are used for thermal-hydraulic analysis in the solid and liquid regions, and the behavior of the solid-liquid interface is described by a heat and mass transfer model. These physical models need to be introduced into the simulation program. Numerical methods for moving boundary problems fall into two types: interface tracking methods and interface capturing methods. Based on this classification, the performance of each numerical method was evaluated. The interface tracking method, which uses a Lagrangian moving mesh, requires a relatively complicated algorithm but predicts the moving interface with high accuracy. The interface capturing method, by contrast, uses an Eulerian fixed mesh, leading to a simple algorithm but relatively low prediction accuracy. The extended finite element method, classified as an interface capturing method, can predict the interface behavior accurately even though an Eulerian fixed mesh is used. We decided to apply the extended finite element method to the simulation program. (author)
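A toy example of the interface-capturing idea on an Eulerian fixed mesh: a one-phase Stefan melting problem solved with an explicit enthalpy method. The scheme and parameters are illustrative assumptions, not from the survey; the point is that the melt front is recovered from the enthalpy field, with no mesh motion.

```python
import numpy as np

# One-phase 1D Stefan (melting) problem on a fixed Eulerian grid:
# each cell carries an enthalpy H; it is liquid once H exceeds the
# latent heat L_h. The front is "captured" from the H field.
N, dx = 100, 0.01
k, c, L_h = 1.0, 1.0, 1.0            # conductivity, heat capacity, latent heat
dt = 0.2 * dx * dx * c / k           # explicit-scheme stability margin
H = np.zeros(N)                      # solid initially at the melting point
H[0] = L_h + c * 1.0                 # boundary cell held liquid at T = 1

def temperature(H):
    # solid or partially melted cells sit at the melting temperature T = 0
    return np.where(H > L_h, (H - L_h) / c, 0.0)

fronts = []                          # melt-front position over time
for step in range(4000):
    T = temperature(H)
    H[1:-1] += dt * k * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    H[0] = L_h + c * 1.0             # re-impose the hot boundary
    if step % 1000 == 999:
        fronts.append(np.argmax(H < L_h) * dx)   # first not-fully-melted cell
```

The front should advance roughly like the square root of time, the classical Stefan behavior, which gives a cheap sanity check on the captured interface.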
On numerical Bessel transformation
International Nuclear Information System (INIS)
Sommer, B.; Zabolitzky, J.G.
1979-01-01
The authors present a computer program to calculate three-dimensional Fourier or Bessel transforms and definite integrals involving Bessel functions. Numerical integration of systems containing Bessel functions occurs in many physical problems, e.g., the electromagnetic form factor of nuclei and all transitions involving multipole expansions at high momenta. Filon's integration rule is extended to spherical Bessel functions. The numerical error is of the order of the Simpson error term of the function to be transformed, so one obtains a stable integral even at large arguments of the transformed function. (Auth.)
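As a simple baseline (plain trapezoidal quadrature on a truncated range, not the Filon-type rule of the paper, which is designed for the strongly oscillatory large-argument regime), a spherical Bessel transform with a known closed form can be checked numerically:

```python
import numpy as np

def trapz(y, x):
    # plain trapezoidal rule, kept explicit for portability
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def j0(x):
    return np.sinc(x / np.pi)        # spherical j0(x) = sin(x)/x, with j0(0) = 1

r = np.linspace(0.0, 8.0, 4001)      # truncation at r = 8: exp(-64) is negligible

def transform(k):
    """F(k) = integral_0^inf r^2 exp(-r^2) j0(k r) dr, numerically."""
    return trapz(r**2 * np.exp(-r**2) * j0(k * r), r)

k = 1.0
numeric = transform(k)
exact = np.sqrt(np.pi) / 4.0 * np.exp(-k * k / 4.0)   # closed-form Gaussian pair
```

At large k the integrand oscillates rapidly and naive quadrature degrades, which is exactly the regime the Filon-type extension addresses.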
Convolutional neural networks and face recognition task
Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.
2017-09-01
Computer vision tasks have remained very important over the last several years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There are a variety of approaches to this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach: first, we extract the area containing the face; then we apply the Canny edge detector; at the next stage we use convolutional neural networks (CNNs) to solve the face recognition and person identification task.
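The edge-detection stage can be sketched as follows. This uses plain Sobel gradient magnitudes as a stand-in for the full Canny detector the paper uses (Canny additionally applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding):

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Edge map from Sobel gradient magnitudes over the 'valid' region
    (output is 2 pixels smaller than the input in each dimension)."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):               # explicit 3x3 correlation
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    return mag / mag.max() >= thresh

# Synthetic test image: vertical step edge between dark and bright halves.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
edges = sobel_edges(img)             # True only along the step
```

In a pipeline like the one described, such an edge map (or the Canny output) would then be fed to the CNN alongside or instead of the raw face crop.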
3D face modeling, analysis and recognition
Daoudi, Mohamed; Veltkamp, Remco
2013-01-01
3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications.
Transient well flow in layered aquifer systems: the uniform well-face drawdown solution
Hemker, C. J.
1999-11-01
Previously a hybrid analytical-numerical solution for the general problem of computing transient well flow in vertically heterogeneous aquifers was proposed by the author. The radial component of flow was treated analytically, while the finite-difference technique was used for the vertical flow component only. In the present work the hybrid solution has been modified by replacing the previously assumed uniform well-face gradient (UWG) boundary condition in such a way that the drawdown remains uniform along the well screen. The resulting uniform well-face drawdown (UWD) solution also includes the effects of a finite diameter well, wellbore storage and a thin skin, while partial penetration and vertical heterogeneity are accommodated by the one-dimensional discretization. Solutions are proposed for well flow caused by constant, variable and slug discharges. The model was verified by comparing wellbore drawdowns and well-face flux distributions with published numerical solutions. Differences between UWG and UWD well flow will occur in all situations with vertical flow components near the well, which is demonstrated by considering: (1) partially penetrating wells in confined aquifers, (2) fully penetrating wells in unconfined aquifers with delayed response and (3) layered aquifers and leaky multiaquifer systems. The presented solution can be a powerful tool for solving many well-hydraulic problems, including well tests, flowmeter tests, slug tests and pumping tests. A computer program for the analysis of pumping tests, based on the hybrid analytical-numerical technique and UWG or UWD conditions, is available from the author.
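For orientation, the classical Theis solution (fully penetrating well in a homogeneous confined aquifer, no wellbore storage or skin), to which such hybrid models reduce in the simplest case, can be sketched with a stdlib-only series for the well function:

```python
import math

EULER_GAMMA = 0.5772156649015329

def theis_w(u, terms=60):
    """Theis well function W(u) = E1(u) from its convergent series,
    accurate for the small u values typical of pumping tests."""
    s, term = 0.0, 1.0
    for n in range(1, terms + 1):
        term *= -u / n               # term = (-u)^n / n!
        s -= term / n                # adds (-1)^(n+1) u^n / (n * n!)
    return -EULER_GAMMA - math.log(u) + s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q / (4 pi T) * W(u) with u = r^2 S / (4 T t)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * theis_w(u)

# Illustrative parameter values (not from the paper).
s = theis_drawdown(Q=100.0, T=10.0, S=1e-4, r=50.0, t=1.0)
```

The alternating series suffers cancellation for large u; pumping-test analysis normally stays in the small-u regime where it converges quickly.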
Series 'Facing Radiation'. 2 Facing radiation is facing residents
International Nuclear Information System (INIS)
Hanzawa, Takahiro
2013-01-01
The series reports how ordinary people, who are not radiological experts, have faced and come to understand the radiation problems posed by the Fukushima Daiichi Nuclear Power Plant accident (March 2011). Section 2 is reported by an officer of Date City, which lies 60 km northwest of the plant, borders Iitate Village in Fukushima Prefecture, and is designated an important area of contamination search (IACS), for which the reporter has served as the responsible official. In July 2011 the ambient dose was as high as 3.0-3.5 μSv/h, and a small community decided on its own initiative where to store contaminated materials temporarily; real decontamination in the city started from there. The target dose after decontamination was defined as 1.0 μSv/h; however, 28 of the 32 IACS municipalities in the prefecture had not defined a target even after working for two years since the accident on areas exceeding the 0.23 μSv/h standard. When his own house was decontaminated, the reporter noticed that residents' concerns were directed toward the work itself rather than the target dose, and he wondered whether these figures had been an obstacle to facing radiation correctly. Now, about 2.5 years after the accident, all Date citizens carry personal accumulating glass dosimeters to monitor their effective external dose, and it appears their dose will not exceed 1 mSv/y if the estimated ambient dose is 0.3-5 μSv/h. The media chase popularity rather than facing radiation, experts tend to hesitate to face the media and residents, and the radiation dose will hardly be reduced to zero, even though a correct understanding of radiation is the shorter way to residents' own peace of mind: facing radiation is facing residents. (T.T.)