Method for numerical simulation of two-term exponentially correlated colored noise
International Nuclear Information System (INIS)
Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.
2006-01-01
A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of the traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications.
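As a sketch of the kind of algorithm the abstract describes (not necessarily the authors' exact scheme): a noise with a two-term exponential correlation function can be generated as the sum of two independent Ornstein-Uhlenbeck processes, each updated with the exact one-step recursion for one-term exponentially correlated noise. All parameter values below are illustrative.

```python
import numpy as np

def ou_exact(n_steps, dt, tau, var, rng):
    """Exact update for a stationary Ornstein-Uhlenbeck process with
    correlation <x(t)x(0)> = var * exp(-|t|/tau)."""
    rho = np.exp(-dt / tau)
    x = np.empty(n_steps)
    x[0] = np.sqrt(var) * rng.standard_normal()
    for i in range(1, n_steps):
        x[i] = rho * x[i - 1] + np.sqrt(var * (1.0 - rho**2)) * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
dt = 0.01
# Sum of two independent one-term processes has the two-term correlation
# C(t) = 1.0*exp(-|t|/0.5) + 0.5*exp(-|t|/2.0)
eta = ou_exact(200_000, dt, tau=0.5, var=1.0, rng=rng) \
    + ou_exact(200_000, dt, tau=2.0, var=0.5, rng=rng)
```

The exact recursion avoids the time-step bias of a naive Euler integration of the corresponding Langevin equation.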
Population models and simulation methods: The case of the Spearman rank correlation.
Astivia, Oscar L Olvera; Zumbo, Bruno D
2017-11-01
The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions drawn in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof of the consistency of the sample estimate of the rank correlation is given, as is a demonstration of the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
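A minimal illustration of the population-model idea, assuming a Gaussian copula, for which the population Spearman correlation has the closed form (6/π) arcsin(ρ/2) in terms of the Pearson parameter ρ. This is a sketch of the concept, not the Iman-Conover algorithm itself.

```python
import numpy as np

def spearman(x, y):
    """Sample Spearman rank correlation: Pearson correlation of the ranks
    (no ties expected for continuous data)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
rho = 0.6                                       # Pearson parameter of the Gaussian copula
n = 100_000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

rho_s_pop = 6 / np.pi * np.arcsin(rho / 2)      # population Spearman for this copula
rho_s_hat = spearman(z[:, 0], z[:, 1])          # sample estimate converges to it
```

With the population parameter defined this way, the sample statistic is a consistent estimator, which is the point the paper makes about simulation design.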
Lee, Tsung-Han
Strongly correlated materials are a class of materials that cannot be properly described by Density Functional Theory (DFT), which is a single-particle approximation to the original many-body electronic Hamiltonian. These systems contain d- or f-orbital electrons, i.e., transition-metal, actinide, and lanthanide compounds, for which the electron-electron interaction (correlation) effects are too strong to be described by the single-particle approximation of DFT. Therefore, complementary many-body methods have been developed, at the model-Hamiltonian level, to describe these strong correlation effects. Dynamical Mean Field Theory (DMFT) and the Rotationally Invariant Slave-Boson (RISB) approach are two successful methods that can capture the correlation effects over a broad range of interaction strengths. However, these many-body methods, as applied to model Hamiltonians, treat the electronic structure of realistic materials in a phenomenological fashion, which allows their properties to be described only qualitatively. Consequently, combinations of DFT and many-body methods, e.g., the Local Density Approximation augmented by RISB and DMFT (LDA+RISB and LDA+DMFT), have recently been proposed to merge the advantages of both methods into a quantitative tool for analyzing strongly correlated systems. In this dissertation, we studied possible improvements of these approaches and tested their accuracy on realistic materials. This dissertation is separated into two parts. In the first part, we studied the extension of DMFT and RISB in three directions. First, we extended the DMFT framework to investigate the behavior of the domain-wall structure in the metal-Mott-insulator coexistence regime by studying the unstable solution describing the domain wall. We found that this solution, differing qualitatively from both the metallic and the insulating solutions, displays insulating-like behavior in resistivity while carrying a weak metallic character in its electronic structure. Second, we
International Nuclear Information System (INIS)
Ferguson, A.J.
1974-01-01
An outline of the theory of angular correlations is presented, and the difference between the modern density matrix method and the traditional wave function method is stressed. Comments are offered on particular angular correlation theoretical techniques. A brief discussion is given of recent studies of gamma ray angular correlations of reaction products recoiling with high velocity into vacuum. Two methods for optimization to obtain the most accurate expansion coefficients of the correlation are discussed. (1 figure, 53 references) (U.S.)
Methods of channeling simulation
International Nuclear Information System (INIS)
Barrett, J.H.
1989-06-01
Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs
GEM simulation methods development
International Nuclear Information System (INIS)
Tikhonov, V.; Veenhof, R.
2002-01-01
A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Such detector characteristics as effective gas gain, transparency, charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for calculations of detector macro-characteristics such as signal response in a real detector readout structure, and spatial and time resolution of detectors have been developed and used for detector optimization. A detailed development of signal induction on readout electrodes and electronics characteristics are included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment
Energy Technology Data Exchange (ETDEWEB)
Wu, Xiaokun; Han, Min; Ming, Dengming, E-mail: dming@fudan.edu.cn [Department of Physiology and Biophysics, School of Life Sciences, Fudan University, Shanghai (China)
2015-10-07
Membrane proteins play critically important roles in many cellular activities such as ion and small-molecule transport, signal recognition, and transduction. In order to fulfill their functions, these proteins must be placed in different membrane environments, and a variety of protein-lipid interactions may affect their behavior. One of the key effects of protein-lipid interactions is their ability to change the dynamical state of membrane proteins, thus adjusting their functions. Here, we present a multi-scaled normal mode analysis (mNMA) method to study the dynamics perturbation imposed on membrane proteins by lipid bilayer membrane fluctuations. In mNMA, channel proteins are simulated at the all-atom level while the membrane is described with a coarse-grained model. mNMA calculations clearly show that channel gating motion can couple tightly with a variety of membrane deformations, including bending and twisting. We then examined bi-channel systems where two channels were separated by different distances. From mNMA calculations, we observed both positive and negative gating correlations between two neighboring channels, and the correlation has a maximum when the channel center-to-center distance is close to 2.5 times their diameter. This distance is larger than the recently reported maximum attraction distance between two proteins embedded in a membrane, which is 1.5 times the protein size, indicating that membrane fluctuations might impose collective motions among proteins within a larger area. The hybrid-resolution feature of mNMA provides atomic dynamics information for key components in the system without large computational cost. We expect it to become a routine simulation tool for ordinary laboratories to study the dynamics of very complicated biological assemblies. The source code is available upon request to the authors.
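The normal-mode machinery underlying methods like mNMA can be sketched with a plain coarse-grained anisotropic network model: harmonic springs between nearby beads, followed by diagonalization of the Hessian. This toy uses random bead coordinates and a single force constant; it is not the authors' hybrid-resolution code.

```python
import numpy as np

def anm_modes(coords, cutoff=8.0, gamma=1.0):
    """Normal modes of an anisotropic network model: harmonic springs
    between all bead pairs within `cutoff`; returns eigenvalues and
    eigenvectors of the 3N x 3N Hessian."""
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff**2:
                continue
            block = -gamma * np.outer(d, d) / r2   # off-diagonal 3x3 block
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block    # diagonal blocks keep row sums zero
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    return np.linalg.eigh(hess)

rng = np.random.default_rng(2)
beads = rng.uniform(0, 15, size=(20, 3))           # toy bead coordinates (arbitrary units)
vals, vecs = anm_modes(beads)                      # low-frequency modes ~ collective motions
```

The Hessian is positive semi-definite, and rigid-body translations and rotations appear as (near-)zero modes; the lowest nonzero modes are the collective motions such methods couple to membrane deformations.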
Image correlation method for DNA sequence alignment.
Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván
2012-01-01
The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel of fixed gray intensity. Query and known database sequences are coded to their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, a one-million-base-pair database) were considered for the image correlation analysis. The results showed that the correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST as the number of mutations increased. However, the digital correlation process was a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
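A rough sketch of the encoding-and-correlation idea, using 1-D signals rather than the paper's 2-D images, and illustrative gray levels for the four bases (the paper's actual intensity assignments are not specified here):

```python
import numpy as np

LEVELS = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}   # illustrative gray levels

def to_signal(seq):
    """Encode a DNA sequence as a 1-D 'gray intensity' signal."""
    return np.array([LEVELS[b] for b in seq])

def best_match(query, database):
    """Slide the query over the database sequence and return the offset
    with the highest normalized correlation coefficient."""
    q = to_signal(query)
    q = q - q.mean()
    d = to_signal(database)
    m = len(q)
    scores = []
    for i in range(len(d) - m + 1):
        w = d[i:i + m] - d[i:i + m].mean()
        denom = np.linalg.norm(q) * np.linalg.norm(w)
        scores.append((q @ w) / denom if denom > 0 else 0.0)
    return int(np.argmax(scores)), max(scores)

db = "ACGTACGGTTCAGGCTAACGTTAGC"
offset, score = best_match("TTCAGGCT", db)       # exact occurrence starts at index 8
```

An exact occurrence produces a normalized correlation peak of 1.0 at the matching offset; mutations lower the peak gradually rather than breaking the match outright, which is the sensitivity property the abstract highlights.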
Simulation of speckle patterns with pre-defined correlation distributions
Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S.
2016-01-01
We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques. PMID:27231589
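A common way to generate a single fully developed speckle pattern numerically, consistent with the coherent-imaging principle the abstract invokes: a uniform-amplitude random phase screen is low-pass filtered by a pupil. The pupil radius below is arbitrary; this sketch does not implement the paper's correlation-matrix mapping.

```python
import numpy as np

def speckle(phase, pupil):
    """Fully developed speckle: unit-amplitude field with the given phase,
    low-pass filtered by a pupil in the Fourier plane (coherent imaging)."""
    field = np.exp(1j * phase)
    spec = np.fft.fft2(field) * pupil
    img = np.fft.ifft2(spec)
    return np.abs(img) ** 2

n = 256
f = np.fft.fftfreq(n)
FX, FY = np.meshgrid(f, f)
pupil = (FX**2 + FY**2) < 0.05**2                 # circular pupil, arbitrary radius

rng = np.random.default_rng(3)
I1 = speckle(2 * np.pi * rng.random((n, n)), pupil)
```

A sequence of patterns with a prescribed correlation distribution would then be obtained by feeding correlated phase screens through the same pipeline, exploiting the square relationship the paper reports.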
Strongly Correlated Systems: Theoretical Methods
Avella, Adolfo
2012-01-01
The volume presents, for the very first time, an exhaustive collection of those modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case studies where the specific technique has proved very successful in describing and illuminating the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, comprehensive source or wish to get acquainted, in a as painless as po...
Strongly Correlated Systems: Numerical Methods
Mancini, Ferdinando
2013-01-01
This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case studies where the specific technique has proved very successful in describing and illuminating the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, comprehensive source or wish to get acquainted, in a as painless as possi...
Two-dimensional Simulations of Correlation Reflectometry in Fusion Plasmas
International Nuclear Information System (INIS)
Valeo, E.J.; Kramer, G.J.; Nazikian, R.
2001-01-01
A two-dimensional wave propagation code, developed specifically to simulate correlation reflectometry in large-scale fusion plasmas is described. The code makes use of separate computational methods in the vacuum, underdense and reflection regions of the plasma in order to obtain the high computational efficiency necessary for correlation analysis. Simulations of Tokamak Fusion Test Reactor (TFTR) plasma with internal transport barriers are presented and compared with one-dimensional full-wave simulations. It is shown that the two-dimensional simulations are remarkably similar to the results of the one-dimensional full-wave analysis for a wide range of turbulent correlation lengths. Implications for the interpretation of correlation reflectometer measurements in fusion plasma are discussed
A Monte Carlo method for estimating the correlation exponent
Mikosch, T.; Wang, Q.A.
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
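One hedged sketch of a Monte Carlo correlation-exponent estimator in this spirit: random pairs of points are sampled, and a Hill-type estimator is applied to the smallest inter-point distances (equivalently, to the upper tail of the reciprocal distances). This illustrates the idea; it is not necessarily the authors' exact bootstrap construction.

```python
import numpy as np

def corr_exponent(points, n_pairs=200_000, k=2_000, rng=None):
    """Monte Carlo estimate of the correlation exponent nu, where
    C(r) = P(|X_i - X_j| < r) ~ r**nu for small r.  A Hill-type
    estimator is applied to the k smallest sampled distances."""
    rng = rng or np.random.default_rng()
    n = len(points)
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    keep = i != j                                   # discard self-pairs
    d = np.linalg.norm(points[i[keep]] - points[j[keep]], axis=1)
    d = np.sort(d)[:k + 1]                          # k smallest distances + threshold
    return k / np.sum(np.log(d[k] / d[:k]))

rng = np.random.default_rng(4)
pts = rng.random((5_000, 2))                        # uniform on the unit square: nu = 2
nu_hat = corr_exponent(pts, rng=rng)
```

For points uniform on the unit square the correlation exponent equals the dimension, 2, so the estimate can be checked against a known value.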
Magnetic Flyer Facility Correlation and UGT Simulation
1978-05-01
assistance in this program from the following: Southern Research Institute (C. Pears and G. Fornaro) - material properties and damage data; Air Force ...techniques - flyer plate loading. The program was divided into two major parts, the Facility Correlation Study and the UGT Simulation Study. For the...current produces a magnetic field which then produces an accelerating force on the flyer plate, itself a current-carrying part of the circuit. The flyer
A unitary correlation operator method
International Nuclear Information System (INIS)
Feldmeier, H.; Neff, T.; Roth, R.; Schnack, J.
1997-09-01
The short range repulsion between nucleons is treated by a unitary correlation operator which shifts the nucleons away from each other whenever their uncorrelated positions are within the repulsive core. By formulating the correlation as a transformation of the relative distance between particle pairs, general analytic expressions for the correlated wave functions and correlated operators are given. The decomposition of correlated operators into irreducible n-body operators is discussed. The one- and two-body-irreducible parts are worked out explicitly and the contribution of three-body correlations is estimated to check convergence. Ground state energies of nuclei up to mass number A=48 are calculated with a spin-isospin-dependent potential and single Slater determinants as uncorrelated states. They show that the deduced energy- and mass-number-independent correlated two-body Hamiltonian reproduces all ''exact'' many-body calculations surprisingly well. (orig.)
Correlation methods in cutting arcs
Energy Technology Data Exchange (ETDEWEB)
Prevosto, L; Kelly, H, E-mail: prevosto@waycom.com.ar [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina)
2011-05-01
The present work applies similarity theory to the plasma emanating from transferred-arc, gas-vortex-stabilized plasma cutting torches to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operating parameters.
Correlation methods in cutting arcs
International Nuclear Information System (INIS)
Prevosto, L; Kelly, H
2011-01-01
The present work applies similarity theory to the plasma emanating from transferred-arc, gas-vortex-stabilized plasma cutting torches to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operating parameters.
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
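The elementary MC move discussed above can be illustrated with a one-variable Metropolis sampler on a toy periodic "dihedral" potential (an illustrative stand-in, not a biomacromolecular moveset):

```python
import numpy as np

def metropolis_dihedral(n_steps, beta=1.0, step=0.5, rng=None):
    """Metropolis sampling of a single dihedral angle phi under the toy
    potential U(phi) = cos(3*phi), in units where kT = 1/beta."""
    rng = rng or np.random.default_rng()
    U = lambda p: np.cos(3 * p)
    phi = 0.0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        trial = (phi + rng.uniform(-step, step)) % (2 * np.pi)
        # Metropolis criterion: always accept downhill, uphill with prob exp(-beta*dU)
        if rng.random() < np.exp(-beta * (U(trial) - U(phi))):
            phi = trial
        samples[t] = phi
    return samples

s = metropolis_dihedral(100_000, rng=np.random.default_rng(5))
```

The sampled chain reproduces canonical averages; for beta = 1 the Boltzmann average of cos(3*phi) is -I1(1)/I0(1), roughly -0.446, which the chain should approach.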
Correlated prompt fission data in transport simulations
Talou, P.; Vogt, R.; Randrup, J.; Rising, M. E.; Pozzi, S. A.; Verbeke, J.; Andrews, M. T.; Clarke, S. D.; Jaffke, P.; Jandel, M.; Kawano, T.; Marcath, M. J.; Meierbachtol, K.; Nakae, L.; Rusev, G.; Sood, A.; Stetcu, I.; Walker, C.
2018-01-01
Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in
Correlated prompt fission data in transport simulations
Energy Technology Data Exchange (ETDEWEB)
Talou, P.; Jaffke, P.; Kawano, T.; Stetcu, I. [Los Alamos National Laboratory, Nuclear Physics Group, Theoretical Division, Los Alamos, NM (United States); Vogt, R. [Lawrence Livermore National Laboratory, Nuclear and Chemical Sciences Division, Livermore, CA (United States); University of California, Physics Department, Davis, CA (United States); Randrup, J. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Rising, M.E.; Andrews, M.T.; Sood, A. [Los Alamos National Laboratory, Monte Carlo Methods, Codes, and Applications Group, Los Alamos, NM (United States); Pozzi, S.A.; Clarke, S.D.; Marcath, M.J. [University of Michigan, Department of Nuclear Engineering and Radiological Sciences, Ann Arbor, MI (United States); Verbeke, J.; Nakae, L. [Lawrence Livermore National Laboratory, Nuclear and Chemical Sciences Division, Livermore, CA (United States); Jandel, M. [Los Alamos National Laboratory, Nuclear and Radiochemistry Group, Los Alamos, NM (United States); University of Massachusetts, Department of Physics and Applied Physics, Lowell, MA (United States); Meierbachtol, K. [Los Alamos National Laboratory, Nuclear Engineering and Nonproliferation, Los Alamos, NM (United States); Rusev, G.; Walker, C. [Los Alamos National Laboratory, Nuclear and Radiochemistry Group, Los Alamos, NM (United States)
2018-01-15
Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n-n, n-γ, and γ-γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX-PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation
Hydrogen Epoch of Reionization Array (HERA) Calibrated FFT Correlator Simulation
Salazar, Jeffrey David; Parsons, Aaron
2018-01-01
The Hydrogen Epoch of Reionization Array (HERA) project is an astronomical radio interferometer array with a redundant baseline configuration. Interferometer arrays are widely used in radio astronomy because they have a variety of advantages over single-antenna systems. For example, they produce measurements (visibilities) with resolution closely matching that of a single large antenna (such as the Arecibo Observatory), while both the hardware and maintenance costs are significantly lower. However, this method has some complications, one being the computational cost of correlating data from all of the antennas. A correlator is an electronic device that cross-correlates the data between the individual antennas; these cross-correlations are what radio astronomers call visibilities. HERA, being in its early stages, utilizes a traditional correlator system, whose cost scales as N^2, where N is the number of antennas in the array. The purpose of a redundant baseline configuration is to enable a more efficient Fast Fourier Transform (FFT) correlator; FFT correlators scale as N log2 N. The data acquired from this sort of setup, however, inherit geometric delays and uncalibrated antenna gains. This particular project simulates the process of calibrating signals from astronomical sources. Each signal "received" by an antenna in the simulation is given a random antenna gain and geometric delay. The "linsolve" Python module was used to solve for the unknown variables in the simulation (complex gains and delays), which then gave values for the true visibilities. This first version of the simulation only mimics a one-dimensional redundant telescope array detecting a small number of sources located in the volume above the antenna plane. Future versions, using GPUs, will handle a two-dimensional redundant array of telescopes detecting a large number of sources in the volume above the array.
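The N^2 versus N log N point can be demonstrated on a one-dimensional redundant array: for antennas on a uniform grid, the per-baseline sums of pairwise correlations equal the FFT-based autocorrelation of the zero-padded voltage vector. A sketch with synthetic voltages, ignoring gains and delays:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 64                                            # antennas on a uniform 1-D grid
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # one snapshot of voltages

# Traditional correlator: O(N^2) products, summed over each redundant baseline b
direct = np.array([np.sum(v[b:] * np.conj(v[:N - b])) for b in range(N)])

# FFT correlator: zero-pad to avoid circular wrap-around, O(N log N)
pad = np.concatenate([v, np.zeros(N)])
fft_corr = np.fft.ifft(np.abs(np.fft.fft(pad)) ** 2)[:N]
```

Both arrays hold, for each baseline length b, the sum of v[m+b]*conj(v[m]) over all redundant antenna pairs; dividing by the pair count per baseline gives the averaged redundant visibility.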
ERP Correlates of Simulated Purchase Decisions.
Gajewski, Patrick D; Drizinsky, Jessica; Zülch, Joachim; Falkenstein, Michael
2016-01-01
Decision making in an economic context is an everyday activity, but its neuronal correlates are poorly understood. The present study aimed at investigating the electrophysiological brain activity during simulated purchase decisions of technical products for a lower or higher price relative to a mean price estimated in a pilot study. Expectedly, participants mostly decided to buy a product when it was cheap and not to buy when it was expensive. However, in some trials they made counter-conformity decisions, buying a product for a higher-than-average price or declining to buy despite an attractive price. These responses took more time, and the variability of the response latency was enhanced relative to conformity responses. ERPs showed an enhanced conflict-related fronto-central N2 during both types of counter-conformity decisions compared to conformity decisions. A reverse pattern was found for the P3a and P3b. The response-locked P3 (r-P3) was larger and the subsequent CNV smaller for counter-conformity than conformity decisions. We assume that counter-conformity decisions elevate the response threshold (larger N2), intensify response evaluation (r-P3) and attenuate the preparation for the next trial (CNV). These effects are discussed in the framework of the functional role of the fronto-parietal cortex in economic decision making.
ERP correlates of simulated purchase decisions
Directory of Open Access Journals (Sweden)
Patrick Darius Gajewski
2016-08-01
Decision making in an economic context is an everyday activity, but its neuronal correlates are poorly understood. The present study aimed at investigating the electrophysiological brain activity during simulated purchase decisions of technical products for a lower or higher price relative to a mean price estimated in a pilot study. Expectedly, participants mostly decided to buy a product when it was cheap and not to buy when it was expensive. But in some trials they made counter-conformity decisions, buying a product for more money than the average price or not buying a product despite an attractive price. These responses took more time, and the variability of the response latency was enhanced relative to conformity responses. ERPs showed an enhanced conflict-related fronto-central N2 during both types of counter-conformity decisions compared to conformity decisions. A reverse pattern was found for the P3a and P3b. The response-locked P3 (r-P3) was larger and the subsequent CNV smaller for counter-conformity than conformity decisions. We assume that counter-conformity decisions elevate the response threshold (larger N2), intensify response evaluation (r-P3) and attenuate the preparation for the next trial (CNV). These effects are discussed in the framework of the functional role of the fronto-parietal cortex in economic decision making.
New methods in plasma simulation
International Nuclear Information System (INIS)
Mason, R.J.
1990-01-01
The development of implicit methods of particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods have created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long time scale, high density regime associated with MHD modeling, and the short time scale, low density regime appropriate to PIC particle-in-cell techniques. This transitional regime arises in ICF coronal plasmas, in pulsed power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs
Czech Academy of Sciences Publication Activity Database
Hamarová, Ivana; Šmíd, Petr; Horváth, P.; Hrabovský, M.
2014-01-01
Roč. 2014, č. 1 (2014), "704368-1"-"704368-12" ISSN 1537-744X R&D Projects: GA ČR GA13-12301S Institutional support: RVO:68378271 Keywords : one-dimensional speckle correlation * speckle * general in-plane translation Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.219, year: 2013
Simulating Optical Correlation on a Digital Image Processing
Denning, Bryan
1998-04-01
Optical correlation is a useful tool for recognizing objects in video scenes. In this paper, we explore the characteristics of a composite filter known as the equal correlation peak synthetic discriminant function (ECP SDF). Although the ECP SDF is commonly used in coherent optical correlation systems, we simulated the operation of a correlator using an EPIX frame grabber/image processor board to complete this work. Issues pertaining to simulating correlation using an EPIX board are discussed. Additionally, the ability of the ECP SDF to detect objects that have been subjected to in-plane rotation and small scale changes is addressed by correlating filters against true-class objects placed randomly within a scene. To test the robustness of the filters, the results of correlating the filter against false-class objects that closely resemble the true class are also presented.
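The digital simulation of a correlator's output plane amounts to sliding a filter over the scene and locating the correlation peak. A minimal sketch (illustrative only, not the authors' EPIX code or an ECP SDF filter) using normalized cross-correlation on a made-up scene:

```python
import math

def ncc(scene, tmpl, i, j):
    """Normalized cross-correlation of tmpl with the scene patch at (i, j)."""
    th, tw = len(tmpl), len(tmpl[0])
    patch = [scene[i + r][j + c] for r in range(th) for c in range(tw)]
    flat = [tmpl[r][c] for r in range(th) for c in range(tw)]
    mp, mt = sum(patch) / len(patch), sum(flat) / len(flat)
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, flat))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch)
                    * sum((t - mt) ** 2 for t in flat))
    return num / den if den else 0.0

def correlate(scene, tmpl):
    """Scan all positions; return the location and height of the peak."""
    th, tw = len(tmpl), len(tmpl[0])
    best, best_pos = -2.0, None
    for i in range(len(scene) - th + 1):
        for j in range(len(scene[0]) - tw + 1):
            v = ncc(scene, tmpl, i, j)
            if v > best:
                best, best_pos = v, (i, j)
    return best_pos, best

# Hypothetical 8x8 scene with a 3x3 "object" placed at (2, 4).
tmpl = [[9, 1, 9], [1, 9, 1], [9, 1, 9]]
scene = [[0] * 8 for _ in range(8)]
for r in range(3):
    for c in range(3):
        scene[2 + r][4 + c] = tmpl[r][c]

pos, peak = correlate(scene, tmpl)   # peak lands at the object position
```

A composite filter such as the ECP SDF replaces `tmpl` with a weighted combination of training views so that all true-class views produce equal peaks.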
Simulating quantum correlations as a distributed sampling problem
International Nuclear Information System (INIS)
Degorre, Julien; Laplante, Sophie; Roland, Jeremie
2005-01-01
It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and extend them to positive operator valued measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states.
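The "one bit of communication" protocol mentioned above can be made concrete. The following sketch implements the well-known Toner-Bacon protocol (an illustration of the class of protocols discussed, not this paper's construction): with two shared random unit vectors and a single communicated bit, the singlet correlation E(a, b) = -a·b is reproduced for projective measurements. Sample counts are illustrative.

```python
import math, random

random.seed(1)

def rand_unit():
    """Uniform random unit vector on the sphere."""
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-12:
            return [x / n for x in v]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def sgn(x):
    return 1 if x >= 0 else -1

def run(a, b, trials=100_000):
    total = 0
    for _ in range(trials):
        l1, l2 = rand_unit(), rand_unit()        # shared randomness
        alice = -sgn(dot(a, l1))
        c = sgn(dot(a, l1)) * sgn(dot(a, l2))    # the one communicated bit
        bob = sgn(dot(b, [x + c * y for x, y in zip(l1, l2)]))
        total += alice * bob
    return total / trials

a = [0.0, 0.0, 1.0]
b = [math.sin(1.0), 0.0, math.cos(1.0)]
E = run(a, b)      # Monte Carlo estimate of -a.b = -cos(1.0)
```

Running more trials tightens the estimate toward the quantum value -cos(1.0) ≈ -0.540.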
Efficient simulation of tail probabilities of sums of correlated lognormals
DEFF Research Database (Denmark)
Asmussen, Søren; Blanchet, José; Juneja, Sandeep
We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient. The first estimator optimizes the scaling parameter of the covariance. The second estimator decomposes the probability of interest into two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling …
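A toy version of the covariance-scaling idea can be sketched in a few lines (an illustration under made-up parameters, not the paper's estimator): sample the correlated normals from a scaled covariance s²Σ and reweight each hit by the density ratio, which for mean-zero normals in dimension 2 is s²·exp(-q(1 - 1/s²)/2) with q = yᵀΣ⁻¹y.

```python
import math, random

random.seed(2)
rho, s, x = 0.5, 1.5, 8.0                       # all values illustrative
L = [[1.0, 0.0], [rho, math.sqrt(1 - rho ** 2)]]  # Cholesky factor of Sigma

def sample(scale):
    """Draw y ~ N(0, scale^2 * Sigma)."""
    z = [random.gauss(0, 1), random.gauss(0, 1)]
    return [scale * sum(L[i][j] * z[j] for j in range(2)) for i in range(2)]

def quad(y):
    """q = y^T Sigma^{-1} y, via forward-solve L w = y, q = |w|^2."""
    w0 = y[0] / L[0][0]
    w1 = (y[1] - L[1][0] * w0) / L[1][1]
    return w0 * w0 + w1 * w1

N = 200_000
# naive Monte Carlo estimate of P(exp(X1) + exp(X2) > x)
naive = sum(math.exp(y[0]) + math.exp(y[1]) > x
            for y in (sample(1.0) for _ in range(N))) / N

# importance sampling from the scaled covariance s^2 * Sigma
is_sum = 0.0
for _ in range(N):
    y = sample(s)
    if math.exp(y[0]) + math.exp(y[1]) > x:
        is_sum += s ** 2 * math.exp(-0.5 * quad(y) * (1 - 1 / s ** 2))
p_is = is_sum / N
```

Both estimators are unbiased for the same tail probability; the payoff of the scaled sampler grows as the threshold `x` moves further into the tail, where the naive estimator sees almost no hits.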
Gastroesophageal reflux - correlation between diagnostic methods
International Nuclear Information System (INIS)
Cruz, Maria das Gracas de Almeida; Penas, Maria Exposito; Fonseca, Lea Mirian Barbosa; Lemme, Eponina Maria O.; Martinho, Maria Jose Ribeiro
1999-01-01
A group of 97 individuals with typical symptoms of gastroesophageal reflux disease (GERD) was submitted to gastroesophageal reflux scintigraphy (GES), and the results were compared with those obtained from endoscopy, histopathology and 24-hour pH-metry. Twenty-four healthy individuals served as a control group and underwent only the GES. The results showed that: a) the difference in the reflux index (RI) between the control group and the patients was statistically significant (p < 0.0001); b) the comparison of GES with the other methods gave the following results: sensitivity, 84%; specificity, 95%; positive predictive value, 98%; negative predictive value, 67%; accuracy, 87%. We conclude that the scintigraphic method should be used to confirm the diagnosis of GERD and can also be recommended as the initial investigative procedure. (author)
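Diagnostic indices of this kind all follow from a standard 2x2 confusion matrix. A minimal helper, with made-up counts rather than the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic indices from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),        # true positive rate
        "specificity": tn / (tn + fp),        # true negative rate
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# hypothetical counts, chosen only to illustrate the computation
m = diagnostic_metrics(tp=42, fp=1, fn=8, tn=19)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of disease in the sampled group.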
Total and Direct Correlation Function Integrals from Molecular Simulation of Binary Systems
DEFF Research Database (Denmark)
Wedberg, Nils Hejle Rasmus Ingemar; O’Connell, John P.; Peters, Günther H.J.
2011-01-01
The possibility of obtaining derivative properties for mixtures from integrals of spatial total and direct correlation functions obtained from molecular dynamics simulations is explored. Theoretically well-supported methods are examined to extend simulation radial distribution functions to long range; the resulting integrals are consistent with an excess Helmholtz energy model fitted to available simulations. In addition, simulations of water/methanol and water/t-butanol mixtures have been carried out. The method yields results for partial molar volumes, activity coefficient derivatives, and individual correlation function integrals in reasonable agreement with smoothed experimental data. The proposed method for obtaining correlation function integrals is shown to perform at least as well as or better than two previously published approaches.
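The central quantity here is an integral of the total correlation function h(r) = g(r) - 1, of the Kirkwood-Buff form G = ∫₀^∞ 4πr² h(r) dr. A sketch of the quadrature step with a made-up exponential h(r), chosen because the exact answer is 8π:

```python
import math

def correlation_integral(h, r_max=40.0, n=20_000):
    """Trapezoidal estimate of the integral of 4*pi*r^2*h(r) on [0, r_max]."""
    dr = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * dr
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * 4.0 * math.pi * r * r * h(r)
    return total * dr

# illustrative h(r) = exp(-r): the exact integral is 8*pi
G = correlation_integral(lambda r: math.exp(-r))
```

In practice h(r) comes from a simulated radial distribution function, and, as the abstract notes, the delicate step is extending it smoothly to long range before integrating, since the r² weight amplifies tail noise.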
Generalized canonical correlation analysis of matrices with missing rows : A simulation study
van de Velden, Michel; Bijmolt, Tammo H. A.
A method is presented for generalized canonical correlation analysis of two or more matrices with missing rows. The method is a combination of Carroll's (1968) method and the missing data approach of the OVERALS technique (Van der Burg, 1988). In a simulation study we assess the performance of the method.
Quantum simulation of strongly correlated condensed matter systems
Hofstetter, W.; Qin, T.
2018-04-01
We review recent experimental and theoretical progress in realizing and simulating many-body phases of ultracold atoms in optical lattices, which gives access to analog quantum simulations of fundamental model Hamiltonians for strongly correlated condensed matter systems, such as the Hubbard model. After a general introduction to quantum gases in optical lattices, their preparation and cooling, and measurement techniques for relevant observables, we focus on several examples, where quantum simulations of this type have been performed successfully during the past years: Mott-insulator states, itinerant quantum magnetism, disorder-induced localization and its interplay with interactions, and topological quantum states in synthetic gauge fields.
Partial distance correlation with methods for dissimilarities
Székely, Gábor J.; Rizzo, Maria L.
2014-01-01
Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions, and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation, partly because the squared distance covariance…
Degeneracy and long-range correlation: A simulation study
Directory of Open Access Journals (Sweden)
Marmelat Vivien
2011-12-01
We present in this paper a simulation study that aimed at evidencing a causal relationship between degeneracy and long-range correlations. Long-range correlations represent a very specific form of fluctuations that has been evidenced in the outcome time series produced by a number of natural systems. Long-range correlations are thought to signal the complexity, adaptability and flexibility of the system. Degeneracy is defined as the ability of elements that are structurally different to perform the same function, and is presented as a key feature for explaining the robustness of complex systems. We propose a model able to generate long-range correlated series that includes a parameter accounting for degeneracy. Results show that a decrease in degeneracy tends to reduce the strength of long-range correlation in the series produced by the model.
Monte Carlo burnup codes acceleration using the correlated sampling method
International Nuclear Information System (INIS)
Dieudonne, C.
2013-01-01
For several years, Monte Carlo burnup/depletion codes have coupled Monte Carlo codes, which simulate the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows fine 3-dimensional effects to be tracked and avoids the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of a PWR-like fuel cell depletion. The technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr
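The essence of correlated sampling is to reuse one set of samples, drawn from an unperturbed density p, to estimate expectations under a perturbed density p' by attaching weights w = p'/p to each sample. A toy sketch (normal densities stand in for the transport problem; all numbers illustrative):

```python
import math, random

random.seed(3)
mu = 0.2    # perturbation: shift the mean of the density from 0 to mu

# one base "simulation": samples from the unperturbed density N(0, 1)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# density ratio N(mu, 1) / N(0, 1) = exp(mu*x - mu^2/2)
ws = [math.exp(mu * x - 0.5 * mu * mu) for x in xs]

# self-normalized estimate of E[X^2] under the perturbed density;
# the exact value is 1 + mu^2 = 1.04
est = sum(w * x * x for w, x in zip(ws, xs)) / sum(ws)
```

No new samples were drawn for the perturbed case, which is exactly the saving exploited when each burnup step is treated as a perturbation of the initial Monte Carlo simulation; the caveat is that the weight variance grows as the perturbation gets larger.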
Jealousy: novel methods and neural correlates.
Harmon-Jones, Eddie; Peterson, Carly K; Harris, Christine R
2009-02-01
Because of the difficulties surrounding the evocation of jealousy, past research has relied on reactions to hypothetical scenarios and recall of past experiences of jealousy. Both methodologies have limitations, however. The present research was designed to develop a method of evoking jealousy in the laboratory that would be well controlled, ethically permissible, and psychologically meaningful. Study 1 demonstrated that jealousy could be evoked in a modified version of K. D. Williams' (2007) Cyberball ostracism paradigm in which the rejecting person was computer-generated. Study 2, the first to examine neural activity during the active experience of jealousy, tested whether experienced jealousy was associated with greater relative left or right frontal cortical activation. The findings revealed that the experience of jealousy was correlated with greater relative left frontal cortical activation toward the "sexually" desired partner. This pattern of activation suggests that jealousy is associated with approach motivation. Taken together, the present studies developed a laboratory paradigm for the study of jealousy that should help foster research on one of the most social of emotions. (c) 2009 APA, all rights reserved
Total Correlation Function Integrals and Isothermal Compressibilities from Molecular Simulations
DEFF Research Database (Denmark)
Wedberg, Rasmus; Peters, Günther H.j.; Abildskov, Jens
2008-01-01
Generation of thermodynamic data, here compressed liquid density and isothermal compressibility data, using molecular dynamics simulations is investigated. Five normal alkane systems are simulated at three different state points. We compare two main approaches to isothermal compressibilities: one based on total correlation function integrals and one based on a well-established benchmark technique; both deliver results in approximately the same amount of time. This suggests that computation of total correlation function integrals is a route to isothermal compressibility as accurate and fast as well-established benchmark techniques. A crucial step is the integration of the radial distribution function. To obtain sensible results…
High correlation between performance on a virtual-reality simulator and real-life cataract surgery
DEFF Research Database (Denmark)
Thomsen, Ann Sofia Skou; Smith, Phillip; Subhi, Yousif
2017-01-01
PURPOSE: To investigate the correlation in performance of cataract surgery between a virtual-reality simulator and real-life surgery using two objective assessment tools with evidence of validity. METHODS: Cataract surgeons with varying levels of experience were included in the study. All completed a proficiency-based test on the EyeSi simulator comprising antitremor training, forceps training, bimanual training, capsulorhexis and phaco divide and conquer. RESULTS: Eleven surgeons were enrolled. After a designated warm-up period, the proficiency-based test on the EyeSi simulator was strongly correlated to real-life performance measured by motion-tracking software of cataract surgical videos, with a Pearson correlation coefficient of -0.70 (p = 0.017). CONCLUSION: Performance on the EyeSi simulator is significantly and highly correlated to real-life surgical performance. However, it is recommended that performance assessments are made using multiple data sources.
Long range correlations, event simulation and parton percolation
International Nuclear Information System (INIS)
Pajares, C.
2011-01-01
We study the RHIC data on long-range rapidity correlations, comparing their main trends with different string model simulations. Particular attention is paid to the color percolation model and its similarities with the color glass condensate. As both approaches correspond, at high density, to a similar physical picture, both of them give rise to a similar dependence of the main observables on energy and centrality. Color percolation explains the transition from low density to high density.
Numerical methods used in simulation
International Nuclear Information System (INIS)
Caseau, Paul; Perrin, Michel; Planchard, Jacques
1978-01-01
The fundamental numerical problem posed by simulation is the stability of the solution scheme. The system of equations most used is defined, since there is a family of models of increasing complexity with 3, 4 or 5 equations, although only models with 3 and 4 equations have been used extensively. After defining what is meant by explicit or implicit, the best established stability results are given, first for one-dimensional problems and then for two-dimensional problems. It is shown that two types of discretization may be defined: four- and eight-point schemes (in one or two dimensions) and six- and ten-point schemes (in one or two dimensions). Finally, some results are given on problems that are not usually treated very much, i.e. non-asymptotic stability and the stability of schemes based on finite elements [fr
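The explicit/implicit stability distinction is easy to demonstrate on the simplest case (an illustrative sketch, not this report's equation system): the explicit FTCS scheme for the heat equation u_t = u_xx is stable only when r = dt/dx² ≤ 1/2.

```python
def ftcs_max(r, steps=200, n=32):
    """Run the explicit FTCS scheme on a periodic grid; return max |u|."""
    u = [1.0 if i == n // 2 else 0.0 for i in range(n)]   # delta initial data
    for _ in range(steps):
        u = [u[i] + r * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
             for i in range(n)]
    return max(abs(x) for x in u)

stable = ftcs_max(0.4)     # r <= 1/2: the spike diffuses toward the mean
unstable = ftcs_max(0.6)   # r > 1/2: the Nyquist mode grows without bound
```

The amplification factor of a Fourier mode is 1 - 4r sin²(θ/2), so the worst mode (θ = π) has factor 1 - 4r, which exceeds 1 in magnitude exactly when r > 1/2; implicit schemes trade this restriction for the cost of a linear solve per step.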
Methods for simulating turbulent phase screen
International Nuclear Information System (INIS)
Zhang Jianzhu; Zhang Feizhou; Wu Yi
2012-01-01
Some methods for simulating turbulent phase screens are summarized, and their characteristics are analyzed by calculating the phase structure function, decomposing phase screens into Zernike polynomials, and simulating laser propagation in the atmosphere. The analysis shows that the turbulent high-frequency components are well contained in phase screens simulated by the FFT method, but the low-frequency components are poorly contained. The low-frequency components are well contained in screens simulated by the Zernike method, but the high-frequency components are not contained sufficiently. The high-frequency content can be improved by increasing the order of the Zernike polynomials, but it then mainly lies in the edge area. Compared with the two methods above, the fractal method is a better way to simulate turbulent phase screens. Judging by the radius of the focal spot and the variance of the focal spot jitter, all methods except the fractal method have limitations. Combining the FFT and Zernike methods, or combining the FFT method with self-similar theory, is an effective and appropriate way to simulate turbulent phase screens. In general, the fractal method is probably the best way. (authors)
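The FFT-style spectral synthesis behind the first method can be sketched in one dimension (illustrative only: the -11/6 amplitude exponent, standing in for a Kolmogorov-like k^(-11/3) power spectrum, and all sizes are assumptions, and a naive harmonic sum replaces the FFT for clarity):

```python
import math, random

random.seed(4)

def phase_screen(n=128, exponent=-11.0 / 6.0):
    """Sum harmonics with power-law amplitudes and random phases."""
    screen = [0.0] * n
    for k in range(1, n // 2):              # skip k = 0 (piston term)
        amp = k ** exponent                 # sqrt of the power spectrum
        phi = random.uniform(0.0, 2.0 * math.pi)
        for i in range(n):
            screen[i] += amp * math.cos(2.0 * math.pi * k * i / n + phi)
    return screen

s = phase_screen()
```

The low-frequency deficiency the abstract describes is visible here: the sum starts at k = 1, so fluctuations on scales larger than the grid are simply absent, which is why subharmonic or Zernike corrections are added in practice.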
Numerical simulations of topological and correlated quantum matter
Energy Technology Data Exchange (ETDEWEB)
Assaad, Fakher F. [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik
2016-11-01
The complexity of the solid state does not allow us to carry out simulations of correlated materials without adopting approximation schemes. In this project we tackle this daunting task with complementary techniques. On the one hand, one can start with density functional theory in the local density approximation and then add dynamical local interactions using the so-called dynamical mean-field approximation. This approach has the merit of being material dependent, in the sense that it is possible to include the specific chemical constituents of the material under investigation. Progress in this domain is described below. Another avenue is to concentrate on phenomena occurring in a class of materials. Here, the strategy is to define models which one can simulate in polynomial time on supercomputing architectures and which reproduce the phenomena under investigation. This route has been remarkably successful, and we are now in a position to provide controlled model calculations which can cope with antiferromagnetic fluctuations in metals, or nematic instabilities of Fermi liquids. Both phenomena are crucial for our understanding of high-temperature superconductivity in the cuprates and the pnictides. Access to the LRZ supercomputing center was imperative during the current grant period for the relevant simulations on a wide range of topics in correlated electrons. In all cases, access to supercomputing facilities allows us to carry out simulations on larger and larger system sizes so as to be able to extrapolate to the thermodynamic limit relevant for the understanding of experiments and collective phenomena.
Detector Simulation: Data Treatment and Analysis Methods
Apostolakis, J
2011-01-01
Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...
Isogeometric methods for numerical simulation
Bordas, Stéphane
2015-01-01
The book presents the state of the art in isogeometric modeling and shows how the method has advanced. First, an introduction to geometric modeling with NURBS and T-splines is given, followed by its implementation in computer software. The implementation in both the FEM and BEM is discussed.
Partial correlation analysis method in ultrarelativistic heavy-ion collisions
Olszewski, Adam; Broniowski, Wojciech
2017-11-01
We argue that statistical data analysis of two-particle longitudinal correlations in ultrarelativistic heavy-ion collisions may be efficiently carried out with the technique of partial covariance. In this method, the spurious event-by-event fluctuations due to imprecise centrality determination are eliminated via projecting out the component of the covariance influenced by the centrality fluctuations. We bring up the relationship of the partial covariance to the conditional covariance. Importantly, in the superposition approach, where hadrons are produced independently from a collection of sources, the framework allows us to impose centrality constraints on the number of sources rather than hadrons, thereby unfolding the trivial fluctuations from statistical hadronization and focusing better on the initial-state physics. We show, using simulated data from hydrodynamics followed with statistical hadronization, that the technique is practical and very simple to use, giving insight into the correlations generated in the initial stage. We also discuss the issues related to separation of the short- and long-range components of the correlation functions and show that in our example the short-range component from the resonance decays is largely reduced by considering pions of the same sign. We demonstrate the method explicitly on the cases where centrality is determined with a single central control bin or with two peripheral control bins.
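The projection at the heart of the method is the textbook partial covariance: for observables A, B and a control variable Z, cov(A, B | Z) = cov(A, B) - cov(A, Z) cov(Z, Z)⁻¹ cov(Z, B). A toy sketch where all correlation between A and B is induced by Z (a stand-in for the centrality control bin; all numbers made up), so the partial covariance vanishes:

```python
import random

random.seed(5)

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

n = 100_000
z = [random.gauss(0, 1) for _ in range(n)]          # "centrality" fluctuation
a = [zi + random.gauss(0, 0.3) for zi in z]         # observable in bin A
b = [zi + random.gauss(0, 0.3) for zi in z]         # observable in bin B

raw = cov(a, b)                                     # ~1: dominated by Z
partial = cov(a, b) - cov(a, z) * cov(b, z) / cov(z, z)   # ~0
```

With a vector of control bins, cov(Z, Z) becomes a matrix and the scalar division becomes a linear solve, but the structure is identical.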
Fast methods for spatially correlated multilevel functional data
Staicu, A.-M.
2010-01-19
We propose a new methodological framework for the analysis of hierarchical functional data when the functions at the lowest level of the hierarchy are correlated. For small data sets, our methodology leads to a computational algorithm that is orders of magnitude more efficient than its closest competitor (seconds versus hours). For large data sets, our algorithm remains fast and has no current competitors. Thus, in contrast to published methods, we can now conduct routine simulations, leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where the object of inference are functions or images that remain dependent even after conditioning on the subject on which they are measured. Supplementary materials are available at Biostatistics online.
A Task-Oriented Disaster Information Correlation Method
Linyao, Q.; Zhiqiang, D.; Qing, Z.
2015-07-01
With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulated data, and disaster products. However, current data management and service systems have become increasingly inefficient due to the variety of tasks and the heterogeneity of the data. For emergency task-oriented applications, data searches primarily rely on manual experience based on simple metadata indices, whose high time consumption and low accuracy cannot satisfy the speed and veracity requirements for disaster products. In this paper, a task-oriented correlation method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward a disaster task ontology and a data ontology to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple data sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the correlation between each data set and a certain task. The method goes beyond traditional static management of disaster data and establishes a basis for intelligent retrieval and active dissemination of disaster information. The case study presented in this paper illustrates the use of the method on an example flood emergency relief task.
Limitations of correlation-based redatuming methods
Barrera P, D. F.; Schleicher, J.; van der Neut, J.
2017-12-01
Redatuming aims to correct seismic data for the consequences of an acquisition far from the target. That includes the effects of an irregular acquisition surface and of complex geological structures in the overburden such as strong lateral heterogeneities or layers with low or very high velocity. Interferometric techniques can be used to relocate sources to positions where only receivers are available and have been used to move acquisition geometries to the ocean bottom or transform data between surface-seismic and vertical seismic profiles. Even if no receivers are available at the new datum, the acquisition system can be relocated to any datum in the subsurface to which the propagation of waves can be modeled with sufficient accuracy. By correlating the modeled wavefield with seismic surface data, one can carry the seismic acquisition geometry from the surface closer to geologic horizons of interest. Specifically, we show the derivation and approximation of the one-sided seismic interferometry equation for surface-data redatuming, conveniently using Green’s theorem for the Helmholtz equation with density variation. Our numerical examples demonstrate that correlation-based single-boundary redatuming works perfectly in a homogeneous overburden. If the overburden is inhomogeneous, primary reflections from deeper interfaces are still repositioned with satisfactory accuracy. However, in this case artifacts are generated as a consequence of incorrectly redatumed overburden multiples. These artifacts get even worse if the complete wavefield is used instead of the direct wavefield. Therefore, we conclude that correlation-based interferometric redatuming of surface-seismic data should always be applied using direct waves only, which can be approximated with sufficient quality if a smooth velocity model for the overburden is available.
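The kernel of correlation-based redatuming is that cross-correlating a recorded trace with a modeled direct wavefield collapses the common travel path to a time shift. A 1-D toy sketch (all numbers illustrative) recovering that shift from the correlation peak:

```python
import math

def ricker(t, f=25.0):
    """Ricker wavelet with peak frequency f (Hz)."""
    a = (math.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

dt, n, true_lag = 0.002, 400, 57        # 2 ms sampling, 0.114 s delay
direct = [ricker((i - 50) * dt) for i in range(n)]               # modeled wave
recorded = [ricker((i - 50 - true_lag) * dt) for i in range(n)]  # surface data

def xcorr_argmax(x, y, max_lag=120):
    """Lag (in samples) maximizing the cross-correlation of x with y."""
    best, best_lag = -1e30, 0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(x[i] * y[i - lag] for i in range(max(0, lag), min(n, n + lag)))
        if s > best:
            best, best_lag = s, lag
    return best_lag

lag = xcorr_argmax(recorded, direct)    # recovers true_lag
```

In the full interferometric setting the correlation is done per source and summed over sources, and, as the abstract warns, events other than the direct wave (overburden multiples) correlate too, producing the artifacts discussed above.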
Spectral Methods in Numerical Plasma Simulation
DEFF Research Database (Denmark)
Coutsias, E.A.; Hansen, F.R.; Huld, T.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded...
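The defining trick of spectral methods is that differentiation of smooth periodic data becomes multiplication by ik in Fourier space, with accuracy limited essentially by machine precision. A minimal sketch (a naive O(N²) DFT replaces the FFT for clarity):

```python
import cmath, math

n = 32
xs = [2.0 * math.pi * j / n for j in range(n)]
u = [math.sin(x) for x in xs]            # u(x) = sin x, so u'(x) = cos x

# forward DFT
U = [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
     for k in range(n)]

def wavenumber(k):
    """Map DFT index to the signed wavenumber."""
    return k if k < n // 2 else k - n

# multiply by i*k, then invert
V = [1j * wavenumber(k) * U[k] for k in range(n)]
du = [sum(V[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)).real / n
      for j in range(n)]

err = max(abs(du[j] - math.cos(xs[j])) for j in range(n))
```

For the doubly periodic Euler problems mentioned above the same idea applies dimension by dimension; on an annulus the periodic direction keeps the Fourier basis while the radial direction uses a polynomial (e.g. Chebyshev) basis.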
Evaluation of structural reliability using simulation methods
Directory of Open Access Journals (Sweden)
Baballëku Markel
2015-01-01
Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts in the design of structures, but the problems of structural engineering are better understood through them. Some of the main methods for the estimation of the probability of failure are exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo simulation is used in this paper because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer-aided simulations of a large number of tests. The procedure of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes are demonstrated in this paper.
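A minimal Monte Carlo version of this idea (distribution parameters below are made up for illustration, not the paper's pier model): estimate the probability of failure p_f for a limit state g = R - S and relate it to the reliability index via p_f = Φ(-β).

```python
import math, random

random.seed(6)

mu_r, sd_r = 12.0, 1.5     # resistance R ~ N(mu_r, sd_r)
mu_s, sd_s = 7.0, 1.0      # load effect S ~ N(mu_s, sd_s)

# Monte Carlo: count realizations with g = R - S < 0
n = 400_000
fails = sum(random.gauss(mu_r, sd_r) - random.gauss(mu_s, sd_s) < 0.0
            for _ in range(n))
p_f = fails / n

# exact answer for two independent normals:
# beta = (mu_r - mu_s) / sqrt(sd_r^2 + sd_s^2), p_f = Phi(-beta)
beta_exact = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
p_exact = 0.5 * math.erfc(beta_exact / math.sqrt(2.0))
```

The normal-normal case has a closed form, which makes it a convenient check; the value of Monte Carlo is that the same counting loop works unchanged for non-normal variables and nonlinear limit-state functions, at the cost of needing roughly 10/p_f samples for a usable estimate.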
2-d Simulations of Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm
2004-01-01
One of the main obstacles for the further development of self-compacting concrete is to relate the fresh concrete properties to the form filling ability. Therefore, simulation of the form filling ability will provide a powerful tool in obtaining this goal. In this paper, a continuum mechanical approach is presented by showing initial results from 2-d simulations of the empirical test methods slump flow and L-box. This method assumes a homogeneous material, which is expected to correspond to particle suspensions, e.g. concrete, when it remains stable. The simulations have been carried out using both a Newton and a Bingham model for characterisation of the rheological properties of the concrete. From the results, it is expected that both the slump flow and L-box can be simulated quite accurately when the model is extended to 3-d and the concrete is characterised according to the Bingham model.
Novel Methods for Electromagnetic Simulation and Design
2016-08-03
We developed new methods that provide the basis for high-fidelity modeling software that can handle complicated, electrically large objects in a manner that is sufficiently fast to allow design by simulation. We also developed new methods for scattering from cavities.
Matrix method for acoustic levitation simulation.
Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C
2011-08-01
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
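The building block of the matrix method is the Rayleigh integral. A sketch (frequency, piston size and medium values are illustrative assumptions, not the paper's 37.9-kHz Langevin setup) checking a discretized Rayleigh sum against the exact on-axis pressure of a baffled circular piston:

```python
import cmath, math

c, rho, f, v = 343.0, 1.2, 40e3, 1.0   # air, 40 kHz, unit surface velocity
k = 2.0 * math.pi * f / c
a, z = 0.01, 0.05                      # piston radius, on-axis field point (m)

# discretized Rayleigh integral: p = (j*omega*rho*v / 2*pi) * sum e^{-jkR}/R dS
m = 200
d = 2.0 * a / m                        # element size
p = 0.0 + 0.0j
for ix in range(m):
    for iy in range(m):
        x = -a + (ix + 0.5) * d
        y = -a + (iy + 0.5) * d
        if x * x + y * y <= a * a:     # keep elements inside the piston
            R = math.sqrt(x * x + y * y + z * z)
            p += cmath.exp(-1j * k * R) / R * d * d
p *= 1j * f * rho * v                  # = j*omega*rho*v / (2*pi)

# exact on-axis solution for the baffled circular piston
p_exact = rho * c * v * (cmath.exp(-1j * k * z)
                         - cmath.exp(-1j * k * math.sqrt(z * z + a * a)))
```

The matrix method extends this elementary sum by propagating the field back and forth between transducer and reflector surfaces, so that the multiple reflections appear as repeated matrix-vector products.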
Correlation of simulated TEM images with irradiation induced damage
International Nuclear Information System (INIS)
Schaeublin, R.; Almeida, P. de; Almazouzi, A.; Victoria, M.
2000-01-01
Crystal damage induced by irradiation is investigated using transmission electron microscopy (TEM) coupled to molecular dynamics (MD) calculations. The displacement cascades are simulated for energies ranging from 10 to 50 keV in Al, Ni and Cu and for times of up to a few tens of picoseconds. The samples are then used to perform simulations of the TEM images that one could observe experimentally. Diffraction contrast is simulated using a method based on the multislice technique. It appears that the cascade-induced damage in Al imaged in weak beam exhibits little contrast, which is too low to be experimentally visible, while in Ni and Cu a good contrast is observed. The number of visible clusters is always lower than the actual one. Conversely, high resolution TEM (HRTEM) imaging allows most of the defects contained in the sample to be observed, although experimental difficulties arise due to the low contrast intensity of the smallest defects. Single point defects give rise in HRTEM to a contrast that is similar to that of cavities. TEM imaging of the defects is discussed in relation to the actual size of the defects and to the number of clusters deduced from MD simulations
Simulation teaching method in Engineering Optics
Lu, Qieni; Wang, Yi; Li, Hongbin
2017-08-01
We here introduce a pedagogical method of theoretical simulation as one major means of the teaching process of "Engineering Optics" in the course quality improvement action plan (Qc) at our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism and polarization of light; each student is evaluated and scored in light of his or her performance in interviews between the teacher and the student, and each student can opt to be interviewed several times until satisfied with the score and the learning. After three years of Qc practice, remarkable teaching and learning effects have been obtained. Such theoretical simulation experiments are a valuable teaching method for physical optics, which is highly theoretical and abstruse. This teaching methodology works well in training students how to ask questions and how to solve problems, and it also stimulates their interest in research-oriented learning and their initiative to develop self-confidence and a sense of innovation.
Hybrid Method Simulation of Slender Marine Structures
DEFF Research Database (Denmark)
Christiansen, Niels Hørbye
The present thesis consists of an extended summary and five appended papers concerning various aspects of the implementation of a hybrid method which combines classical simulation methods and artificial neural networks. The thesis covers three main topics. Common for all these topics...... only recognize patterns similar to those comprised in the data used to train the network. Fatigue life evaluation of marine structures often considers simulations of more than a hundred different sea states. Hence, in order for this method to be useful, the training data must be arranged so...... that a single neural network can cover all relevant sea states. The applicability and performance of the present hybrid method is demonstrated on a numerical model of a mooring line attached to a floating offshore platform. The second part of the thesis demonstrates how sequential neural networks can be used...
A Simulation Method Measuring Psychomotor Nursing Skills.
McBride, Helena; And Others
1981-01-01
The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…
An Efficient Simulation Method for Rare Events
Rached, Nadhir B.
2015-01-07
Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
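As a sketch of the idea behind hazard rate twisting, consider estimating P(X1 + ... + Xn > gamma) for i.i.d. exponential random variables, where the twisted density is again exponential and the likelihood ratio is available in closed form. The tilt parameter below (chosen so that the tilted sum has mean gamma) is a standard illustrative choice, not the paper's exact selection rule:

```python
import numpy as np
from math import exp, factorial

def is_tail_prob(n, gamma, num_samples=200_000, seed=0):
    """Importance-sampling estimate of P(X1+...+Xn > gamma), Xi ~ Exp(1),
    via exponential (hazard-rate) twisting with parameter theta."""
    rng = np.random.default_rng(seed)
    theta = 1.0 - n / gamma                     # tilt so the twisted sum has mean gamma
    x = rng.exponential(scale=1.0 / (1.0 - theta), size=(num_samples, n))
    s = x.sum(axis=1)
    # likelihood ratio: prod_i f(x_i)/f_theta(x_i) = exp(-theta*s) / (1-theta)^n
    weights = np.exp(-theta * s) / (1.0 - theta) ** n
    return float(np.mean(weights * (s > gamma)))

def exact_tail_prob(n, gamma):
    """Exact Gamma(n, 1) tail: P(S > gamma) = e^-gamma * sum_{k<n} gamma^k / k!."""
    return exp(-gamma) * sum(gamma ** k / factorial(k) for k in range(n))
```

For n = 5 and gamma = 30 the target probability is about 3.6e-9, far beyond the reach of crude Monte Carlo with this sample size, yet the tilted estimator recovers it to within a few percent.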
Finite element formulation for a digital image correlation method
International Nuclear Information System (INIS)
Sun Yaofeng; Pang, John H. L.; Wong, Chee Khuen; Su Fei
2005-01-01
A finite element formulation for a digital image correlation method is presented that determines directly the complete, two-dimensional displacement field during the image correlation process on digital images. The entire image area of interest is discretized into finite elements that are involved in the common image correlation process by use of our algorithms. This image correlation method with finite element formulation has an advantage over subset-based image correlation methods because it satisfies the requirements of displacement continuity and derivative continuity among elements on images. Numerical studies and a real experiment are used to verify the proposed formulation. Results have shown that the image correlation with the finite element formulation is computationally efficient, accurate, and robust.
Nuclear spin measurement using the angular correlation method
International Nuclear Information System (INIS)
Schapira, J.-P.
The double angular correlation method is defined by a semi-classical approach (Biedenharn). The equivalent quantum-mechanical formulas are discussed for coherent and incoherent angular momentum mixing; the correlations are described in terms of the density and efficiency matrices (Fano). The ambiguities in double angular correlations can sometimes be suppressed (emission of particles with a high orbital momentum l) by using triple correlations between levels with well-defined spin and parity. Triple correlations are applied to the case where the direction of linear polarization of the γ-rays is detected. [fr]
Simulation methods for nuclear production scheduling
International Nuclear Information System (INIS)
Miles, W.T.; Markel, L.C.
1975-01-01
Recent developments and applications of simulation methods for use in nuclear production scheduling and fuel management are reviewed. The unique characteristics of the nuclear fuel cycle as they relate to the overall optimization of a mixed nuclear-fossil system in both the short- and mid-range time frames are described. Emphasis is placed on the various formulations and approaches to the mid-range planning problem, whose objective is the determination of an optimal (least-cost) system operation strategy over a multi-year planning horizon. The decomposition of the mid-range problem into power system simulation, reactor core simulation and nuclear fuel management optimization, and system integration models is discussed. Present utility practices, requirements, and research trends are described. 37 references.
Branciard, Cyril; Gisin, Nicolas
2011-07-08
The simulation of quantum correlations with finite nonlocal resources, such as classical communication, gives a natural way to quantify their nonlocality. While multipartite nonlocal correlations appear to be useful resources, very little is known on how to simulate multipartite quantum correlations. We present a protocol that reproduces tripartite Greenberger-Horne-Zeilinger correlations with bounded communication: 3 bits in total turn out to be sufficient to simulate all equatorial Von Neumann measurements on the tripartite Greenberger-Horne-Zeilinger state.
Lagrangian numerical methods for ocean biogeochemical simulations
Paparella, Francesco; Popolizio, Marina
2018-05-01
We propose two closely related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods augment the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.
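A minimal one-dimensional sketch of this class of methods: particles are advected along characteristics, and a symmetric nearest-neighbour exchange mimics diffusion while conserving mass exactly and respecting the maximum principle. The nearest-neighbour kernel and the coupling strength lam are illustrative stand-ins for the paper's particle couplings:

```python
import numpy as np

def step(x, c, u, dt, lam):
    """One Lagrangian step on the periodic unit interval: advect particles
    along characteristics, then apply antisymmetric pairwise fluxes between
    neighbours that mimic diffusion. Conserves sum(c) exactly and obeys the
    maximum principle for lam <= 0.5 (convex combination of neighbours)."""
    x = (x + u * dt) % 1.0                  # advection (method of characteristics)
    order = np.argsort(x)                   # couple particles to spatial neighbours
    cs = c[order]
    flux = lam * (np.roll(cs, -1) - cs)     # flux from right neighbour into particle
    cs = cs + flux - np.roll(flux, 1)       # what leaves one particle enters the next
    out = np.empty_like(c)
    out[order] = cs
    return x, out
```

Because each update is a convex combination of neighbouring values, concentrations never overshoot the initial extremes, which is the discrete maximum principle the abstract refers to.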
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Simulating colloid hydrodynamics with lattice Boltzmann methods
International Nuclear Information System (INIS)
Cates, M E; Stratford, K; Adhikari, R; Stansell, P; Desplat, J-C; Pagonabarraga, I; Wagner, A J
2004-01-01
We present a progress report on our work on lattice Boltzmann methods for colloidal suspensions. We focus on the treatment of colloidal particles in binary solvents and on the inclusion of thermal noise. For a benchmark problem of colloids sedimenting and becoming trapped by capillary forces at a horizontal interface between two fluids, we discuss the criteria for parameter selection, and address the inevitable compromise between computational resources and simulation accuracy
An improved method for simulating radiographs
International Nuclear Information System (INIS)
Laguna, G.W.
1986-01-01
The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials
Numerical Simulation of the Heston Model under Stochastic Correlation
Directory of Open Access Journals (Sweden)
Long Teng
2017-12-01
Stochastic correlation models have become increasingly important in financial markets. In order to price vanilla options in stochastic volatility and correlation models, we study the extension of the Heston model by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
Spectral methods in numerical plasma simulation
International Nuclear Information System (INIS)
Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
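The core of such a spectral solver can be sketched for the doubly periodic case, where Poisson's equation diagonalizes under the 2-D Fourier transform (the annulus solver described in the abstract uses a Chebyshev-Fourier basis instead). A minimal sketch on [0, 2π)²:

```python
import numpy as np

def solve_poisson_periodic(f):
    """Spectral solve of laplacian(u) = f on [0, 2*pi)^2 with periodic BCs.
    Assumes f has zero mean (solvability); returns the zero-mean solution."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers 0, 1, ..., -1
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                          # avoid division by zero at the mean mode
    u_hat = -np.fft.fft2(f) / k2            # -k^2 u_hat = f_hat
    u_hat[0, 0] = 0.0                       # fix the arbitrary additive constant
    return np.real(np.fft.ifft2(u_hat))
```

For band-limited right-hand sides the solution is exact to machine precision, which is the spectral accuracy the abstract exploits.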
Electromagnetic simulation using the FDTD method
Sullivan, Dennis M
2013-01-01
A straightforward, easy-to-read introduction to the finite-difference time-domain (FDTD) method Finite-difference time-domain (FDTD) is one of the primary computational electrodynamics modeling techniques available. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run and treat nonlinear material properties in a natural way. Written in a tutorial fashion, starting with the simplest programs and guiding the reader up from one-dimensional to the more complex, three-dimensional programs, this book provides a simple, yet comp
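A minimal one-dimensional free-space version of the kind of program the book builds up from can be sketched as follows (normalized units, Courant number 0.5, Gaussian hard source; the grid size and source parameters are illustrative):

```python
import numpy as np

def fdtd_1d(nz=200, nsteps=100, src=100):
    """Minimal 1-D free-space FDTD (Yee scheme) in normalized units.
    Courant number 0.5: the pulse travels half a cell per time step."""
    ez = np.zeros(nz)
    hy = np.zeros(nz)
    for t in range(nsteps):
        ez[1:] += 0.5 * (hy[:-1] - hy[1:])                   # update E from curl of H
        ez[src] = np.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)    # Gaussian hard source
        hy[:-1] += 0.5 * (ez[:-1] - ez[1:])                  # update H from curl of E
    return ez, hy
```

After 100 steps the pulse injected around step 30 has split and propagated roughly 35 cells to either side of the source, as expected from the 0.5 cells-per-step propagation speed.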
Development of digital image correlation method to analyse crack ...
Indian Academy of Sciences (India)
samples were performed to verify the performance of the digital image correlation method. ... development cannot be measured accurately. ...
Directory of Open Access Journals (Sweden)
Kolgotin Alexei
2016-01-01
Correlation relationships between aerosol microphysical parameters and optical data are investigated. The results show that surface-area concentrations and extinction coefficients are linearly correlated, with a correlation coefficient above 0.99 for arbitrary particle size distributions. The correlation relationships that we obtained can be used as constraints in the inversion of optical lidar data. Simulation studies demonstrate a significant stabilization of aerosol microphysical data products if we apply the gradient correlation method in our traditional regularization technique.
International Nuclear Information System (INIS)
Morales, J.J.; Nuevo, J.M.; Rull, L.F.
1987-01-01
The new isothermal-isobaric MD(T,p,N) method of Nosé and Hoover is applied in molecular dynamics simulations of both the liquid and the solid near the phase transition. We tested for an appropriate value of the isobaric friction coefficient before calculating the correlation length in the liquid and the number of disclinations per particle in the solid, on a large system of 2304 particles. The results are compared with those obtained by traditional MD simulation (E,V,N). (author)
Cross-Correlation-Function-Based Multipath Mitigation Method for Sine-BOC Signals
Directory of Open Access Journals (Sweden)
H. H. Chen
2012-06-01
Global Navigation Satellite System (GNSS) positioning accuracy in indoor and urban-canyon environments is greatly affected by multipath, due to distortions in the autocorrelation function. In this paper, the cross-correlation function between the received sine-phased Binary Offset Carrier (sine-BOC) modulated signal and the local signal is first studied, and a new multipath mitigation method based on this cross-correlation function is proposed. The method creates the cross-correlation function by designing the modulated symbols of the local signal. Theoretical analysis and simulation results indicate that the proposed method exhibits better multipath mitigation performance than traditional Double Delta Correlator (DDC) techniques, especially for medium/long-delay multipath signals, and it is also convenient and flexible to implement, using only one correlator, which is the case for low-cost mass-market receivers.
Meshless Method for Simulation of Compressible Flow
Nabizadeh Shahrebabak, Ebrahim
In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means for analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist for performing these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques, and mesh generation is an essential preprocessing step to discretize the computational domain. However, when dealing with complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust, yet simple numerical approach is used to simulate even complex problems in an easier manner. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to make this method more popular and understandable, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high-speed compressible flow
Quantum control with NMR methods: Application to quantum simulations
International Nuclear Information System (INIS)
Negrevergne, Camille
2002-01-01
Manipulating information according to quantum laws allows improvements in the efficiency with which we treat certain problems. Liquid-state nuclear magnetic resonance methods allow us to initialize, manipulate and read the quantum state of a system of coupled spins. These methods have been used to realize a small experimental Quantum Information Processor (QIP) able to process information through around one hundred elementary operations. One of the main themes of this work was to design, optimize and validate reliable RF-pulse sequences used to 'program' the QIP. Such techniques have been used to run a quantum simulation algorithm for fermionic systems. Experimental results have been obtained on the determination of eigenenergies and correlation functions for a toy problem consisting of fermions on a lattice, showing an experimental proof of principle for such quantum simulations. (author) [fr]
On the boundary conditions and optimization methods in integrated digital image correlation
Kleinendorst, S.M.; Verhaegh, B.J.; Hoefnagels, J.P.M.; Ruybalid, A.; van der Sluis, O.; Geers, M.G.D.; Lamberti, L.; Lin, M.-T.; Furlong, C.; Sciammarella, C.
2018-01-01
In integrated digital image correlation (IDIC) methods, attention must be paid not only to the influence of using a correct geometric and material model, but also to making the boundary conditions in the FE simulation match the real experiment. Another issue is the robustness and convergence of the IDIC
A new method for simulating human emotions
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
How to make machines express emotions would be instrumental in establishing a completely new paradigm for man-machine interaction. A new method for simulating and assessing artificial psychology has been developed for research on emotion robots. Human psychological activity is regarded as a Markov process, and an emotion space and psychology model is constructed based on this Markov process. The concept of emotion entropy is presented to assess the complexity of artificial emotion. The simulation results accord well with human psychological activity. This model can also be applied to consumer-friendly human-computer interfaces, interactive video, etc.
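A sketch of such a Markov emotion model, with a hypothetical four-state emotion space and transition matrix (the state names and probabilities below are illustrative assumptions, not the paper's):

```python
import numpy as np

STATES = ["calm", "happy", "sad", "angry"]      # hypothetical emotion space
P = np.array([[0.70, 0.20, 0.05, 0.05],         # row-stochastic transition matrix
              [0.30, 0.60, 0.05, 0.05],
              [0.30, 0.10, 0.50, 0.10],
              [0.40, 0.10, 0.10, 0.40]])

def simulate(steps, start=0, seed=0):
    """Sample an emotion trajectory from the Markov chain."""
    rng = np.random.default_rng(seed)
    s, path = start, [start]
    for _ in range(steps):
        s = int(rng.choice(len(STATES), p=P[s]))
        path.append(s)
    return path

def emotion_entropy(p):
    """Shannon entropy (bits) of an emotion distribution, one way to
    quantify the 'emotion entropy' complexity measure."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

A uniform distribution over the four states gives the maximal entropy of 2 bits; more peaked emotion distributions score lower, i.e. are less complex.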
Comparison of validation methods for forming simulations
Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus
2018-05-01
The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.
Atmospheric pollution measurement by optical cross correlation methods - A concept
Fisher, M. J.; Krause, F. R.
1971-01-01
Method combines standard spectroscopy with statistical cross correlation analysis of two narrow light beams for remote sensing to detect foreign matter of given particulate size and consistency. Method is applicable in studies of generation and motion of clouds, nuclear debris, ozone, and radiation belts.
Correlation between different methods of intra- abdominal pressure ...
African Journals Online (AJOL)
This study aimed to determine the correlation between transvesical ... circumstances may arise where this method is not viable and alternative methods ...
A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising
Directory of Open Access Journals (Sweden)
Can He
2015-01-01
Due to its simple calculation and good denoising effect, the wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve on the existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method achieves a good denoising effect under various signal types, noise intensities, and thresholding functions.
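As a baseline illustration of wavelet threshold denoising (single-level Haar with soft thresholding; the paper's interscale-correlation threshold would replace the fixed threshold used here):

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet soft-threshold denoising (len(x) must be even).
    A baseline sketch: the detail coefficients are soft-thresholded and the
    signal is reconstructed with the inverse Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)                    # approximation coeffs
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)                    # detail coeffs
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)   # soft thresholding
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)                          # inverse transform
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

With the universal threshold sigma * sqrt(2 * ln n), the thresholded detail channel discards mostly noise, so the reconstruction error drops below that of the noisy input for smooth signals.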
Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard
2014-01-01
The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for the construction of CIs for the CCC, but a comprehensive comparison of them has not been attempted. The methods considered here are the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based on the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
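The jackknife-with-Fisher's-Z (JZ) construction favoured by the study can be sketched as follows; the 1.96 critical value assumes a 95% normal-approximation interval, and the two-rater case is shown for simplicity:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for two raters."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def ccc_ci_jackknife_z(x, y):
    """95% CI for the CCC via jackknifing on Fisher's Z scale (the 'JZ' method)."""
    n = len(x)
    z_all = np.arctanh(ccc(x, y))
    loo = np.array([np.arctanh(ccc(np.delete(x, i), np.delete(y, i)))
                    for i in range(n)])                # leave-one-out estimates
    pseudo = n * z_all - (n - 1) * loo                 # jackknife pseudovalues
    z_bar = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    crit = 1.96                                        # normal approximation, alpha = 0.05
    return np.tanh(z_bar - crit * se), np.tanh(z_bar + crit * se)
```

Back-transforming with tanh keeps the interval inside (-1, 1), which is one reason the Z-scale jackknife tends to give better coverage than working on the raw CCC scale.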
Correlations between different methods of UO2 pellet density measurement
International Nuclear Information System (INIS)
Yanagisawa, Kazuaki
1977-07-01
The density of UO2 pellets was measured by three different methods, i.e., geometrical, water-immersion and meta-xylene-immersion, and the results were treated statistically to find the correlations between the methods. The UO2 pellets are of six kinds but have the same specifications. The correlations are linear 1:1 for pellets of 95% theoretical density and above, but below that level they do not hold and vary statistically due to the interaction between open and closed pores. (auth.)
Isotope correlations for safeguards surveillance and accountancy methods
International Nuclear Information System (INIS)
Persiani, P.J.; Kalimullah.
1982-01-01
Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential to serve as a safeguards surveillance and accountancy system. The ICT allows the verification of: fabricator's uranium and plutonium content specifications, shipper/receiver differences between fabricator output and reactor input, reactor plant inventory changes, reprocessing batch specifications and shipper/receiver differences between reactor output and reprocessing plant input. The investigation indicates that there exist predictable functional relationships (i.e. correlations) between isotopic concentrations over a range of burnup. Several cross-correlations serve to establish the initial fuel assembly-averaged compositions. The selection of the more effective correlations will depend not only on the level of reliability of ICT for verification, but also on the capability, accuracy and difficulty of developing measurement methods. The propagation of measurement errors through the correlations has been examined to identify the sensitivity of the isotope correlations to measurement errors, and to establish criteria for measurement accuracy in the development and selection of measurement methods. 6 figures, 3 tables
Novel hybrid optical correlator: theory and optical simulation.
Casasent, D; Herold, R L
1975-02-01
The inverse transform of the product of two Fourier transform holograms is analyzed and shown to contain the correlation of the two images from which the holograms were formed. The theory, analysis, and initial experimental demonstration of the feasibility of a novel correlation scheme using this multiplied Fourier transform hologram system are presented.
Isotope correlations for safeguards surveillance and accountancy methods
International Nuclear Information System (INIS)
Persiani, P.J.; Kalimullah.
1983-01-01
Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential to serve as a safeguards surveillance and accountancy system. The US/DOE/OSS Isotope Correlations for Surveillance and Accountancy Methods (ICSAM) program has been structured into three phases: (1) the analytical development of the Isotope Correlation Technique (ICT) for actual power reactor fuel cycles; (2) the development of a dedicated portable ICT computer system for in-field implementation; and (3) the experimental program for measurement of U and Pu isotopics in representative spent fuel rods from the initial 3 or 4 burnup cycles of the Commonwealth Edison Zion-1 and Zion-2 PWR power plants. Since any particular correlation can generate different curves depending upon the type and positioning of the fuel assembly, a 3-D reactor model and 2-group cross-section depletion calculation for the first cycle of Zion-2 was performed with each fuel assembly as a depletion block. It is found that, for a given PWR, all assemblies with a unique combination of enrichment zone and number of burnable poison rods (BPRs) generate one coincident curve. Some correlations are found to generate a single curve for assemblies of all enrichments and numbers of BPRs. The 8 axial segments of the 3-D calculation generate one coincident curve for each correlation. For some correlations the curve for the full assembly homogenized over core height deviates from the curve for the 8 axial segments, while for other correlations it coincides with the curve for the segments. The former behavior stems primarily from the transmutation lag between the end segments and the middle segments. The experimental implication is that the isotope correlations exhibiting this behavior can be determined by dissolving a full assembly but not by dissolving only an axial segment or pellets.
Distance correlation methods for discovering associations in large astrophysical databases
International Nuclear Information System (INIS)
Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P.
2014-01-01
High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
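The sample distance correlation is straightforward to compute from pairwise distance matrices via double centering; a sketch for one-dimensional samples:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (Székely et al.).
    Zero only under independence; detects nonlinear associations that the
    Pearson coefficient misses."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                     # pairwise distances
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()       # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                                  # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))
```

For y = x**2 with x symmetric about zero, the Pearson coefficient vanishes while the distance correlation remains clearly positive, which is the key property exploited in the paper.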
Hallin, Karin; Häggström, Marie; Bäckström, Britt; Kristiansen, Lisbeth Porskrog
2016-01-01
Background: Health care educators account for variables affecting patient safety and are responsible for developing the highly complex process of education planning. Clinical judgement is a multidimensional process, which may be affected by learning styles. The aim was to explore three specific hypotheses to test correlations between nursing students’ team achievements in clinical judgement and emotional, sociological and physiological learning style preferences. Methods: A descriptive cross-sectional study was conducted with Swedish university nursing students in 2012-2013. Convenience sampling was used with 60 teams with 173 nursing students in the final semester of a three-year Bachelor of Science in nursing programme. Data collection included questionnaires of personal characteristics, learning style preferences (determined by the Dunn and Dunn Productivity Environmental Preference Survey), and videotaped complex nursing simulation scenarios. Assessment with the Lasater Clinical Judgement Rubric and non-parametric analyses were performed. Results: Three significant correlations were found between the team achievements and the students’ learning style preferences: significant negative correlations with ‘Structure’ and ‘Kinesthetic’ at the individual level, and a positive correlation with the ‘Tactile’ variable. No significant correlations with students’ ‘Motivation’, ‘Persistence’, ‘Wish to learn alone’ and ‘Wish for an authoritative person present’ were seen. Discussion and Conclusion: There were multiple complex interactions between the tested learning style preferences and the team achievements of clinical judgement in the simulation room, which provides important information for prospective nurses. Several factors may have influenced the results and should be acknowledged when designing further research. We suggest conducting mixed-methods studies to determine further relationships between team achievements, learning style preferences, cognitive learning outcomes and group processes.
3D Rigid Registration by Cylindrical Phase Correlation Method
Czech Academy of Sciences Publication Activity Database
Bican, Jakub; Flusser, Jan
2009-01-01
Roč. 30, č. 10 (2009), s. 914-921 ISSN 0167-8655 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/1593 Grant - others:GAUK(CZ) 48908 Institutional research plan: CEZ:AV0Z10750506 Keywords : 3D registration * correlation methods * Image registration Subject RIV: BD - Theory of Information Impact factor: 1.303, year: 2009 http://library.utia.cas.cz/separaty/2009/ZOI/bican-3d digit registration by cylindrical phase correlation method.pdf
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, based on the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
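Of the two techniques named above, importance sampling is the easier to demonstrate on a toy rare event. The sketch below is our own illustration, not taken from the book: it estimates P(X > 4) for a standard normal, roughly 3.2e-5, by sampling from a proposal shifted onto the rare region and reweighting by the likelihood ratio; naive Monte Carlo with the same budget of 10^4 samples would usually see zero hits.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

n = 10_000
y = rng.normal(loc=4.0, size=n)                   # proposal: N(4, 1)
log_ratio = -0.5 * y**2 + 0.5 * (y - 4.0) ** 2    # log of N(0,1)/N(4,1) densities
is_estimate = np.mean((y > 4.0) * np.exp(log_ratio))

exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))     # true P(X > 4)
print(is_estimate, exact)
```

The shifted proposal places most samples near the event boundary, so the reweighted estimator has a small relative error where the naive one has essentially none of its samples in the event at all.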
Computational Simulations and the Scientific Method
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
Data analytics using canonical correlation analysis and Monte Carlo simulation
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively small number of combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
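The idea of pairing linear CCA with a Monte Carlo search over non-linear transforms can be illustrated on synthetic data. The sketch below is a rough stand-in for the paper's methodology, with arbitrary toy data and a crude random search over power transforms; it is not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: the output depends quadratically on the first input.
x = rng.normal(size=(1000, 2))
y = x[:, :1] ** 2 + 0.1 * rng.normal(size=(1000, 1))

def first_canonical_corr(X, Y):
    """Largest canonical correlation via SVD of the whitened cross term."""
    qx, _ = np.linalg.qr(X - X.mean(0))
    qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

lin = first_canonical_corr(x, y)      # linear CCA sees almost nothing

# Crude Monte Carlo search over random power transforms of the inputs.
best = lin
for _ in range(200):
    p = rng.integers(1, 4, size=x.shape[1])        # random powers in {1, 2, 3}
    best = max(best, first_canonical_corr(np.abs(x) ** p, y))

print(lin, best)
```

The search eventually tries the square transform, which turns the hidden quadratic dependence into a near-perfect linear correlation, mirroring the enhancement the abstract reports.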
Method of vacuum correlation functions: Results and prospects
International Nuclear Information System (INIS)
Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.
2006-01-01
Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years, in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow), are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, as it does not possess the drawbacks of conventional perturbation theory and leads to the infrared freezing of the coupling constant α_s
Improvement of correlated sampling Monte Carlo methods for reactivity calculations
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Asaoka, Takumi
1978-01-01
Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both unperturbed and perturbed systems are followed in a way to have a strong correlation between secondary neutrons in both the systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so as to be able to estimate also the statistical error of the calculated reactivity change. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has revealed itself more useful than the similar flight path method for the analysis of the control rod worth. (auth.)
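The variance-reduction principle behind correlated sampling, following essentially the same particle histories through the unperturbed and perturbed systems, can be shown on a toy transmission problem. The sketch below illustrates the principle only, with made-up slab thicknesses; it is not the similar/identical flight path algorithms implemented in MORSE:

```python
import numpy as np

rng = np.random.default_rng(2)

def transmitted(u, thickness):
    """1 if a particle with sampled free path -ln(u) crosses the slab."""
    return (-np.log(u) > thickness).astype(float)

n = 100_000
t0, t1 = 1.0, 1.05             # unperturbed / perturbed slab thickness (toy values)

u = rng.uniform(size=n)        # common random numbers -> correlated histories
corr_diff = transmitted(u, t1) - transmitted(u, t0)

v = rng.uniform(size=n)        # fresh numbers -> independent histories
uncorr_diff = transmitted(v, t1) - transmitted(u, t0)

print(corr_diff.mean(), corr_diff.std(), uncorr_diff.std())
```

Both estimators target the same small change in mean transmission, but the correlated difference is nonzero only for the few histories whose outcome actually flips, so its standard deviation (and hence the statistical error of the estimated perturbation) is far smaller.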
Ghanbarzadeh, Mitra; Aminghafari, Mina
2015-05-01
This article studies the prediction of periodically correlated process using wavelet transform and multivariate methods with applications to climatological data. Periodically correlated processes can be reformulated as multivariate stationary processes. Considering this fact, two new prediction methods are proposed. In the first method, we use stepwise regression between the principal components of the multivariate stationary process and past wavelet coefficients of the process to get a prediction. In the second method, we propose its multivariate version without principal component analysis a priori. Also, we study a generalization of the prediction methods dealing with a deterministic trend using exponential smoothing. Finally, we illustrate the performance of the proposed methods on simulated and real climatological data (ozone amounts, flows of a river, solar radiation, and sea levels) compared with the multivariate autoregressive model. The proposed methods give good results as we expected.
Directory of Open Access Journals (Sweden)
Kaushikbhai C. Parmar
2017-04-01
Simulation gives different results when different methods are used for the same simulation. The Autodesk Moldflow Simulation software provides two different facilities for creating a mold for the simulation of the injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time, and coolant temperature, between these two methods.
Fast electronic structure methods for strongly correlated molecular systems
International Nuclear Information System (INIS)
Head-Gordon, Martin; Beran, Gregory J O; Sodt, Alex; Jung, Yousung
2005-01-01
A short review is given of newly developed fast electronic structure methods that are designed to treat molecular systems with strong electron correlations, such as diradicaloid molecules, for which standard electronic structure methods such as density functional theory are inadequate. These new local correlation methods are based on coupled cluster theory within a perfect pairing active space, containing either a linear or quadratic number of pair correlation amplitudes, to yield the perfect pairing (PP) and imperfect pairing (IP) models. This reduces the scaling of the coupled cluster iterations to no worse than cubic, relative to the sixth power dependence of the usual (untruncated) coupled cluster doubles model. A second order perturbation correction, PP(2), to treat the neglected (weaker) correlations is formulated for the PP model. To ensure minimal prefactors, in addition to favorable size-scaling, highly efficient implementations of PP, IP and PP(2) have been completed, using auxiliary basis expansions. This yields speedups of almost an order of magnitude over the best alternatives using 4-center 2-electron integrals. A short discussion of the scope of accessible chemical applications is given
Tracing Method with Intra and Inter Protocols Correlation
Directory of Open Access Journals (Sweden)
Marin Mangri
2009-05-01
MEGACO or H.248 is a protocol enabling a centralized Softswitch (or MGC) to control MGs between Voice over Packet (VoP) networks and traditional ones. To analyze real implementations in much greater depth it is useful to use a tracing system with intra- and inter-protocol correlation. For this reason, in the case of MEGACO-H.248 it is necessary to find the appropriate method of correlation with all protocols involved. Starting from Rel4, a separation of CP (Control Plane) and UP (User Plane) management within the networks appears. The MEGACO protocol plays an important role in the migration to the new releases or from a monolithic platform to a network with distributed components.
Correlation expansion: a powerful alternative multiple scattering calculation method
International Nuclear Information System (INIS)
Zhao Haifeng; Wu Ziyu; Sebilleau, Didier
2008-01-01
We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion
Local Field Response Method Phenomenologically Introducing Spin Correlations
Tomaru, Tatsuya
2018-03-01
The local field response (LFR) method is a way of searching for the ground state in a similar manner to quantum annealing. However, the LFR method operates on a classical machine, and quantum effects are introduced through a priori information and through phenomenological means reflecting the states during the computations. The LFR method has been treated with a one-body approximation, and therefore, the effect of entanglement has not been sufficiently taken into account. In this report, spin correlations are phenomenologically introduced as one of the effects of entanglement, by which multiple tunneling at anticrossing points is taken into account. As a result, the accuracy of solutions for a 128-bit system increases by 31% compared with that without spin correlations.
Numerical method for IR background and clutter simulation
Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio
1997-06-01
The paper describes a fast and accurate algorithm of IR background noise and clutter generation for application in scene simulations. The process is based on the hypothesis that the background may be modeled as a statistical process where the signal amplitude obeys a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and also excellent fidelity to reality, as appears from a comparison with images from IR sensors. The proposed method shows advantages with respect to methods based on the filtering of white noise in the time or frequency domain, as it requires a limited number of computations and, furthermore, it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and by means of growing rules the process is extended to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
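The statistical model assumed above, Gaussian amplitudes with an exponentially decaying correlation, is easy to reproduce in one dimension with an AR(1) recursion. This is a sketch of the model only, not the reticule-growing algorithm of the paper, and the correlation length L is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(4)

L = 10.0                        # correlation length in samples (assumed)
a = np.exp(-1.0 / L)            # AR(1) coefficient giving R(k) = exp(-k/L)
n = 100_000

w = rng.normal(size=n)
x = np.empty(n)
x[0] = w[0]
for i in range(1, n):           # x stays zero-mean, unit-variance Gaussian
    x[i] = a * x[i - 1] + np.sqrt(1.0 - a**2) * w[i]

k = 5
r_hat = np.mean(x[:-k] * x[k:]) / np.var(x)
print(r_hat, np.exp(-k / L))    # empirical vs. target exponential correlation
```

In two dimensions the same exponential target is usually reached by filtering or by conditional growing rules of the kind the paper describes; the 1-D recursion just makes the amplitude/correlation hypothesis concrete.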
Total focusing method with correlation processing of antenna array signals
Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.
2018-03-01
The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors with and without correlation processing are presented in the article. The software ‘IDealSystem3D’ by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and the sampling frequency.
Correlation of energy balance method to dynamic pipe rupture analysis
International Nuclear Information System (INIS)
Kuo, H.H.; Durkee, M.
1983-01-01
When using an energy balance approach in the design of pipe rupture restraints for nuclear power plants, the NRC specifies in its Standard Review Plan 3.6.2 that the input energy to the system must be multiplied by a factor of 1.1 unless a lower value can be justified. Since the energy balance method is already quite conservative, an across-the-board use of 1.1 to amplify the energy input appears unnecessary. The paper's purpose is to show that this 'correlation factor' could be substantially less than unity if certain design parameters are met. In this paper, results of nonlinear dynamic analyses were compared to the results of the corresponding analyses based on the energy balance method, which assumes constant blowdown forces and rigid plastic material properties. The appropriate correlation factors required to match the energy balance results with the dynamic analysis results were correlated to design parameters such as restraint location from the break, yield strength of the energy absorbing component, and the restraint gap. It is shown that the correlation factor is related to a single nondimensional design parameter and can be limited to a value below unity if appropriate design parameters are chosen. It is also shown that the deformation of the restraints can be related to dimensionless system parameters. This, therefore, allows the maximum restraint deformation to be evaluated directly for design purposes. (orig.)
Neurocognitive Correlates of Young Drivers' Performance in a Driving Simulator.
Guinosso, Stephanie A; Johnson, Sara B; Schultheis, Maria T; Graefe, Anna C; Bishai, David M
2016-04-01
Differences in neurocognitive functioning may contribute to driving performance among young drivers. However, few studies have examined this relation. This pilot study investigated whether common neurocognitive measures were associated with driving performance among young drivers in a driving simulator. Young drivers (mean age 19.8 years, standard deviation [SD] = 1.9; N = 74) participated in a battery of neurocognitive assessments measuring general intellectual capacity (Full-Scale Intelligence Quotient, FSIQ) and executive functioning, including the Stroop Color-Word Test (cognitive inhibition), Wisconsin Card Sort Test-64 (cognitive flexibility), and Attention Network Task (alerting, orienting, and executive attention). Participants then drove in a simulated vehicle under two conditions: a baseline and a driving challenge. During the driving challenge, participants completed a verbal working memory task to increase demand on executive attention. Multiple regression models were used to evaluate the relations between the neurocognitive measures and driving performance under the two conditions. FSIQ, cognitive inhibition, and alerting were associated with better driving performance at baseline. FSIQ and cognitive inhibition were also associated with better driving performance during the verbal challenge. Measures of cognitive flexibility, orienting, and conflict executive control were not associated with driving performance under either condition. FSIQ and, to some extent, measures of executive function are associated with driving performance in a driving simulator. Further research is needed to determine whether executive function is associated with more advanced driving performance under conditions that demand greater cognitive load. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Estimation of velocity vector angles using the directional cross-correlation method
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt
2006-01-01
A method for determining both velocity magnitude and angle in any direction is suggested. The method uses focusing along the velocity direction and cross-correlation for finding the correct velocity magnitude. The angle is found from beamforming directional signals in a number of directions and then selecting the angle with the highest normalized correlation between directional signals. The approach is investigated using Field II simulations and data from the experimental ultrasound scanner RASMUS and a circulating flow rig with a parabolic flow having a peak velocity of 0.3 m/s. A 7 MHz linear array transducer is used with a normal transmission of a focused ultrasound field. In the simulations the relative standard deviation of the velocity magnitude is between 0.7% and 7.7% for flow angles between 45 deg and 90 deg. The study showed that the angle can be estimated by directional beamforming...
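The magnitude part of the method, cross-correlating directional signals from successive emissions to find the displacement along the beamformed direction, can be sketched as follows. The signal, sample spacing and pulse repetition frequency below are illustrative assumptions, not the RASMUS settings:

```python
import numpy as np

rng = np.random.default_rng(6)

dx = 50e-6                    # directional-signal sample spacing [m] (assumed)
t_prf = 1.0 / 5e3             # pulse repetition interval [s] (assumed)

g1 = rng.normal(size=512)     # speckle-like directional signal, emission 1
shift = 6                     # true inter-emission displacement in samples
g2 = np.roll(g1, shift)       # emission 2: shifted copy along the flow direction

xc = np.correlate(g2 - g2.mean(), g1 - g1.mean(), mode="full")
lag = int(np.argmax(xc)) - (g1.size - 1)
velocity = lag * dx / t_prf
print(lag, velocity)          # a 6-sample shift corresponds to 1.5 m/s here
```

The angle estimation step of the paper repeats this correlation for directional signals beamformed along many candidate angles and keeps the angle with the highest normalized correlation peak.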
Feasibility of the correlation curves method in calorimeters of different types
Grushevskaya, E. A.; Lebedev, I. A.; Fedosimova, A. I.
2014-01-01
The simulation of the development of cascade processes in calorimeters of different types, for the implementation of energy measurement by the correlation curves method, is carried out. A heterogeneous calorimeter has significant transient effects, associated with the difference of the critical energy in the absorber and the detector. The best option is a mixed calorimeter, which has a target block, leading to the rapid development of the cascade, and a homogeneous measuring unit. Uncertainties of e...
International Nuclear Information System (INIS)
Fiebig, H. Rudolf
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach
Generating Correlated Gamma Sequences for Sea-Clutter Simulation
2012-03-01
generation of correlated Gamma random fields via SIRP theory is examined in [Conte et al. 1991, Armstrong & Griffiths 1991]. In these papers, the Gamma ... 〈z[n]z[n+k]〉 = 〈|x|^2〉^2 + |〈x[n]x∗[n+k]〉|^2. (4) Because 〈|x|^2〉^2 = z̄^2 and |〈x[n]x∗[n+k]〉|^2 ≥ 0, this results in 〈z[n]z[n+k]〉 ≥ z̄^2 if the realisation of z[n] is ... linear mapping. In a practical situation, a process with a given auto-covariance function would be specified. It is shown that by using a ...
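The moment relation in the fragment's Eq. (4) holds when z = |x|^2 for a circularly symmetric complex Gaussian x, which is the SIRP-style route to a correlated sequence with a Gamma (here exponential) marginal. The following is our own numerical check of that relation under those assumptions, with an arbitrary AR(1) correlation, not the report's clutter generator:

```python
import numpy as np

rng = np.random.default_rng(7)

a = 0.9                                      # AR(1) coefficient (illustrative)
n = 200_000
w = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)

x = np.empty(n, dtype=complex)               # correlated circular complex Gaussian
x[0] = w[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + np.sqrt(1.0 - a**2) * w[i]

z = np.abs(x) ** 2                           # exponential (Gamma) marginal, mean 1
k = 3
lhs = np.mean(z[:-k] * z[k:])                # <z[n] z[n+k]>
rhs = np.mean(z) ** 2 + np.abs(np.mean(x[:-k] * np.conj(x[k:]))) ** 2
print(lhs, rhs)                              # the two sides should nearly agree
```

Because the second term on the right is non-negative, the intensity correlation can never drop below the squared mean, which is the constraint the fragment is deriving.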
Multiscale correlations in highly resolved Large Eddy Simulations
Biferale, Luca; Buzzicotti, Michele; Linkmann, Moritz
2017-11-01
Understanding multiscale turbulent statistics is one of the key challenges for many modern applied and fundamental problems in fluid dynamics. One of the main obstacles is the existence of anomalously strong non Gaussian fluctuations, which become more and more important with increasing Reynolds number. In order to assess the performance of LES models in reproducing these extreme events with reasonable accuracy, it is helpful to further understand the statistical properties of the coupling between the resolved and the subgrid scales. We present analytical and numerical results focussing on the multiscale correlations between the subgrid stress and the resolved velocity field obtained both from LES and filtered DNS data. Furthermore, a comparison is carried out between LES and DNS results concerning the scaling behaviour of higher-order structure functions using both Smagorinsky or self-similar Fourier sub-grid models. ERC AdG Grant No 339032 NewTURB.
Numerical methods in simulation of resistance welding
DEFF Research Database (Denmark)
Nielsen, Chris Valentin; Martins, Paulo A.F.; Zhang, Wenqi
2015-01-01
Finite element simulation of resistance welding requires coupling between mechanical, thermal and electrical models. This paper presents the numerical models and their couplings that are utilized in the computer program SORPAS. A mechanical model based on the irreducible flow formulation is utilized... From a resistance welding point of view, the most essential coupling between the above mentioned models is the heat generation by electrical current due to Joule heating. The interaction between multiple objects is another critical feature of the numerical simulation of resistance welding because it influences... the contact area and the distribution of contact pressure. The numerical simulation of resistance welding is illustrated by a spot welding example that includes subsequent tensile shear testing...
Virtual Crowds Methods, Simulation, and Control
Pelechano, Nuria; Allbeck, Jan
2008-01-01
There are many applications of computer animation and simulation where it is necessary to model virtual crowds of autonomous agents. Some of these applications include site planning, education, entertainment, training, and human factors analysis for building evacuation. Other applications include simulations of scenarios where masses of people gather, flow, and disperse, such as transportation centers, sporting events, and concerts. Most crowd simulations include only basic locomotive behaviors possibly coupled with a few stochastic actions. Our goal in this survey is to establish a baseline o
Petascale Many Body Methods for Complex Correlated Systems
Pruschke, Thomas
2012-02-01
Correlated systems constitute an important class of materials in modern condensed matter physics. Correlation among electrons are at the heart of all ordering phenomena and many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educate the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make then working tools for the growing community.
Nuclear material enrichment identification method based on cross-correlation and high order spectra
International Nuclear Information System (INIS)
Yang Fan; Wei Biao; Feng Peng; Mi Deling; Ren Yong
2013-01-01
In order to enhance the sensitivity of the nuclear material identification system (NMIS) to changes in nuclear material enrichment, the principle of high-order statistical features is introduced and applied to the traditional NMIS. We present a new enrichment identification method based on cross-correlation and a high-order spectrum algorithm. By applying the identification method to NMIS, 3D graphs carrying the nuclear material character are obtained and can be used as new signatures to identify the enrichment of nuclear materials. The simulation result shows that the identification method can suppress background noise and electronic system noise, and improve the sensitivity to enrichment change to exponential order with no modification of the system structure. (authors)
Hardware in the loop simulation of arbitrary magnitude shaped correlated radar clutter
CSIR Research Space (South Africa)
Strydom, JJ
2014-10-01
This paper describes a simple process for the generation of arbitrary probability distributions of complex data with correlation from sample to sample, optimized for hardware-in-the-loop radar environment simulation. Measured radar clutter is used...
An improved method for estimating the frequency correlation function
Chelli, Ali; Pätzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
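The plain frequency averaging step can be sketched for a simple two-path transfer function. The gains, delays and band below are arbitrary illustration values, and the kernel refinement proposed in the paper is not reproduced, only the baseline frequency-averaged estimate:

```python
import numpy as np

gains = np.array([1.0, 0.6])                 # path gains (illustrative)
delays = np.array([0.0, 1e-6])               # path delays in seconds (illustrative)
f = np.arange(0.0, 20e6, 10e3)               # band-limited frequency grid

# Two-path transfer function H(f) = sum_i g_i * exp(-j 2 pi f tau_i)
H = (gains * np.exp(-2j * np.pi * f[:, None] * delays)).sum(axis=1)

def fcf(H, m):
    """Frequency-averaged correlation at a lag of m frequency bins."""
    if m == 0:
        return np.mean(np.abs(H) ** 2)
    return np.mean(H[:-m] * np.conj(H[m:]))

# At zero lag the ATs give g1^2 + g2^2; over this band the CTs average out
# almost exactly, so the estimate matches the AT sum.
print(fcf(H, 0), (gains**2).sum())
```

At nonzero lags the finite-band average no longer cancels the CTs exactly, which is the residual error the paper's kernel function is designed to suppress.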
Methods for converging correlation energies within the dielectric matrix formalism
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
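The "traditional complete basis set extrapolation" referred to here is usually the two-point inverse-cube form E(X) = E_CBS + A·X⁻³ applied to correlation energies. A minimal sketch; the energies below are made-up illustrative values, not data from the paper:

```python
# Two-point complete-basis-set (CBS) extrapolation of correlation
# energies, using the standard inverse-cube form E(X) = E_CBS + A/X**3.
def cbs_extrapolate(e_x, x, e_y, y):
    """Solve for E_CBS from energies at cardinal numbers x and y."""
    a = (e_x - e_y) / (x**-3 - y**-3)
    return e_x - a * x**-3

# Illustrative (invented) correlation energies at triple/quadruple zeta:
e_cbs = cbs_extrapolate(-0.350, 3, -0.365, 4)   # Hartree, hypothetical
```

The extrapolated value lies below both finite-basis energies, reflecting the slow X⁻³ approach of correlation energies to the basis-set limit.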
Determination of velocity vector angles using the directional cross-correlation method
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt
2005-01-01
A method for determining both velocity magnitude and angle in any direction is suggested. The method uses focusing along the velocity direction and cross-correlation for finding the correct velocity magnitude. The angle is found by beamforming directional signals in a number of directions and then selecting the angle with the highest normalized correlation between directional signals. The approach is investigated using Field II simulations and data from the experimental ultrasound scanner RASMUS, with a parabolic flow having a peak velocity of 0.3 m/s. A 7 MHz linear array transducer is used. A parameter of the method is the time (k_tprf) between signals to correlate, and a proper choice varies with flow angle and flow velocity. One performance example is given with a fixed value of k_tprf for all flow angles. The angle estimation on measured data for flow at 60° to 90° yields a probability of valid estimates between 68% and 98%.
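The core cross-correlation step can be sketched as follows: given a signal beamformed along a candidate direction at two pulse emissions, locate the cross-correlation peak and convert the spatial shift into a velocity. The signal, sampling, and timing values below are illustrative assumptions, not the scanner's parameters:

```python
import numpy as np

def velocity_from_directional_xcorr(s1, s2, dr, t_prf):
    """Velocity estimate from two directional (beamformed along the
    flow) signals one pulse-repetition interval apart: the lag that
    maximizes the cross-correlation, converted to a spatial shift."""
    n = len(s1)
    lags = np.arange(-n + 1, n)
    xc = np.correlate(s2, s1, mode="full")
    return lags[np.argmax(xc)] * dr / t_prf

rng = np.random.default_rng(1)
dr = 1e-5                      # 10 um spatial sampling along the direction
t_prf = 1e-4                   # 100 us pulse-repetition interval
s1 = rng.standard_normal(300)  # synthetic directional signal
s2 = np.roll(s1, 30)           # true shift: 30 samples = 0.3 mm
v = velocity_from_directional_xcorr(s1, s2, dr, t_prf)
```

Repeating this over a fan of beamforming directions and picking the direction with the highest normalized correlation gives the angle estimate described in the abstract.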
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices in terms of the Pearson correlation, which determine the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used for validating both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed. It is shown that the Pearson correlation coefficient is an efficient method for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
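A sensitivity screen of this kind amounts to correlating each design variable with an objective over a set of design samples. A toy sketch with hypothetical suspension variables; the names, ranges, and the synthetic objective are assumptions for illustration only:

```python
import numpy as np

def pearson_sensitivity(X, y):
    """Rank design variables by |Pearson r| against an objective."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return r, np.argsort(-np.abs(r))

# Hypothetical suspension design samples (names and ranges invented):
rng = np.random.default_rng(0)
k_spring = rng.uniform(20e3, 80e3, 200)   # spring stiffness, N/m
c_damp = rng.uniform(1e3, 4e3, 200)       # damping coefficient, N*s/m
# Synthetic "sprung mass acceleration" objective dominated by stiffness:
accel = 1e-4 * k_spring - 5e-4 * c_damp + 0.5 * rng.standard_normal(200)

r, order = pearson_sensitivity(np.column_stack([k_spring, c_damp]), accel)
```

With this construction the stiffness variable shows a strong positive correlation and the damping variable a weaker negative one, so the screen ranks stiffness first, mirroring how the paper ranks suspension members by correlation strength.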
Research methods of simulate digital compensators and autonomous control systems
Directory of Open Access Journals (Sweden)
V. S. Kudryashov
2016-01-01
A feature of the present stage of production development is the need to control and regulate a large number of process parameters that mutually influence one another; with single-loop control systems this significantly degrades the quality of the transient response, leading to considerable losses of raw materials and energy and to reduced product quality. Using an autonomous digital control system eliminates the coupling between technological parameters, gives the system the desired dynamic and static properties, and improves the quality of regulation. However, the complexity of configuring and implementing (modeling) the compensators of autonomous systems of this type, associated with the need to perform a significant amount of complex analytic transformations, considerably limits the scope of their application. In this regard, an approach based on decomposition is proposed for the calculation and simulation (realization) methods, which consists in representing the elements of the autonomous part of the digital control system as series-parallel connections. The theoretical study is carried out in a general form for systems of any dimension. The results of computational experiments obtained during the simulation of four autonomous control systems are presented, together with a comparative analysis and conclusions on the effectiveness of each of the methods. The results obtained can be used in the development of multivariable process control systems.
Collaborative simulation method with spatiotemporal synchronization process control
Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian
2016-10-01
When designing a complex mechatronic system, such as a high-speed train, it is relatively difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can be used to simulate the subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high-speed train design and development processes, demonstrating that it can be applied to a wide range of engineering systems design and simulation with improved efficiency and effectiveness.
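The idea of temporally synchronized, interface-based coupling can be illustrated with a toy fixed-macro-step loop: two subsystem solvers advance in lockstep and exchange interface variables at each synchronization point. The two first-order subsystems and the step sizes below are illustrative assumptions, not the paper's coupler:

```python
# Minimal fixed-macro-step co-simulation sketch: each "solver" advances
# its own state one macro step using the other's last published output.
def subsystem_a(x, u, dt):
    return x + dt * (-x + u)          # dx/dt = -x + u (explicit Euler)

def subsystem_b(y, u, dt):
    return y + dt * (-2.0 * y + u)    # dy/dt = -2y + u (explicit Euler)

def cosimulate(t_end, macro_dt):
    x, y, t = 1.0, 0.0, 0.0
    while t < t_end - 1e-12:
        # Temporal synchronization: both advance exactly one macro step.
        # Spatial coupling: each sees the other's previous interface value.
        x_new = subsystem_a(x, y, macro_dt)
        y_new = subsystem_b(y, x, macro_dt)
        x, y = x_new, y_new
        t += macro_dt
    return x, y

x, y = cosimulate(5.0, 0.01)
```

With a small macro step this staggered exchange tracks the monolithic solution of the coupled pair closely; enlarging the macro step is exactly where the desynchronization problems the paper addresses begin to appear.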
Simulation of a directed random-walk model: the effect of pseudo-random-number correlations
Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.
1996-01-01
We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.
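A classic example of the kind of pseudo-random-number correlation that biases such simulations is the RANDU generator, whose successive triples satisfy an exact linear relation, x[k+2] = 6·x[k+1] − 9·x[k] (mod 2³¹). The check below contrasts it with a modern generator:

```python
import numpy as np

def randu(n, seed=1):
    """The infamous RANDU generator: x -> 65539*x mod 2**31."""
    out, x = np.empty(n), seed
    for i in range(n):
        x = (65539 * x) % 2**31
        out[i] = x / 2**31
    return out

u = randu(50000)
# For RANDU this three-term combination is an exact integer, so its
# fractional part is identically zero -- a perfect hidden correlation.
residue = (u[2:] - 6 * u[1:-1] + 9 * u[:-2]) % 1.0

v = np.random.default_rng(0).random(50000)        # modern PCG64 stream
residue_good = (v[2:] - 6 * v[1:-1] + 9 * v[:-2]) % 1.0
```

For a good generator the same combination is essentially uniform on [0, 1); it is structure of this kind, fed into cluster-building decisions, that produces the systematic deviations the paper analyzes.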
Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
.... The following methods are reviewed: matrix operations, ordinary and partial differential system of equations, Lagrangian operations, Fourier transforms, Taylor Series, Finite Difference Methods, implicit and explicit finite element...
Correlations between technical skills and behavioral skills in simulated neonatal resuscitations.
Sawyer, T; Leonard, D; Sierocka-Castaneda, A; Chan, D; Thompson, M
2014-10-01
Neonatal resuscitation requires both technical and behavioral skills. Key behavioral skills in neonatal resuscitation have been identified by the Neonatal Resuscitation Program. Correlations and interactions between technical skills and behavioral skills in neonatal resuscitation were investigated. Behavioral skills were evaluated via blinded video review of 45 simulated neonatal resuscitations using a validated assessment tool. These were statistically correlated with previously obtained technical skill performance data. Technical skills and behavioral skills were strongly correlated (ρ=0.48; P=0.001). The strongest correlations were seen in distribution of workload (ρ=0.60; P=0.01), utilization of information (ρ=0.55; P=0.03) and utilization of resources (ρ=0.61; P=0.01). Teams with superior behavioral skills also demonstrated superior technical skills, and vice versa. Technical and behavioral skills were highly correlated during simulated neonatal resuscitations. Individual behavioral skill correlations are likely dependent on both intrinsic and extrinsic factors.
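Rank correlations of the kind reported here can be reproduced in form with a few lines. The paired team scores below are hypothetical stand-ins (the study's raw data are not given in the abstract), and the rank-based computation assumes no ties:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (No tie handling; ties would require midranks.)"""
    rx = np.argsort(np.argsort(x))   # 0-based ranks
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical paired team scores (technical, behavioral), 0-100 scale:
tech = np.array([62, 75, 81, 55, 90, 68, 77, 83, 59, 71])
behav = np.array([58, 72, 85, 50, 88, 65, 70, 86, 61, 69])
rho = spearman_rho(tech, behav)
```

For tie-free data this equals the textbook formula 1 − 6Σd²/(n(n²−1)); with the invented scores above, Σd² = 4 and n = 10, giving ρ ≈ 0.976.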
[Lack of correlation between performances in a simulator and in reality].
Konge, Lars; Bitsch, Mikael
2010-12-13
Simulation-based training provides obvious benefits for patients and for doctors in education. However, virtual reality simulators are often expensive, and the evidence for their efficacy is poor, particularly as a result of studies with weak methodology and few test participants. In simulation-based medical training and evaluation programmes, the question is always one of transfer to the real clinical world. To illustrate this problem, a study was conducted comparing test subjects' performance on a bowling simulator with their performance in a real bowling alley. Twenty-five test subjects played two rounds of bowling on a Nintendo Wii and, 25 days later, in a real bowling alley. Correlations of the scores in the first and second rounds (test-retest reliability) and of the scores on the simulator and in reality (criterion validity) were studied, and any difference between female and male performance was tested for. The intraclass correlation coefficient was 0.76, i.e. the simulator measured participant performance fairly accurately. In contrast, there was no correlation at all between participants' real bowling abilities and their scores on the simulator (Pearson's r = 0.06). There was no significant difference between female and male abilities. Simulation-based testing and training must be based on evidence, and studies need to include an adequate number of subjects. Bowling competence should not be assessed from Nintendo Wii measurements. Simulated training and evaluation programmes should be validated before introduction, to ensure consistency with the real world.
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
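The method-of-moments empirical prior mentioned in the abstract can be sketched for the conjugate Poisson-gamma case. The counts and exposures below are invented, and this simple moment match ignores the homogenization factors and the Poisson sampling-variance correction of the full method:

```python
import numpy as np

def gamma_mom_prior(counts, exposure):
    """Empirical gamma prior for Poisson rates by method of moments:
    match the mean and variance of the observed raw rates.
    For lambda ~ Gamma(alpha, beta) (rate parametrization):
    E = alpha/beta, Var = alpha/beta**2  ->  beta = E/Var, alpha = E*beta."""
    rates = counts / exposure
    m, v = rates.mean(), rates.var(ddof=1)
    beta = m / v
    return m * beta, beta

def posterior_rate(count, exposure, alpha, beta):
    """Conjugate update: posterior mean of a Poisson rate."""
    return (alpha + count) / (beta + exposure)

counts = np.array([3, 7, 2, 9, 4, 6])                # invented event counts
exposure = np.array([10.0, 12.0, 8.0, 15.0, 9.0, 11.0])  # invented exposures
alpha, beta = gamma_mom_prior(counts, exposure)
shrunk = posterior_rate(counts, exposure, alpha, beta)
```

Each posterior mean is a convex combination of the raw rate and the prior mean, which is the pooling-toward-the-group behavior that makes borrowing strength across correlated event rates attractive.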
A three-dimensional correlation method for registration of medical images in radiology
International Nuclear Information System (INIS)
Georgiou, Michalakis; Sfakianakis, George N.; Nagel, Joachim H.
1998-01-01
The availability of methods to register multi-modality images in order to 'fuse' them and correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed, but they remain limited to specific clinical applications. Assuming a rigid-body transformation, two images can be registered if their differences are calculated in terms of translation, rotation and scaling. This paper describes the development and testing of a new correlation-based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images are made translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, a spherical coordinate transformation is performed and the three-dimensional rotation is computed using a novel approach referred to as 'polar shells'. The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer-generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors)
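The translation-invariance step is easy to demonstrate in 2D: a (circular) shift changes only the phase of the spectrum, so the Fourier magnitudes match, and phase correlation then recovers the shift itself. A sketch on synthetic data; the 'polar shells' rotation step is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -9), axis=(0, 1))   # known circular shift

# Translation invariance: shifting changes only the spectral phase,
# so the Fourier magnitude spectra are identical.
mag = np.abs(np.fft.fft2(img))
mag_shifted = np.abs(np.fft.fft2(shifted))

# Phase correlation recovers the shift from the residual phase ramp.
cross = np.fft.fft2(shifted) * np.conj(np.fft.fft2(img))
impulse = np.abs(np.fft.ifft2(cross / np.abs(cross)))
peak = np.unravel_index(np.argmax(impulse), impulse.shape)
# peak = (5, 55): the second coordinate is -9 modulo 64
```

In the paper's 3D setting the same invariance lets rotation be estimated on the magnitude spectra first, with translation recovered afterwards.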
Interactive methods for exploring particle simulation data
Energy Technology Data Exchange (ETDEWEB)
Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.
2004-05-01
In this work, we visualize high-dimensional particle simulation data using a suite of scatter-plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented-disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to relate new visual representations of the simulation data to traditional, well-understood visualizations. This approach supports interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.
Hospital Registration Process Reengineering Using Simulation Method
Directory of Open Access Journals (Sweden)
Qiang Su
2010-01-01
With increasing competition, many healthcare organizations have undergone tremendous reform in the last decade, aiming to increase efficiency, decrease waste, and reshape the way that care is delivered. This study focuses on improving the operational efficiency of a hospital's registration process. The factors related to operational efficiency, including the service process, queue strategy, and queue parameters, were explored systematically and illustrated with a case study. Guided by the principle of business process reengineering (BPR), a simulation approach was employed for process redesign and performance optimization. As a result, the queue strategy was changed from multiple queues with multiple servers to a single queue with multiple servers and a preparation queue. Furthermore, through a series of simulation experiments, the length of the preparation queue and the corresponding registration process efficiency were quantitatively evaluated and optimized.
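The effect of switching from one-queue-per-server to a single shared queue can be illustrated with a toy M/M/c comparison. The arrival and service rates and the random-routing rule below are illustrative assumptions, not the hospital's actual parameters:

```python
import random

def mean_wait_shared(arrivals, services, c):
    """FCFS single queue feeding c servers: each patient is served
    by the earliest-free server."""
    free, total = [0.0] * c, 0.0
    for a, s in zip(arrivals, services):
        free.sort()
        start = max(a, free[0])
        total += start - a
        free[0] = start + s
    return total / len(arrivals)

def mean_wait_split(arrivals, services, c, rng):
    """One queue per server; each patient joins a random queue."""
    free, total = [0.0] * c, 0.0
    for a, s in zip(arrivals, services):
        k = rng.randrange(c)
        start = max(a, free[k])
        total += start - a
        free[k] = start + s
    return total / len(arrivals)

rng = random.Random(0)
lam, mu, c, n = 1.8, 1.0, 2, 20000      # ~90% utilization (illustrative)
t, arrivals = 0.0, []
for _ in range(n):
    t += rng.expovariate(lam)
    arrivals.append(t)
services = [rng.expovariate(mu) for _ in range(n)]
w_shared = mean_wait_shared(arrivals, services, c)
w_split = mean_wait_split(arrivals, services, c, rng)   # same traffic
```

Queueing theory predicts mean waits of roughly 4.3 (shared) versus 9.0 (split) time units at this load, and the common-random-numbers comparison reliably reproduces that ordering, which is the intuition behind the redesign in the case study.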
Numerical simulation methods for electron and ion optics
International Nuclear Information System (INIS)
Munro, Eric
2011-01-01
This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
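For context, a positive definite correlation matrix with unit diagonal is easy to generate as a rescaled Gram matrix. This is not Waller's fungible construction (which fixes the regression weights and R-squared), only an illustration of the PD, unit-diagonal class of matrices the algorithm samples from:

```python
import numpy as np

def random_pd_correlation(p, seed=0):
    """Random positive-definite correlation matrix: the Gram matrix of
    random vectors, rescaled to unit diagonal. (Not the fungible-matrix
    algorithm itself, which additionally constrains the quadratic form.)"""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((p, p + 5))   # more columns than rows -> PD a.s.
    G = A @ A.T
    d = np.sqrt(np.diag(G))
    return G / np.outer(d, d)

R = random_pd_correlation(6)
eigs = np.linalg.eigvalsh(R)
```

The fungible algorithm adds the constraint that a fixed weight vector reproduces a fixed R-squared, and further controls the smallest eigenvalue so PD, PSD, or indefinite matrices can be produced on demand.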
Benchmarking HRA methods against different NPP simulator data
International Nuclear Information System (INIS)
Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta
2008-01-01
The paper presents both international and Bulgarian experience in assessing HRA methods and their underlying models, and approaches for their validation and verification by benchmarking HRA methods against data from different NPP simulators. The organization, status, methodology and outlook of the studies are described
Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...
Bates, Nathaniel A.; Nesbitt, Rebecca J.; Shearn, Jason T.; Myer, Gregory D.; Hewett, Timothy E.
2017-01-01
Background: Tibial slope angle is a nonmodifiable risk factor for anterior cruciate ligament (ACL) injury. However, the mechanical role of varying tibial slopes during athletic tasks has yet to be clinically quantified. Purpose: To examine the influence of posterior tibial slope on knee joint loading during controlled, in vitro simulation of the knee joint articulations during athletic tasks. Study Design: Descriptive laboratory study. Methods: A 6 degree of freedom robotic manipulator positionally maneuvered cadaveric knee joints from 12 unique specimens with varying tibial slopes (range, −7.7° to 7.7°) through drop vertical jump and sidestep cutting tasks that were derived from 3-dimensional in vivo motion recordings. Internal knee joint torques and forces were recorded throughout simulation and were linearly correlated with tibial slope. Results: The mean (±SD) posterior tibial slope angle was 2.2° ± 4.3° in the lateral compartment and 2.3° ± 3.3° in the medial compartment. For simulated drop vertical jumps, lateral compartment tibial slope angle expressed moderate, direct correlations with peak internally generated knee adduction (r = 0.60–0.65), flexion (r = 0.64–0.66), lateral (r = 0.57–0.69), and external rotation torques (r = 0.47–0.72) as well as inverse correlations with peak abduction (r = −0.42 to −0.61) and internal rotation torques (r = −0.39 to −0.79). Only frontal plane torques were correlated during sidestep cutting simulations. For simulated drop vertical jumps, medial compartment tibial slope angle expressed moderate, direct correlations with peak internally generated knee flexion torque (r = 0.64–0.69) and lateral knee force (r = 0.55–0.74) as well as inverse correlations with peak external rotation torque (r = −0.34 to −0.67) and medial knee force (r = −0.58 to −0.59). These moderate correlations were also present during simulated sidestep cutting. Conclusion: The investigation supported the theory that increased posterior
Steam generator tube rupture simulation using extended finite element method
Energy Technology Data Exchange (ETDEWEB)
Mohanty, Subhasish, E-mail: smohanty@anl.gov; Majumdar, Saurin; Natesan, Ken
2016-08-15
Highlights: • Extended finite element method used for modeling the steam generator tube rupture. • Crack propagation is modeled in an arbitrary solution dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident condition. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of extended finite element method capability of commercially available ABAQUS software, to model SG tubes with preexisting flaws and to estimate their rupture pressures. For the purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between extended finite element model results and experimental results.
Particle-transport simulation with the Monte Carlo method
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.
1975-01-01
Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)
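The flavor of the sampling prescriptions covered can be shown with the simplest transport problem: uncollided transmission through a homogeneous slab, where the Monte Carlo estimate can be checked against the exponential attenuation law. The cross-section and thickness are arbitrary illustrative values:

```python
import math
import random

def transmit_fraction(sigma_t, thickness, n, seed=1):
    """Analog Monte Carlo for uncollided transmission through a slab:
    sample exponential free paths, count particles crossing uncollided."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(rng.random()) / sigma_t > thickness)
    return hits / n

est = transmit_fraction(sigma_t=1.0, thickness=2.0, n=200000)
exact = math.exp(-2.0)   # analytical uncollided transmission, exp(-sigma*d)
```

Real neutron and photon transport adds scattering physics, weights, and variance-reduction techniques on top of exactly this kind of free-path sampling.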
Real-time hybrid simulation using the convolution integral method
International Nuclear Information System (INIS)
Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A
2011-01-01
This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results.
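The essence of the CI idea is that the numerical substructure is characterized once by its impulse response, and its interface response is then obtained by discrete convolution rather than by step-by-step integration during the test. A minimal single-degree-of-freedom sketch; the 1 Hz, 5%-damped oscillator is an illustrative assumption, not the test structure:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 5.0, dt)
wn, zeta = 2 * np.pi, 0.05                # illustrative 1 Hz, 5%-damped SDOF
wd = wn * np.sqrt(1 - zeta**2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd   # unit-impulse response

f = np.zeros_like(t)
f[0] = 1.0 / dt                           # discrete unit impulse force
# Convolution-integral response: y[n] = dt * sum_k f[k] * h[n-k]
y = dt * np.convolve(f, h)[: len(t)]
```

For a unit impulse the convolution returns the impulse response itself, which makes the discretization easy to verify; an arbitrary measured interface force simply replaces `f` in the same expression.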
Simulation of tunneling construction methods of the Cisumdawu toll road
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for planning and analysis of a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation, as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation shows the duration of the project, derived from duration models of each work task based on literature review, machine productivity, and several assumptions. The results of the simulation also show the total cost of the project, modeled from construction and building unit-cost journals and from online websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, waste, and cost. The simulation concluded that the total cost of this operation is about Rp. 900,437,004,599 and the total duration of the tunneling operation is 653 days. The results of the simulation will be used as a recommendation to the contractor before implementation of the selected tunneling operation.
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
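For the two-category baseline case, the correlation method reduces to correlating each stimulus component with the observer's binary responses. A Monte Carlo sketch with an assumed linear-observer model; the weights and the internal-noise level are invented for illustration:

```python
import numpy as np

# Simulated observer: responds "1" when a weighted sum of the stimulus
# component perturbations plus internal noise exceeds zero.
rng = np.random.default_rng(2)
true_w = np.array([0.6, 0.3, 0.08, 0.02])        # hypothetical decision weights
n_trials = 20000
X = rng.standard_normal((n_trials, 4))           # component perturbations
noise = 0.3 * rng.standard_normal(n_trials)      # internal noise
resp = (X @ true_w + noise > 0).astype(float)

# Reverse correlation: point-biserial correlation of each component
# with the binary response is proportional to the observer's weight.
est = np.array([np.corrcoef(X[:, j], resp)[0, 1] for j in range(4)])
w_hat = est / est.sum()                          # normalized weight estimates
```

With Gaussian perturbations the trial-by-trial correlations are proportional to the true weights, so the normalized pattern converges on the observer's weighting profile as the trial count grows, which is the convergence property the simulations in the paper quantify.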
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
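The three sampling rules compared in the study are simple to state: momentary time sampling scores an interval from a single instant, partial-interval recording scores it if the behavior occurs at all, and whole-interval recording scores it only if the behavior fills the interval. A minimal sketch, assuming an illustrative i.i.d. per-second behavior stream rather than the event-duration model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
T, interval = 3600, 10          # 1-h observation in 1-s ticks, 10-s intervals

# Illustrative behavior stream: on/off each second with ~30% prevalence
state = rng.random(T) < 0.3

blocks = state.reshape(-1, interval)
mts = blocks[:, -1]             # momentary time sampling: last tick only
pir = blocks.any(axis=1)        # partial-interval: any occurrence
wir = blocks.all(axis=1)        # whole-interval: continuous occurrence

true_prev = state.mean()
errors = {name: s.mean() - true_prev
          for name, s in [("MTS", mts), ("PIR", pir), ("WIR", wir)]}
```

Even this toy stream reproduces the best-known qualitative result: partial-interval recording overestimates prevalence, whole-interval recording underestimates it, and momentary time sampling is approximately unbiased.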
Correlation of etho-social and psycho-social data from "Mars-500" interplanetary simulation
Tafforin, Carole; Vinokhodova, Alla; Chekalina, Angelina; Gushin, Vadim
2015-06-01
Studies of social groups under isolation and confinement for the needs of space psychology were mostly limited to questionnaires completed with batteries of subjective tests, and they needed to be correlated with video recordings for objective analyses in space ethology. The aim of the present study is to identify crewmembers' behavioral profiles for better understanding group dynamics during the 520-day isolation and confinement of the international crew (n=6) participating in the "Mars-500" interplanetary simulation. We propose to correlate data from the PSPA (Personal Self-Perception and Attitudes) computerized test, sociometric questionnaires and a color choices test (Luscher test) used to measure anxiety levels, with data from video analysis during group discussion (GD) and breakfast time (BT). All the procedures were implemented monthly (GD) or twice a month (BT). Firstly, we used descriptive statistics to display quantitative subjects' behavioral profiles, supported by a software-based solution: the Observer XT®. Secondly, we used Spearman's nonparametric correlation analysis. The results show that for each subject, the level of non-verbal behavior ("visual interactions", "object interactions", "body interaction", "personal actions", "facial expressions", and "collateral acts") is higher than the level of verbal behavior ("interpersonal communication in Russian", and "interpersonal communication in English"). From the video analyses, profiles of dynamics over the months differ between the crewmembers. From the correlative analyses, we found strong negative correlations between anxiety and interpersonal communications, and between the sociometric parameter "popularity in leisure environment" and anxiety level. We also found highly significant positive correlations between the sociometric parameter "popularity in working environment" and interpersonal communications and facial expressions; and between the sociometric parameter "popularity in leisure environment
Transforming han: a correlational method for psychology and religion.
Oh, Whachul
2015-06-01
Han is a destructive feeling in Korea. Although Korea has accomplished significant exterior growth, Korean society is still experiencing the dark aspects of han, as evidenced by its having the highest suicide rate in Asia. One reason for this may be the fragmentation between North and South Korea. If we can transform han, it can become constructive. I was challenged to think of possibilities for transforming han internally; this brings me to the correlational method through psychological and religious interpretation. This study aims to challenge and encourage the many han-ridden people in Korean society. Through a psychological and religious understanding of han, suffering people can positively transform their han. They can relate to han more subjectively, which means the han-ridden psyche has an innate sacred potential for transformation.
The frontal method in hydrodynamics simulations
Walters, R.A.
1980-01-01
The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: 1. elimination of equations with boundary conditions beforehand, 2. modification of the pivoting procedures to allow dynamic management of the equation size, and 3. storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.
Li, Ming-Hua; Zhu, Weishan; Zhao, Dong
2018-05-01
The gas is the dominant component of baryonic matter in most galaxy groups and clusters. The spatial offset of the gas centre from the halo centre can be an indicator of the dynamical state of the cluster. Knowledge of such offsets is important for estimating the uncertainties when using clusters as cosmological probes. In this paper, we study the centre offsets r_off between the gas and all the matter within halo systems in ΛCDM cosmological hydrodynamic simulations. We focus on two kinds of centre offsets: one is the three-dimensional PB offset between the gravitational potential minimum of the entire halo and the barycentre of the ICM, and the other is the two-dimensional PX offset between the potential minimum of the halo and the iterative centroid of the projected synthetic X-ray emission of the halo. Haloes at higher redshifts tend to have larger values of the rescaled offset r_off/r_200 and larger gas velocity dispersion σ_v^gas/σ_200. For both types of offsets, we find that the correlation between the rescaled centre offset r_off/r_200 and the rescaled 3D gas velocity dispersion σ_v^gas/σ_200 can be approximately described by a quadratic function, r_off/r_200 ∝ (σ_v^gas/σ_200 − k_2)². A Bayesian analysis with the MCMC method is employed to estimate the model parameters. The dependence of the correlation on redshift and the gas mass fraction is also investigated.
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
Natural tracer test simulation by stochastic particle tracking method
International Nuclear Information System (INIS)
Ackerer, P.; Mose, R.; Semra, K.
1990-01-01
Stochastic particle tracking methods are well adapted to 3D transport simulations, where the discretization requirements of other methods usually cannot be satisfied. They do, however, need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric and velocity fields. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are both treated as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
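The random-walk transport step the abstract describes has a standard form: each particle is advected by the local velocity and given a Brownian increment whose variance matches the dispersion coefficient. A minimal sketch with an assumed uniform velocity field and isotropic dispersion (illustrative values only, not the Twin Lake site parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

v = np.array([1.0, 0.0])        # assumed uniform velocity field (m/day)
D, dt = 0.1, 0.5                # dispersion coefficient (m^2/day), time step
n_steps, n_part = 200, 5000

# Random-walk method: advection plus a Brownian increment whose
# variance 2*D*dt reproduces Fickian dispersion
pos = np.zeros((n_part, 2))
for _ in range(n_steps):
    pos += v * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=pos.shape)

t = n_steps * dt                # elapsed time: 100 days
```

The particle cloud then has mean displacement v·t and variance 2·D·t per coordinate, matching the analytical advection-dispersion solution.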
Application of the maximum entropy method to dynamical fermion simulations
Clowser, Jonathan
This thesis presents results for spectral functions extracted from imaginary-time correlation functions obtained from Monte Carlo simulations using the Maximum Entropy Method (MEM). The advantages of this method are that (i) no a priori assumptions or parametrisations of the spectral function are needed, (ii) a unique solution exists, and (iii) the statistical significance of the resulting image can be quantitatively analysed. The Gross-Neveu model in d = 3 spacetime dimensions (GNM3) is a particularly interesting model to study with the MEM because at T = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances. Results for the elementary fermion, the Goldstone boson (pion), the sigma, the massive pseudoscalar meson and the symmetric-phase resonances are presented. UKQCD Nf = 2 dynamical QCD data are also studied with the MEM. Results are compared to those found from the quenched approximation, where the effects of quark loops in the QCD vacuum are neglected, to search for sea-quark effects in the extracted spectral functions. Information has been extracted from the difficult axial-spatial and scalar channels as well as the pseudoscalar, vector and axial-temporal channels. An estimate for the non-singlet scalar mass in the chiral limit is given, which is in agreement with the experimental value of M_a0 = 985 MeV.
Methods employed to speed up Cathare for simulation uses
International Nuclear Information System (INIS)
Agator, J.M.
1992-01-01
This paper describes the main methods used to speed up the French advanced thermal-hydraulic computer code CATHARE and build a fast version, called CATHARE-SIMU, adapted to real-time calculations and a simulation environment. Since CATHARE-SIMU, like CATHARE, uses a numerical scheme based on a fully implicit Newton iterative method, and therefore a variable time step, two ways have been explored to reduce the computing time: avoidance of short time steps, and hence minimization of the number of iterations per time step; and reduction of the computing time needed for each iteration. CATHARE-SIMU uses the same physical laws and correlations as CATHARE, with only some minor simplifications. This was considered the only way to be sure to maintain the level of physical relevance of CATHARE. Finally, it is indicated that the validation programme of CATHARE-SIMU includes a set of 33 transient calculations, referring either to CATHARE for two-phase transients, or to measurements on real plants for operational transients.
Factorization method for simulating QCD at finite density
International Nuclear Information System (INIS)
Nishimura, Jun
2003-01-01
We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density. (author)
Evaluation of full-scope simulator testing methods
Energy Technology Data Exchange (ETDEWEB)
Feher, M P; Moray, N; Senders, J W; Biron, K [Human Factors North Inc., Toronto, ON (Canada)
1995-03-01
This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
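A common way to impose the spring-like restraints mentioned above, while acknowledging that the external knowledge is noisy, is a flat-bottom harmonic: no penalty inside a tolerance band, a spring outside it. The function below is a generic sketch; the names and parameter values are illustrative, not the API of any particular MD package.

```python
import numpy as np

def flat_bottom_restraint(d, d0, width, k):
    """Energy and force for a flat-bottom harmonic restraint on a distance d.

    No penalty while |d - d0| < width (allowing for uncertainty in the
    external knowledge); a harmonic spring of stiffness k takes over
    outside that band. All quantities are in arbitrary illustrative units.
    """
    excess = np.abs(d - d0) - width
    active = excess > 0.0
    energy = np.where(active, 0.5 * k * excess**2, 0.0)
    # Force is -dE/dd: it pulls d back toward the tolerance band
    force = np.where(active, -k * excess * np.sign(d - d0), 0.0)
    return energy, force
```

For example, with d0 = 5, width = 1 and k = 2, a distance of 7 sits one unit outside the band, giving energy 0.5·k·1² = 1 and a restoring force of magnitude k pointing back toward d0.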
International Nuclear Information System (INIS)
Chaudhry, I.A.; Mirza, M.R.; Rashid, M.J.
2010-01-01
Innovations in software analysis and the various available programming facilities have enabled designers at various levels to perform the indispensable calculations for engine flows. Presently, the 3-D analysis approach is standard practice for simulating the various parameters involved in engine operation, with Fluent currently among the most widely used packages for CFD modelling. The present work involves CFD modelling of diesel fuel sprays at a specified angle with the cylinder axis. Fuel spray modelling includes sub-models for aerodynamic drag, droplet oscillation and distortion, turbulence effects, droplet breakup, evaporation, and droplet collision and coalescence. Data available from existing published work are used to model the fuel spray, and the subsequent simulation results are compared to experimental results to test the validity of the proposed models. (author)
International Nuclear Information System (INIS)
Zhang Weigang
2000-01-01
Based on the concept of correlative degree, a new method of high-order collective-flow measurement is constructed, with which azimuthal correlations, correlations of the final-state transverse momentum magnitude, and transverse correlations can be inspected separately. Using the new method, the contributions of the azimuthal correlations of the particle distribution and the correlations of the transverse momentum magnitude of final-state particles to high-order collective-flow correlations are analyzed with 4π experimental events for 1.2 A GeV Ar + BaI2 collisions at the Bevalac streamer chamber. Compared with the correlations of transverse momentum magnitude, the azimuthal correlations of the final-state particle distribution dominate the high-order collective-flow correlations in the experimental samples. The contributions of the correlations of transverse momentum magnitude of final-state particles not only enhance the strength of the high-order correlations of the particle group, but also provide important information for measuring the collectivity of collective flow within the more constrained region.
New method of fast simulation for a hadron calorimeter response
International Nuclear Information System (INIS)
Kul'chitskij, Yu.; Sutiak, J.; Tokar, S.; Zenis, T.
2003-01-01
In this work we present a new method for fast Monte Carlo simulation of a hadron calorimeter response. It is based on a three-dimensional parameterization of the hadronic shower obtained from ATLAS TILECAL test-beam data and GEANT simulations. A new approach to including the longitudinal fluctuations of the hadronic shower is described. The results of the fast simulation are in good agreement with the TILECAL experimental data.
Daylighting simulation: methods, algorithms, and resources
Energy Technology Data Exchange (ETDEWEB)
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but
Directory of Open Access Journals (Sweden)
Yan Li
2017-11-01
Due to the volatile and correlated nature of wind speed, a high share of wind power penetration poses challenges to power system production simulation. Existing power system probabilistic production simulation approaches fall short of considering the time-varying characteristics of wind power and load, as well as the correlation between wind speeds at the same time, which causes problems in planning and analysis for power systems with high wind power penetration. Based on the universal generating function (UGF), this paper proposes a novel probabilistic production simulation approach considering wind speed correlation. UGF is utilized to develop chronological models of wind power that characterize wind speed correlation, as well as chronological models of conventional generation sources and load. Supply and demand are matched chronologically to obtain not only generation schedules but also reliability indices, both at each simulation interval and over the whole period. The proposed approach has been tested on the improved IEEE-RTS 79 test system and is compared with the Monte Carlo approach and the sequence operation theory approach. The results verify the proposed approach, with the merits of computational simplicity and accuracy.
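The UGF machinery itself is compact: each unit's available capacity is a discrete distribution, and independent units combine by summing capacities and multiplying probabilities. A minimal sketch with made-up capacities and probabilities (not the paper's test-system data):

```python
from collections import defaultdict

def ugf_combine(u1, u2):
    """Combine two universal generating functions.

    Each UGF is a dict {capacity: probability}; the composition operator
    for independent generating units sums capacities and multiplies
    probabilities (the product of two UGF polynomials in z).
    """
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[g1 + g2] += p1 * p2
    return dict(out)

# Two hypothetical units at one simulation interval: a 50 MW thermal
# unit (95% available) and a wind block with three output states
unit = {50: 0.95, 0: 0.05}
wind = {30: 0.3, 15: 0.5, 0: 0.2}
system = ugf_combine(unit, wind)

# Loss-of-load probability against a 60 MW demand at this interval
lolp = sum(p for g, p in system.items() if g < 60)
```

Repeating `ugf_combine` over all units at each interval, with the chronological wind and load models supplying the per-interval distributions, yields the system capacity distribution from which interval reliability indices follow directly.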
A particle-based method for granular flow simulation
Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua
2012-01-01
We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, which is derived from a modified form of the Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. Viscosity force is also added to simulate the dynamic friction for the purpose of smoothing the velocity field and further maintaining the simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be well handled easily and naturally. In addition, a signed distance field is also employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be well simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
Research on Monte Carlo simulation method of industry CT system
International Nuclear Information System (INIS)
Li Junli; Zeng Zhi; Qui Rui; Wu Zhen; Li Chunyan
2010-01-01
There are a series of radiation physics problems in the design and production of industrial CT systems (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency, and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of them involve very low probabilities, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is introduced on the basis of auto-important sampling. Then, on the basis of PFPAIS, a dedicated ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to simulate the ICTS more exactly and effectively. Furthermore, the effects of various disturbances of the ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide research on the radiation physics problems in ICTS. (author)
A simple method for potential flow simulation of cascades
Indian Academy of Sciences (India)
vortex panel method to simulate potential flow in cascades is presented. The cascade ... The fluid loading on the blades, such as the normal force and pitching moment, may ... of such discrete infinite array singularities along the blade surface.
A particle-based method for granular flow simulation
Chang, Yuanzhang
2012-03-16
We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, which is derived from a modified form of the Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. Viscosity force is also added to simulate the dynamic friction for the purpose of smoothing the velocity field and further maintaining the simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be well handled easily and naturally. In addition, a signed distance field is also employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be well simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
Li, Xuxu; Li, Xinyang; wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
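The core of a correlation-based centroiding step is a search for the shift that optimizes a similarity function between a reference template and the measured spot. The sketch below uses the square difference function (SDF) with an exhaustive search over a small shift range; a three-step or orthogonal search would visit far fewer candidate shifts, which is where the reported cost savings come from. The Gaussian spot model and all sizes are illustrative assumptions.

```python
import numpy as np

def gaussian_spot(n, cx, cy, sigma=2.0):
    """Synthetic Gaussian spot on an n x n subaperture window."""
    y, x = np.mgrid[0:n, 0:n]
    return np.exp(-((x - cx)**2 + (y - cy)**2) / (2.0 * sigma**2))

n = 11
template = gaussian_spot(n, 5.0, 5.0)   # reference spot, centred
spot = gaussian_spot(n, 7.0, 4.0)       # measured spot, shifted (+2, -1)

# SDF similarity over candidate shifts (exhaustive search shown)
best, best_val = None, np.inf
for dx in range(-3, 4):
    for dy in range(-3, 4):
        shifted = np.roll(np.roll(template, dx, axis=1), dy, axis=0)
        val = np.sum((spot - shifted)**2)   # square difference function
        if val < best_val:
            best, best_val = (dx, dy), val
```

The minimizing shift recovers the spot displacement relative to the reference, from which the centroid follows.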
Advanced cluster methods for correlated-electron systems
Energy Technology Data Exchange (ETDEWEB)
Fischer, Andre
2015-04-27
In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies in the determination of the ground state properties of a 3/4 filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not established yet. The driving force of this transition may not only be phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4 filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult.
Comparing three methods for participatory simulation of hospital work systems
DEFF Research Database (Denmark)
Broberg, Ole; Andersen, Simone Nyholm
Summative Statement: This study compared three participatory simulation methods using different simulation objects: Low resolution table-top setup using Lego figures, full scale mock-ups, and blueprints using Lego figures. It was concluded the three objects by differences in fidelity and affordance...... scenarios using the objects. Results: Full scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint based simulation addressed...
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a random statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was modelled using the Monte Carlo method.
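The photon-canopy random process can be illustrated with a 1-D toy version: photons traverse a homogeneous turbid layer, free paths are exponential in optical depth, and each interaction either absorbs the photon or rescatters it up or down. This is only a sketch of the Monte Carlo idea, not the 3-D canopy model of the paper; the optical depth and single-scattering albedo are made-up values.

```python
import math
import random

def canopy_photon_fate(tau=1.0, albedo=0.5, n_photons=20000, seed=3):
    """1-D Monte Carlo estimate of layer reflectance and transmittance.

    Photons enter a homogeneous turbid layer of optical depth tau from
    the top; free paths are sampled from an exponential distribution,
    and each interaction scatters with probability `albedo` into a
    random up/down direction, otherwise the photon is absorbed.
    """
    rng = random.Random(seed)
    reflected = transmitted = 0
    for _ in range(n_photons):
        depth, down = 0.0, True
        while True:
            step = -math.log(1.0 - rng.random())  # exponential free path
            depth += step if down else -step
            if depth >= tau:
                transmitted += 1
                break
            if depth <= 0.0:
                reflected += 1
                break
            if rng.random() > albedo:             # absorbed in the canopy
                break
            down = rng.random() < 0.5             # rescatter up or down
    return reflected / n_photons, transmitted / n_photons

r, t = canopy_photon_fate()
```

Counting photon fates over many trials estimates the layer's reflectance and transmittance; a full BRDF simulation additionally tracks exit directions and 3-D canopy structure.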
Simulation methods of nuclear electromagnetic pulse effects in integrated circuits
International Nuclear Information System (INIS)
Cheng Jili; Liu Yuan; En Yunfei; Fang Wenxiao; Wei Aixiang; Yang Yuanzhen
2013-01-01
This paper first introduces methods of computing the response of a transmission line (TL) illuminated by an electromagnetic pulse (EMP), including the finite-difference time-domain (FDTD) and transmission-line matrix (TLM) methods. It then discusses the feasibility of electromagnetic topology (EMT) for simulating nuclear electromagnetic pulse (NEMP) effects in ICs. Finally, combining the methods for computing the TL response, a new method for simulating transmission lines in ICs illuminated by NEMP is put forward. (authors)
Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.
2018-06-01
A new three-step interferometric method with blind phase shifts for retrieving phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed using a linear low-pass filter. Computer simulation and experiments carried out to retrieve a gauge block surface area and its areal surface roughness and waviness have confirmed the reliability of the proposed three-step method.
An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments
Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram
2018-01-01
Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
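The key ingredient of such an analytical simulation is drawing Gaussian realizations with a prescribed visibility correlation. Given a model covariance across frequency channels, multiplying white noise by its Cholesky factor produces realizations with exactly that two-point statistic. The exponential decorrelation model below is an illustrative stand-in for the theoretical visibility correlation of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical visibility correlation across frequency channels:
# exponential decorrelation with channel separation (toy model)
n_chan, corr_len = 64, 8.0
dnu = np.abs(np.subtract.outer(np.arange(n_chan), np.arange(n_chan)))
cov = np.exp(-dnu / corr_len)

# Many realizations with this two-point statistic: white Gaussian
# noise times the Cholesky factor of the model covariance
L = np.linalg.cholesky(cov)
n_real = 5000
vis = rng.normal(size=(n_real, n_chan)) @ L.T

sample_cov = vis.T @ vis / n_real   # should recover the model covariance
```

Because each realization costs only a matrix-vector product, generating thousands of signal realizations is cheap, which is the efficiency argument made in the abstract.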
An introduction to computer simulation methods applications to physical systems
Gould, Harvey; Christian, Wolfgang
2007-01-01
Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...
Motion simulation of hydraulic driven safety rod using FSI method
International Nuclear Information System (INIS)
Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In
2013-01-01
A hydraulic driven safety rod, which is one of the reactivity control mechanisms, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper, the motion of this newly designed rod is simulated by the fluid structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The simulation is done in the CFD domain with a user-defined function (UDF). The pressure drop changes only slightly with flow rate, which means that the pressure drop is mainly determined by the weight of the moving part. The simulated velocity of the piston is linearly proportional to the flow rate, so the pump can be sized easily according to the rise and drop time requirements of the safety rod using the simulation results
International Nuclear Information System (INIS)
Žukovič, Milan; Hristopulos, Dionissios T
2009-01-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of
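The Ising-level conditional simulation described above can be sketched in Python. This is a toy illustration, not the authors' algorithm: a synthetic ±1 field stands in for the environmental data, the "sample" is a random 20% of grid nodes, and a greedy flip rule (accept a flip only if it moves the nearest-neighbour correlation energy towards the sample's value) stands in for the full cost-function minimization.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
# synthetic "truth": a random +/-1 field (hypothetical stand-in for data)
true_field = np.where(rng.standard_normal((N, N)) > 0, 1.0, -1.0)
mask = rng.random((N, N)) < 0.2                    # 20% of nodes are sampled
spins = np.where(mask, true_field, rng.choice([-1.0, 1.0], size=(N, N)))

def nn_energy(s):
    # nearest-neighbour "correlation energy" with periodic boundaries
    return -(np.sum(s * np.roll(s, 1, axis=0)) + np.sum(s * np.roll(s, 1, axis=1)))

target = nn_energy(true_field)       # sample statistic to be matched globally
dev0 = abs(nn_energy(spins) - target)

# greedy sweep: flip only unsampled spins, keep a flip only if it reduces
# the deviation from the target energy
for _ in range(500):
    i, j = rng.integers(N, size=2)
    if mask[i, j]:
        continue                      # conditioning: sampled nodes never change
    before = abs(nn_energy(spins) - target)
    spins[i, j] *= -1
    if abs(nn_energy(spins) - target) > before:
        spins[i, j] *= -1             # reject the flip
```

The two constraints from the abstract are visible directly: sampled nodes are never flipped (local conditioning), and the energy deviation is non-increasing (global statistic matching).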
A three-dimensional correlation method for registration of medical images in radiology
Energy Technology Data Exchange (ETDEWEB)
Georgiou, Michalakis; Sfakianakis, George N [Department of Radiology, University of Miami, Jackson Memorial Hospital, Miami, FL 33136 (United States); Nagel, Joachim H [Institute of Biomedical Engineering, University of Stuttgart, Stuttgart 70174 (Germany)
1999-12-31
The availability of methods to register multi-modality images in order to 'fuse' them and correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed, but they remain limited to specific clinical applications. Assuming rigid body transformation, two images can be registered if their differences are calculated in terms of translation, rotation and scaling. This paper describes the development and testing of a new correlation based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images become translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, spherical coordinate transformation is performed and then the three-dimensional rotation is computed using a novel approach referred to as 'Polar Shells'. The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors) 6 refs., 3 figs.
A simulation method for lightning surge response of switching power
International Nuclear Information System (INIS)
Wei, Ming; Chen, Xiang
2013-01-01
In order to meet the need of protection design against lightning surges, a prediction method for the lightning electromagnetic pulse (LEMP) response based on system identification is presented. Surge injection experiments on a switching power supply were conducted, and the input and output data were sampled, de-noised and de-trended. In addition, the model of the energy coupling transfer function was obtained by the system identification method. Simulation results show that the system identification method can predict the surge response of a linear circuit well. The method proposed in this paper provides a convenient and effective technique for the simulation of lightning effects.
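The identification step can be illustrated with a least-squares fit of a discrete transfer function. This is a generic sketch, not the authors' model: a hypothetical first-order ARX coupling y[k] = a·y[k-1] + b·u[k] stands in for the energy coupling transfer function, and the injected surge is modeled as white noise for identifiability.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical "true" first-order coupling: y[k] = 0.8*y[k-1] + 0.5*u[k]
a_true, b_true = 0.8, 0.5
n = 500
u = rng.standard_normal(n)              # injected (de-trended) input waveform
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k] + 1e-3 * rng.standard_normal()

# ARX identification: stack regressors [y[k-1], u[k]] and solve least squares
Phi = np.column_stack([y[:-1], u[1:]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

Given the fitted coefficients, the surge response to any new input can be predicted by running the same recursion forward.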
Increasing the computational efficiency of digital cross correlation by a vectorization method
Chang, Ching-Yuan; Ma, Chien-Ching
2017-08-01
This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, achieving speedups of 6.387 and 36.044 times, respectively, compared with looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method, as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
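The paper's MATLAB code is not reproduced here, but the same loop-versus-vectorized contrast can be sketched in Python/NumPy: a doubly looped full-lag cross-correlation against an FFT-based vectorized equivalent (all array sizes below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(256)
y = rng.standard_normal(256)

# looped cross-correlation over all lags: the slow reference implementation
n = len(x)
loop_xc = np.zeros(2 * n - 1)
for lag in range(-(n - 1), n):
    for i in range(n):
        j = i + lag
        if 0 <= j < n:
            loop_xc[lag + n - 1] += x[i] * y[j]

# vectorized equivalent via the correlation theorem: one array expression,
# zero-padded to 2n-1 so circular wrap-around does not alias any lag
m = 2 * n - 1
fast_xc = np.fft.irfft(np.conj(np.fft.rfft(x, m)) * np.fft.rfft(y, m), m)
fast_xc = np.roll(fast_xc, n - 1)   # put negative lags first, as in loop_xc
```

The FFT version replaces O(n²) Python-level iterations with O(n log n) library calls, which is where the reported order-of-magnitude speedups come from.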
Real time simulation method for fast breeder reactors dynamics
International Nuclear Information System (INIS)
Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.
1985-01-01
Multi-purpose real time simulator models with suitable plant dynamics were developed; these models can be used not only for training operators but also for designing control systems, operation sequences and many other items which must be studied for the development of new types of reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. Analysis is made of various factors affecting the accuracy and computer load of its dynamic simulation. A method is presented which determines the optimum number of nodes in distributed systems and the optimum time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real time dynamics models of fast breeder reactors. (author)
Simulation of plume dynamics by the Lattice Boltzmann Method
Mora, Peter; Yuen, David A.
2017-09-01
The Lattice Boltzmann Method (LBM) is a semi-microscopic method to simulate fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D simulations using the LBM of a fluid in a rectangular box being heated from below, and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates for the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number, the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.
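A minimal LBM collide-and-stream step can be sketched as follows. This is an isothermal D2Q9 BGK toy on a periodic grid, not the thermal convection model of the paper; the relaxation time, grid size and perturbation are arbitrary. It illustrates the "distributions of particles moving and colliding on a lattice" idea, with mass conserved by construction.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.8                       # BGK relaxation time (assumed)

nx = ny = 16
f = np.ones((9, nx, ny)) / 9    # near-uniform initial distributions
f += 0.01 * np.random.default_rng(3).random((9, nx, ny))

def step(f):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    u2 = ux**2 + uy**2
    feq = np.empty_like(f)      # second-order equilibrium distribution
    for k in range(9):
        cu = c[k, 0] * ux + c[k, 1] * uy
        feq[k] = w[k] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * u2)
    f = f - (f - feq) / tau     # BGK collision (conserves mass and momentum)
    for k in range(9):          # streaming along each lattice velocity (periodic)
        f[k] = np.roll(f[k], shift=(c[k, 0], c[k, 1]), axis=(0, 1))
    return f

mass0 = f.sum()
for _ in range(20):
    f = step(f)
```

A thermal convection model would couple a second distribution for temperature and add a buoyancy force term, but the collide-and-stream skeleton is the same.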
Method of simulating dose reduction for digital radiographic systems
International Nuclear Information System (INIS)
Baath, M.; Haakansson, M.; Tingberg, A.; Maansson, L. G.
2005-01-01
The optimisation of image quality vs. radiation dose is an important task in medical imaging. To obtain maximum validity of the optimisation, it must be based on clinical images. Images at different dose levels can then either be obtained by collecting patient images at the different dose levels sought to investigate - including additional exposures and permission from an ethical committee - or by manipulating images to simulate different dose levels. The aim of the present work was to develop a method of simulating dose reduction for digital radiographic systems. The method uses information about the detective quantum efficiency and noise power spectrum at the original and simulated dose levels to create an image containing filtered noise. When added to the original image this results in an image with noise which, in terms of frequency content, agrees with the noise present in an image collected at the simulated dose level. To increase the validity, the method takes local dose variations in the original image into account. The method was tested on a computed radiography system and was shown to produce images with noise behaviour similar to that of images actually collected at the simulated dose levels. The method can, therefore, be used to modify an image collected at one dose level so that it simulates an image of the same object collected at any lower dose level. (authors)
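A heavily simplified version of the noise-addition idea can be sketched if one ignores the frequency shaping (no DQE/NPS filtering) and assumes white, dose-proportional quantum noise: the variance to add for a simulated dose fraction d is sigma0²·(1/d − 1). All numbers below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# "acquired" image: constant signal plus quantum-limited noise at full dose
signal = 100.0 * np.ones((256, 256))
sigma0 = 2.0                      # noise std at the acquired dose (assumed known)
img = signal + sigma0 * rng.standard_normal(signal.shape)

def simulate_dose_reduction(image, sigma0, dose_fraction, rng):
    """Add white Gaussian noise so total variance matches a lower dose.

    Quantum noise variance scales as 1/dose, so the variance to add is
    sigma0**2 * (1/dose_fraction - 1). The paper's method additionally
    shapes this noise with the NPS and accounts for local dose variations.
    """
    sigma_add = sigma0 * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + sigma_add * rng.standard_normal(image.shape)

half_dose = simulate_dose_reduction(img, sigma0, 0.5, rng)
```

Because independent noise variances add, the resulting image has total noise std sigma0/√d, exactly what an acquisition at the lower dose would show under the white-noise assumption.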
A tool for simulating parallel branch-and-bound methods
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
International Nuclear Information System (INIS)
Stelzer, J.; Trebin, H.R.; Longa, L.
1994-08-01
We report NVT and NPT molecular dynamics simulations of a Gay-Berne nematic liquid crystal using a generalization of the recently proposed algorithm of Toxvaerd [Phys. Rev. E 47, 343 (1993)]. On the basis of these simulations, the Oseen-Zocher-Frank elastic constants K11, K22 and K33 as well as the surface constants K13 and K24 have been calculated within the framework of the direct correlation function approach of Lipkin et al. [J. Chem. Phys. 82, 472 (1985)]. The angular coefficients of the direct pair correlation function, which enter the final formulas, have been determined from the computer simulation data for the pair correlation function of the nematic by combining the Ornstein-Zernike relation and the Wiener-Hopf factorization scheme. The unoriented nematic approximation has been assumed when constructing the reference isotropic state of Lipkin et al. An extensive study of the model over a wide range of temperatures, densities and pressures has provided very detailed information about the elastic behaviour of the Gay-Berne nematic. Interestingly, it is found that the results for the surface elastic constants are qualitatively different from those obtained with the help of analytical approximations for the isotropic direct pair correlation function. For example, the values of the surface elastic constants are negative and an order of magnitude smaller than the bulk elasticity. (author). 30 refs, 9 figs
Nugraha, Muhamad Gina; Kaniawati, Ida; Rusdiana, Dadi; Kirana, Kartika Hajar
2016-02-01
Among the purposes of physics learning at high school are mastering physics concepts, cultivating a scientific attitude (including a critical attitude), and developing inductive and deductive reasoning skills. According to Ennis et al., inductive and deductive reasoning skills are part of critical thinking. Preliminary studies show that both competencies are poorly achieved: student learning outcomes are low, and the learning processes are not conducive to cultivating critical thinking (teacher-centered learning). One learning model predicted to increase mastery of concepts and train critical thinking skills (CTS) is the inquiry learning model aided by computer simulations. In this model, students are given the opportunity to be actively involved in experiments and also receive a good explanation through the computer simulations. From research with a randomized control group pretest-posttest design, we found that the inquiry learning model aided by computer simulations can improve students' mastery of concepts significantly more than the conventional (teacher-centered) method. With the inquiry learning model aided by computer simulations, 20% of students had high CTS, 63.3% medium and 16.7% low. CTS greatly contributes to students' mastery of concepts, with a correlation coefficient of 0.697, and contributes substantially to the enhancement of mastery of concepts, with a correlation coefficient of 0.603.
Directory of Open Access Journals (Sweden)
Heidi Koldsø
2014-10-01
Cell membranes are complex multicomponent systems which are highly heterogeneous in lipid distribution and composition. To date, most molecular simulations have focussed on relatively simple lipid compositions, helping to inform our understanding of in vitro experimental studies. Here we describe simulations of a complex asymmetric plasma membrane model, which contains seven different lipid species including the glycolipid GM3 in the outer leaflet and the anionic lipid phosphatidylinositol 4,5-bisphosphate (PIP2) in the inner leaflet. Plasma membrane models consisting of 1500 lipids and resembling the in vivo composition were constructed, and simulations were run for 5 µs. In these simulations the most striking feature was the formation of nano-clusters of GM3 within the outer leaflet. In simulations of protein interactions within a plasma membrane model, GM3, PIP2 and cholesterol all formed favorable interactions with the model α-helical protein. A larger scale simulation of a model plasma membrane containing 6000 lipid molecules revealed correlations between the curvature of the bilayer surface and the clustering of lipid molecules. In particular, the concave (when viewed from the extracellular side) regions of the bilayer surface were locally enriched in GM3. In summary, these simulations explore the nanoscale dynamics of model bilayers which mimic the in vivo lipid composition of mammalian plasma membranes, revealing emergent nanoscale membrane organization which may be coupled both to fluctuations in local membrane geometry and to interactions with proteins.
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the
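The Riley approach builds on separate univariate random-effects meta-analyses. The standard DerSimonian-Laird pooling that underlies such an analysis can be sketched as follows; the effect sizes and variances below are made up for illustration, and this is neither the Riley working model nor the proposed robust variance estimator.

```python
import numpy as np

# per-study effect estimates and within-study variances (illustrative numbers)
y = np.array([0.30, 0.15, 0.45, 0.25, 0.10, 0.38])
v = np.array([0.01, 0.02, 0.015, 0.01, 0.03, 0.02])

def dersimonian_laird(y, v):
    """Univariate random-effects pooling with the DerSimonian-Laird tau^2."""
    w = 1.0 / v                                # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)           # Cochran's Q heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)         # method-of-moments between-study variance
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

mu, se, tau2 = dersimonian_laird(y, v)
```

In the bivariate setting, the Riley method runs this kind of univariate analysis per outcome and links the outcomes through an overall synthesis correlation parameter, so no within-study correlations are needed.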
Plasma simulations using the Car-Parrinello method
International Nuclear Information System (INIS)
Clerouin, J.; Zerah, G.; Benisti, D.; Hansen, J.P.
1990-01-01
A simplified version of the Car-Parrinello method, based on the Thomas-Fermi (local density) functional for the electrons, is adapted to the simulation of the ionic dynamics in dense plasmas. The method is illustrated by an explicit application to a degenerate one-dimensional hydrogen plasma
Nonequilibrium relaxation method – An alternative simulation strategy
Indian Academy of Sciences (India)
One well-established simulation strategy to study the thermal phases and transitions of a given microscopic model system is the so-called equilibrium method, in which one first realizes the equilibrium ensemble of a finite system and then extrapolates the results to infinite system. This equilibrium method traces over the ...
A direct simulation method for flows with suspended paramagnetic particles
Kang, T.G.; Hulsen, M.A.; Toonder, den J.M.J.; Anderson, P.D.; Meijer, H.E.H.
2008-01-01
A direct numerical simulation method based on the Maxwell stress tensor and a fictitious domain method has been developed to solve flows with suspended paramagnetic particles. The numerical scheme enables us to take into account both hydrodynamic and magnetic interactions between particles in a
DRK methods for time-domain oscillator simulation
Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.
The afforestation problem: a heuristic method based on simulated annealing
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
1992-01-01
This paper presents the afforestation problem, that is the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented.
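The simulated annealing core of such a heuristic can be sketched on a toy objective. A single design variable stands in for the compartment layout, and the cost function, cooling rate and move size are arbitrary illustrations, not the afforestation model.

```python
import math
import random

random.seed(5)

def cost(x):
    # toy multimodal objective standing in for the afforestation cost
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)

x = 10.0 * random.random()        # random initial "layout"
cost_init = cost(x)
best = x
T = 1.0                           # initial temperature
for _ in range(2000):
    x_new = x + random.gauss(0.0, 0.5)               # neighbour move
    d = cost(x_new) - cost(x)
    if d < 0 or random.random() < math.exp(-d / T):  # Metropolis acceptance
        x = x_new
        if cost(x) < cost(best):
            best = x
    T *= 0.995                                       # geometric cooling schedule
```

Accepting some uphill moves at high temperature lets the search escape local minima; as T decreases, the process settles into a good basin, which is why annealing suits combinatorial layout problems like afforestation.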
Multilevel panel method for wind turbine rotor flow simulations
van Garrel, Arne
2016-01-01
Simulation methods of wind turbine aerodynamics currently in use mainly fall into two categories: the first is the group of traditional low-fidelity engineering models and the second is the group of computationally expensive CFD methods based on the Navier-Stokes equations. For an engineering
LOMEGA: a low frequency, field implicit method for plasma simulation
International Nuclear Information System (INIS)
Barnes, D.C.; Kamimura, T.
1982-04-01
Field implicit methods for low frequency plasma simulation by the LOMEGA (Low OMEGA) codes are described. These implicit field methods may be combined with particle pushing algorithms using either Lorentz force or guiding center force models to study two-dimensional, magnetized, electrostatic plasmas. Numerical results for ω_e Δt ≫ 1 are described. (author)
Performance evaluation of sea surface simulation methods for target detection
Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi
2017-11-01
With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve detection performance. Many features can be learned from training images automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key to achieving high fidelity. In this paper, two spectrum-based height field generation methods are evaluated. A comparison between the linear superposition method and the linear filter method is made quantitatively with a statistical model. 3D ocean scene simulation results show the different features of the methods, which can provide a reference for synthesizing sea surface target images under different ocean conditions.
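The linear filter method can be sketched in one dimension: white Gaussian noise is coloured in the Fourier domain by the square root of a prescribed height spectrum. A toy k⁻³ spectrum stands in for a real ocean spectrum here, and amplitude normalization is omitted, so this illustrates the shape of the method rather than a calibrated sea state.

```python
import numpy as np

rng = np.random.default_rng(6)
n, dx = 1024, 0.5                         # grid points and spacing (assumed)
k = np.fft.rfftfreq(n, d=dx) * 2.0 * np.pi  # spatial wavenumbers

# toy one-sided height spectrum: ~ k^-3 above a low-wavenumber cutoff,
# standing in for a Pierson-Moskowitz-type ocean spectrum
S = np.zeros_like(k)
band = k > 0.1
S[band] = k[band] ** -3.0

# linear filter method: colour white Gaussian noise with sqrt(S)
white = rng.standard_normal(n)
W = np.fft.rfft(white)
eta = np.fft.irfft(W * np.sqrt(S), n)     # height field with the target spectral shape
```

The linear superposition method would instead sum sinusoids with spectrum-derived amplitudes and random phases; filtering in the Fourier domain produces a statistically equivalent field in one vectorized pass.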
Clinical simulation as an evaluation method in health informatics
DEFF Research Database (Denmark)
Jensen, Sanne
2016-01-01
Safe work processes and information systems are vital in health care. Methods for design of health IT focusing on patient safety are one of many initiatives trying to prevent adverse events. Possible patient safety hazards need to be investigated before health IT is integrated with local clinical work practice, including other technology and organizational structure. Clinical simulation is ideal for proactive evaluation of new technology for clinical work practice. Clinical simulations involve real end-users as they simulate the use of technology in realistic environments performing realistic tasks. A clinical simulation study assesses effects on clinical workflow and enables identification and evaluation of patient safety hazards before implementation at a hospital. Clinical simulation also offers an opportunity to create a space in which healthcare professionals working in different
Architecture oriented modeling and simulation method for combat mission profile
Directory of Open Access Journals (Sweden)
CHEN Xia
2017-05-01
In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the mission profile executable model. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides method guidance for combat mission profile design.
A nondissipative simulation method for the drift kinetic equation
International Nuclear Information System (INIS)
Watanabe, Tomo-Hiko; Sugama, Hideo; Sato, Tetsuya
2001-07-01
With the aim of studying the ion temperature gradient (ITG) driven turbulence, a nondissipative kinetic simulation scheme is developed and comprehensively benchmarked. The new simulation method, which preserves the time-reversibility of the basic kinetic equations, can successfully reproduce the analytical solutions of the asymmetric three-mode ITG equations, which are extended here to provide a more general reference for benchmarking than the previous work [T.-H. Watanabe, H. Sugama, and T. Sato: Phys. Plasmas 7 (2000) 984]. It is also applied to a dissipative three-mode system, and shows good agreement with the analytical solution. The nondissipative simulation result of the ITG turbulence accurately satisfies the entropy balance equation. The usefulness of the nondissipative method for drift kinetic simulations is confirmed in comparisons with other dissipative schemes. (author)
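Time-reversibility of a numerical scheme can be demonstrated on a much simpler system than the drift kinetic equation. The sketch below uses velocity-Verlet (a standard time-reversible integrator, not the authors' scheme) on a harmonic oscillator: integrating forward, flipping the velocity, and integrating the same number of steps returns the initial state to within round-off.

```python
def verlet_step(x, v, dt, force):
    # velocity-Verlet: symplectic and exactly time-reversible in exact arithmetic
    a = force(x)
    x = x + v * dt + 0.5 * a * dt * dt
    a_new = force(x)
    v = v + 0.5 * (a + a_new) * dt
    return x, v

force = lambda x: -x           # harmonic oscillator, unit mass and frequency
x, v = 1.0, 0.0                # initial state
dt, nsteps = 0.01, 1000

for _ in range(nsteps):        # integrate forward in time
    x, v = verlet_step(x, v, dt, force)
v = -v                         # reverse time: flip the velocity
for _ in range(nsteps):        # retrace the trajectory backward
    x, v = verlet_step(x, v, dt, force)
```

A dissipative scheme (e.g. one with upwind damping) would fail this round-trip test, which is the practical meaning of the reversibility property the abstract emphasizes.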
Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation
De La Garza Martinez, Pablo
2016-05-01
Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility of fluids to remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not perturb the accuracy of results. Additionally, I included spatial correlation to generate the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values for irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm from Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
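The "matrix decomposition method" for spatially correlated pore sizes is typically a Cholesky factorization of the target covariance. A sketch, with an assumed exponential correlation model and made-up lognormal parameters (not the values used in the thesis), is:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50                                   # pores along a line (toy network)
coords = np.arange(n, dtype=float)

# assumed exponential spatial correlation between pore sizes, range = 5 units
C = np.exp(-np.abs(coords[:, None] - coords[None, :]) / 5.0)

# matrix decomposition method: factor C = L L^T, then correlate i.i.d. normals
L = np.linalg.cholesky(C)
z = rng.standard_normal(n)
field = L @ z                            # Gaussian field with covariance C
radii = np.exp(-1.0 + 0.3 * field)       # lognormal pore radii (assumed params)
```

Because `field = L z` has covariance `L L^T = C` exactly, neighbouring pores receive similar sizes, reproducing the intrinsic correlation seen in real porous media that uncorrelated sampling ignores.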
Adaptive implicit method for thermal compositional reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States); Stanford Univ., Palo Alto (United States)]
2008-10-15
As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most commonly used technique for solving these equations is the Fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation; however, it is computationally expensive. On the other hand, the IMplicit Pressure, Explicit Saturations, Temperature and compositions (IMPEST) method is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between timestep size and computational cost, the Thermal Adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, whereby some simulation variables (such as pressure, saturations, temperature and compositions) are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM for thermal displacement processes, including: the stability criteria that dictate the maximum allowed timestep size, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented, along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.
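The von Neumann analysis mentioned above yields, for each explicitly treated variable, a maximum stable timestep. TAIM's actual criteria apply to the coupled thermal-compositional equations; as a sketch of the idea only, here is the classic limit for the explicit (FTCS) discretization of the 1-D heat equation:

```python
def explicit_heat_stable(alpha, dx, dt):
    """Von Neumann stability of the explicit FTCS scheme for
    u_t = alpha * u_xx: stable iff alpha*dt/dx**2 <= 1/2."""
    return alpha * dt / dx**2 <= 0.5

def max_stable_dt(alpha, dx):
    """Largest timestep the criterion allows for a given grid spacing."""
    return 0.5 * dx**2 / alpha
```

An adaptive implicit scheme switches a variable to implicit treatment precisely where the desired timestep exceeds this kind of local limit, keeping the rest of the grid on the cheaper explicit update.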
Studies in the method of correlated basis functions. Pt. 3
International Nuclear Information System (INIS)
Krotscheck, E.; Clark, J.W.
1980-01-01
A variational theory of pairing phenomena is presented for systems like neutron matter and liquid ³He. The strong short-range correlations among the particles in these systems are incorporated into the trial states describing normal and pair-condensed phases, via a correlation operator F. The resulting theory has the same basic structure as that ordinarily applied for weak two-body interactions; in place of the pairing matrix elements of the bare interaction one finds certain effective pairing matrix elements P_kl, and modified single-particle energies ε(k) appear. Detailed prescriptions are given for the construction of the P_kl and ε(k) in terms of off-diagonal and diagonal matrix elements of the Hamiltonian and unit operators in a correlated basis of normal states. An exact criterion for instability of the assumed normal phase with respect to pair condensation is derived for general F. This criterion is investigated numerically for the special case of Jastrow correlations, the required normal-state quantities being evaluated by integral equation techniques which extend the Fermi hypernetted-chain scheme. In neutron matter, an instability with respect to ¹S₀ pairing is found in the low-density region, in concert with the predictions of Yang and Clark. In liquid ³He, there is some indication of a ³P₀ pairing instability in the vicinity of the experimental equilibrium density. (orig.)
Application of digital image correlation method for analysing crack ...
Indian Academy of Sciences (India)
centrated strain by imitating the treatment of micro-cracks using the finite element ... water and moisture to penetrate the concrete leading to serious rust of the ... The correlations among various grey values of digital images are analysed for ...
Activity coefficients from molecular simulations using the OPAS method
Kohns, Maximilian; Horsch, Martin; Hasse, Hans
2017-10-01
A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.
Simulation of the acoustic wave propagation using a meshless method
Directory of Open Access Journals (Sweden)
Bajko J.
2017-01-01
This paper presents numerical simulations of the acoustic wave propagation phenomenon modelled via the Linearized Euler equations. A meshless method based on collocation of the strong form of the equation system is adopted. Moreover, the weighted least squares method is used for local approximation of derivatives, as well as a stabilization technique in the form of spatial filtering. The accuracy and robustness of the method are examined on several benchmark problems.
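The weighted-least-squares derivative approximation used by such meshless collocation methods can be sketched as follows: fit a local linear polynomial to scattered neighbour values with Gaussian weights and read the derivatives off the fitted coefficients. The weight function and parameters here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def wls_gradient(x0, pts, vals, h=1.0):
    """Estimate the gradient of a scattered field at x0 by a weighted
    least squares fit of a linear polynomial a0 + a1*dx + a2*dy."""
    dx = pts - x0                                       # neighbour offsets
    w = np.exp(-(np.linalg.norm(dx, axis=1) / h) ** 2)  # Gaussian weights
    A = np.column_stack([np.ones(len(pts)), dx[:, 0], dx[:, 1]])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], vals * sw, rcond=None)
    return coef[1], coef[2]                             # df/dx, df/dy at x0

# sanity check on a linear field f = 1 + 2x + 3y, scattered nodes
rng = np.random.default_rng(3)
x0 = np.array([0.5, -0.2])
pts = x0 + rng.uniform(-0.5, 0.5, size=(20, 2))
vals = 1.0 + 2.0 * pts[:, 0] + 3.0 * pts[:, 1]
gx, gy = wls_gradient(x0, pts, vals)
```

For a linear field the fit is exact regardless of the weights; higher-order accuracy requires enlarging the polynomial basis accordingly.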
Numerical simulation methods for wave propagation through optical waveguides
International Nuclear Information System (INIS)
Sharma, A.
1993-01-01
The simulation of the field propagation through waveguides requires numerical solutions of the Helmholtz equation. For this purpose a method based on the principle of orthogonal collocation was recently developed. The method is also applicable to nonlinear pulse propagation through optical fibers. Some of the salient features of this method and its application to both linear and nonlinear wave propagation through optical waveguides are discussed in this report. 51 refs, 8 figs, 2 tabs
Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.
2018-06-01
We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run on our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.
Roach, D.; Jameson, M. G.; Dowling, J. A.; Ebert, M. A.; Greer, P. B.; Kennedy, A. M.; Watt, S.; Holloway, L. C.
2018-02-01
Many similarity metrics exist for inter-observer contouring variation studies; however, no correlation between metric choice and prostate cancer radiotherapy dosimetry has been explored. These correlations were investigated in this study. Two separate trials were undertaken, the first a thirty-five patient cohort with three observers, the second a five patient dataset with ten observers. Clinical and planning target volumes (CTV and PTV), rectum, and bladder were independently contoured by all observers in each trial. Structures were contoured on T2-weighted MRI and transferred onto CT following rigid registration for treatment planning in the first trial. Structures were contoured directly on CT in the second trial. STAPLE and majority voting volumes were generated as reference gold standard volumes for each structure for the two trials respectively. VMAT treatment plans (78 Gy to PTV) were simulated for observer and gold standard volumes, and dosimetry assessed using multiple radiobiological metrics. Correlations between contouring similarity metrics and dosimetry were calculated using Spearman's rank correlation coefficient. No correlations were observed between contouring similarity metrics and dosimetry for CTV within either trial. Volume similarity correlated most strongly with radiobiological metrics for PTV in both trials, including TCPPoisson (ρ = 0.57, 0.65), TCPLogit (ρ = 0.39, 0.62), and EUD (ρ = 0.43, 0.61) for each respective trial. Rectum and bladder metric correlations displayed no consistency for the two trials. PTV volume similarity was found to significantly correlate with rectum normal tissue complication probability (ρ = 0.33, 0.48). Minimal to no correlations with dosimetry were observed for overlap or boundary contouring metrics. Future inter-observer contouring variation studies for prostate cancer should incorporate volume similarity to provide additional insights into dosimetry during analysis.
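Spearman's rank correlation coefficient, used throughout the study above, is simply the Pearson correlation of the ranks. A minimal sketch, assuming no ties (tied data would need average ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no tied values; ties require average-rank handling."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        return r
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    return np.corrcoef(rx, ry)[0, 1]

rho_up = spearman_rho([1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 4.0, 6.0, 8.0, 10.0])
rho_dn = spearman_rho([1.0, 2.0, 3.0, 4.0, 5.0], [9.0, 7.0, 5.0, 3.0, 1.0])
```

Any strictly monotone relationship, linear or not, yields ρ = ±1, which is what makes the statistic attractive for dose-metric relationships with unknown functional form.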
Directory of Open Access Journals (Sweden)
Eduardo Borba Neves
2017-11-01
The aim of this study was to investigate the correlations between simulated military task performance and physical fitness tests at high altitude. This research is part of a project to modernize the physical fitness test of the Colombian Army. Data collection was performed at the 13th Battalion of Instruction and Training, located 30 km south of Bogota D.C., with a temperature range from 1ºC to 23ºC during the study period, and at 3100 m above sea level. The sample was composed of 60 volunteers from three different platoons. The volunteers started the data collection protocol after 2 weeks of acclimation at this altitude. The main results were the identification of a high positive correlation between the 3 assault walls in succession and simulated military task performance (r = 0.764, p < 0.001), and a moderate negative correlation between pull-ups and simulated military task performance (r = -0.535, p < 0.001). The 20-consecutive-overtaking test on the 3 assault walls in succession can be recommended as a good way to estimate performance in operational tasks which involve assault walls, networks of wires, military climbing nets and Tarzan jumps, among others, at high altitude.
A new quantum statistical evaluation method for time correlation functions
International Nuclear Information System (INIS)
Loss, D.; Schoeller, H.
1989-01-01
Considering a system of N identical interacting particles, which obey Fermi-Dirac or Bose-Einstein statistics, the authors derive new formulas for correlation functions of the type C(t) = <Σ_{i=1}^{N} A_i(t) Σ_{j=1}^{N} B_j> (where B_j is diagonal in the free-particle states) in the thermodynamic limit. Thereby they apply and extend a superoperator formalism, recently developed for the derivation of long-time tails in semiclassical systems. As an illustrative application, the Boltzmann equation value of the time-integrated correlation function C(t) is derived in a straightforward manner. Due to exchange effects, the obtained t-matrix and the resulting scattering cross section, which occurs in the Boltzmann collision operator, are now functionals of the Fermi-Dirac or Bose-Einstein distribution
Advance in research on aerosol deposition simulation methods
International Nuclear Information System (INIS)
Liu Keyang; Li Jingsong
2011-01-01
A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effects of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used for the computation of total, regional and generation-specific deposition efficiencies. Continuously increasing computer power allows the study of particle transport and deposition in more and more realistic airway geometries with the help of computational fluid dynamics (CFD) simulation methods. In this article, the trends in aerosol deposition models and lung models, and the methods used to carry out deposition simulations, are reviewed. (authors)
Finite element method for simulation of the semiconductor devices
International Nuclear Information System (INIS)
Zikatanov, L.T.; Kaschiev, M.S.
1991-01-01
An iterative method for solving the system of nonlinear equations of the drift-diffusion representation for the simulation of semiconductor devices is worked out. The Petrov-Galerkin method is taken for the discretization of these equations using bilinear finite elements. It is shown that the numerical scheme is monotone and there are no oscillations of the solutions in the region of the p-n junction. Numerical calculations for the simulation of one semiconductor device are presented. 13 refs.; 3 figs
Reliability analysis of neutron transport simulation using Monte Carlo method
International Nuclear Information System (INIS)
Souza, Bismarck A. de; Borges, Jose C.
1995-01-01
This work presents a statistical and reliability analysis covering data obtained by computer simulation of the neutron transport process, using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been carried out. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size so as to obtain reliable results while minimizing computation time. (author). 5 refs, 8 figs
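The sample-size/reliability trade-off studied above rests on the 1/√N convergence of Monte Carlo estimates. A toy sketch, not the paper's model: absorption-only transport through a slab, where the transmission probability and its standard error are estimated from N sampled free-flight lengths.

```python
import numpy as np

def mc_transmission(thickness, sigma_t, n, seed=1):
    """Toy Monte Carlo: fraction of neutrons whose first free flight
    (exponential with mean 1/sigma_t) exceeds the slab thickness.
    Analytic answer for this absorption-only model: exp(-sigma_t*thickness)."""
    rng = np.random.default_rng(seed)
    paths = rng.exponential(1.0 / sigma_t, n)    # sampled flight lengths
    p = np.mean(paths > thickness)
    stderr = np.sqrt(p * (1.0 - p) / n)          # shrinks as 1/sqrt(n)
    return p, stderr

p, stderr = mc_transmission(thickness=1.0, sigma_t=1.0, n=200_000)
```

Quadrupling the sample size halves the standard error, which is why optimizing N against a target reliability level is worthwhile before long production runs.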
Correlating TEM images of damage in irradiated materials to molecular dynamics simulations
International Nuclear Information System (INIS)
Schaeublin, R.; Caturla, M.-J.; Wall, M.; Felter, T.; Fluss, M.; Wirth, B.D.; Diaz de la Rubia, T.; Victoria, M.
2002-01-01
TEM image simulations are used to couple the results from molecular dynamics (MD) simulations to experimental TEM images. In particular we apply this methodology to the study of defects produced during irradiation. MD simulations have shown that irradiation of FCC metals results in a population of vacancies and interstitials forming clusters. The limitation of these simulations is the short time scale available, on the order of hundreds of picoseconds. Extrapolation of the results from these short times to the time scales of the laboratory has been difficult. We address this problem by two methods. First, we perform TEM image simulations of MD cascade simulations with an improved technique, to relate defects produced at short time scales with those observed experimentally at much longer time scales. Second, we perform in situ TEM experiments on Au irradiated at liquid-nitrogen temperature, and study the evolution of the produced damage as the temperature is increased to room temperature. We find that some of the defects observed in the MD simulations at short time scales using the TEM image simulation technique have features that resemble those observed in laboratory TEM images of irradiated samples. In situ TEM shows that stacking fault tetrahedra are present at the lowest temperatures and are stable during annealing up to room temperature, while other defect clusters migrate one-dimensionally above -100 deg. C. Results are presented here
Research on neutron noise analysis stochastic simulation method for α calculation
International Nuclear Information System (INIS)
Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang
2014-01-01
The prompt decay constant α has significant application in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of calculating the α value with the Monte-Carlo method, and to improve the precision, a new method based on neutron noise analysis technology is presented. This method employs stochastic simulation and the theory of neutron noise analysis. Firstly, the evolution of the stochastic neutron population is simulated by a discrete-events Monte-Carlo method based on the theory of generalized semi-Markov processes; then the neutron noise in detectors is obtained from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method and the cross-correlation method are used to calculate the α value. All of the parameters used in the neutron noise analysis methods are calculated with an auto-adaptive algorithm. The α values from these methods agree with each other, with a largest relative deviation of 7.9%, which proves the feasibility of the α calculation method based on neutron noise analysis stochastic simulation. (authors)
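Of the noise-analysis techniques listed, the Feynman-α statistic is the easiest to sketch: gate the detector counts and compute the variance-to-mean ratio minus one, which vanishes for a pure Poisson source and grows with correlated fission-chain events. A minimal illustration on synthetic Poisson data (not the paper's simulator; all rates are made-up values):

```python
import numpy as np

def feynman_y(event_times, gate, t_max):
    """Feynman-alpha Y statistic: variance-to-mean ratio of gated counts
    minus one. Y = 0 for a Poisson process; chain correlations give Y > 0."""
    edges = np.arange(0.0, t_max + gate, gate)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts.var() / counts.mean() - 1.0

# synthetic Poisson detector record: rate 1000/s for 1000 s
rng = np.random.default_rng(42)
times = np.cumsum(rng.exponential(1.0 / 1000.0, 1_200_000))
times = times[times < 1000.0]
y = feynman_y(times, gate=0.1, t_max=1000.0)
```

In practice Y is evaluated over a range of gate widths and α is extracted from the shape of the resulting Y(T) curve.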
Energy Technology Data Exchange (ETDEWEB)
Wack, L. J., E-mail: linda-jacqueline.wack@med.uni-tuebingen.de; Thorwarth, D. [Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen 72076 (Germany); Mönnich, D. [Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen 72076 (Germany); German Cancer Consortium (DKTK), Tübingen 72076 (Germany); German Cancer Research Center (DKFZ), Heidelberg 69121 (Germany); Yaromina, A. [OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden 01309, Germany and Department of Radiation Oncology (MAASTRO), GROW—School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht 6229 ET (Netherlands); Zips, D. [German Cancer Consortium (DKTK), Tübingen 72076 (Germany); German Cancer Research Center (DKFZ), Heidelberg 69121 (Germany); Department of Radiation Oncology, University Hospital Tübingen, Tübingen 72076 (Germany); and others
2016-07-15
Purpose: To compare a dedicated simulation model for hypoxia PET against tumor microsections stained for different parameters of the tumor microenvironment. The model can readily be adapted to a variety of conditions, such as different human head and neck squamous cell carcinoma (HNSCC) xenograft tumors. Methods: Nine different HNSCC tumor models were transplanted subcutaneously into nude mice. Tumors were excised and immunofluorescently labeled with pimonidazole, Hoechst 33342, and CD31, providing information on hypoxia, perfusion, and vessel distribution, respectively. Hoechst and CD31 images were used to generate maps of perfused blood vessels on which tissue oxygenation and the accumulation of the hypoxia tracer FMISO were mathematically simulated. The model includes a Michaelis–Menten relation to describe the oxygen consumption inside tissue. The maximum oxygen consumption rate M₀ was chosen as the parameter for a tumor-specific optimization as it strongly influences tracer distribution. M₀ was optimized on each tumor slice to reach optimum correlations between FMISO concentration 4 h postinjection and pimonidazole staining intensity. Results: After optimization, high pixel-based correlations up to R² = 0.85 were found for individual tissue sections. Experimental pimonidazole images and FMISO simulations showed good visual agreement, confirming the validity of the approach. Median correlations per tumor model varied significantly (p < 0.05), with R² ranging from 0.20 to 0.54. The optimum maximum oxygen consumption rate M₀ differed significantly (p < 0.05) between tumor models, ranging from 2.4 to 5.2 mm Hg/s. Conclusions: It is feasible to simulate FMISO distributions that match the pimonidazole retention patterns observed in vivo. Good agreement was obtained for multiple tumor models by optimizing the oxygen consumption rate, M₀, whose optimum value differed significantly between tumor models.
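The Michaelis–Menten relation used for oxygen consumption has the familiar saturating form; a one-line sketch with an assumed half-saturation constant (the paper optimizes only the maximum rate M₀, and the k_m value below is illustrative, not taken from the paper):

```python
def oxygen_consumption(p_o2, m0, k_m=2.0):
    """Michaelis-Menten consumption rate: approaches the maximum rate m0
    at high pO2 and falls off toward zero at low pO2. k_m (mm Hg) is an
    assumed half-saturation constant, not a value from the paper."""
    return m0 * p_o2 / (p_o2 + k_m)
```

At p_o2 = k_m the rate is exactly m0/2; this saturation is what lets a single parameter M₀ control the extent of simulated hypoxia.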
Flow velocity measurement by using zero-crossing polarity cross correlation method
International Nuclear Information System (INIS)
Xu Chengji; Lu Jinming; Xia Hong
1993-01-01
Using the designed correlation metering system and a highly accurate hot-wire anemometer as a calibration device, an experimental study of the correlation method in a tunnel was carried out. Velocity measurement of gas flow by the zero-crossing polarity cross-correlation method was realized and the experimental results have been analysed
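The principle behind polarity cross-correlation flow metering can be sketched as follows: keep only the sign of the two sensor signals, find the lag that maximizes their cross-correlation, and convert that transit time into velocity via the sensor spacing. All signal parameters below are synthetic assumptions for illustration.

```python
import numpy as np

def polarity_transit_time(up, down, dt):
    """Estimate the transit time between upstream and downstream sensor
    signals using polarity (sign-only) cross-correlation."""
    su = np.sign(up - up.mean())
    sd = np.sign(down - down.mean())
    lags = np.arange(1, len(su) // 2)
    corr = [np.mean(su[:-k] * sd[k:]) for k in lags]
    return lags[int(np.argmax(corr))] * dt

# synthetic test: downstream signal is the upstream one delayed 25 samples
rng = np.random.default_rng(0)
sig = rng.standard_normal(2000)
delay = 25
down = np.concatenate([np.zeros(delay), sig[:-delay]])
tau = polarity_transit_time(sig, down, dt=1e-3)
velocity = 0.05 / tau   # assumed sensor spacing of 0.05 m
```

Using only polarities sacrifices some correlation amplitude but makes the hardware a simple one-bit comparator, which is the attraction of the zero-crossing variant.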
MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
The article demonstrates the use of the simulation software @Risk, designed for simulation in Microsoft Excel spreadsheets, in order to show a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which are transformed by the model into outputs (for instance the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong among quantitative tools which can be used as a support for decision making.
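The controlled-inputs/random-inputs scheme described above is easy to reproduce outside @Risk. A minimal sketch with assumed price, cost and demand-distribution figures (all numbers are illustrative, not from the article):

```python
import numpy as np

def simulate_profit(n=100_000, price=12.0, unit_cost=7.0,
                    fixed=20_000.0, seed=0):
    """Spreadsheet-style Monte Carlo: fixed (controlled) inputs plus a
    random demand, transformed by the model into a profit distribution."""
    rng = np.random.default_rng(seed)
    demand = rng.normal(10_000, 2_000, n)       # assumed demand model
    profit = (price - unit_cost) * demand - fixed
    return profit.mean(), profit.std()

mean_p, std_p = simulate_profit()
```

With a unit margin of 5 and mean demand 10,000, the expected profit is 5*10,000 - 20,000 = 30,000, and the simulated mean converges to that value as n grows.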
Energy Technology Data Exchange (ETDEWEB)
HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK
2000-04-01
Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.
Simulation methods with extended stability for stiff biochemical Kinetics
Directory of Open Access Journals (Sweden)
Rué Pau
2010-08-01
Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
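The basic Poisson τ-leap step that the paper generalizes to Runge-Kutta form can be sketched as follows (toy decay reaction with an illustrative rate constant, not one of the paper's test systems):

```python
import numpy as np

def tau_leap_step(x, stoich, propensities, tau, rng):
    """One Poisson tau-leap step: fire each reaction channel a
    Poisson(a_j * tau) number of times within the step, then apply
    the stoichiometric update."""
    a = propensities(x)
    k = rng.poisson(a * tau)        # firings per channel in this step
    return x + stoich.T @ k

# toy system: a single decay reaction A -> 0 with propensity 0.1 * x
stoich = np.array([[-1]])           # one reaction, one species
prop = lambda x: np.array([0.1 * x[0]])
rng = np.random.default_rng(0)
x = np.array([1000])
for _ in range(100):                # 100 leaps of tau = 0.1 (t = 10)
    x = tau_leap_step(x, stoich, prop, 0.1, rng)
    x = np.maximum(x, 0)            # guard against negative populations
```

The exact SSA would re-sample a waiting time after every single firing; the leap replaces that with one Poisson draw per channel per step, trading variance accuracy for speed as τ grows, which is precisely the error the RK extension targets.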
High viscosity fluid simulation using particle-based method
Chang, Yuanzhang
2011-03-01
We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with finite element methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store, or compare against, an initial rest state. The experimental results show that the proposed method is effective and efficient in handling the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.
Computerized simulation methods for dose reduction, in radiodiagnosis
International Nuclear Information System (INIS)
Brochi, M.A.C.
1990-01-01
The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation dose absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after its validity was verified experimentally, it was applied to breast and arm-fracture radiographs. It was observed that the choice of the filter material is not an important factor, because aluminium, iron, copper, gadolinium and other filters showed analogous behaviour. A method of comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)
Simulation of quantum systems by the tomography Monte Carlo method
International Nuclear Information System (INIS)
Bogdanov, Yu I
2007-01-01
A new method of statistical simulation of quantum systems is presented which is based on the generation of data by the Monte Carlo method and their purposeful tomography with energy minimisation. The numerical solution of the problem is based on the optimisation of a target functional providing a compromise between the maximisation of the statistical likelihood function and the energy minimisation. The method does not involve complicated and ill-posed multidimensional computational procedures and can be used to calculate the wave functions and energies of the ground and excited stationary states of complex quantum systems. Applications of the method are illustrated. (Fifth seminar in memory of D.N. Klyshko)
Hankel Matrix Correlation Function-Based Subspace Identification Method for UAV Servo System
Directory of Open Access Journals (Sweden)
Minghong She
2018-01-01
For the closed-loop subspace model identification problem, we propose a zero-space projection method based on the estimation of correlation functions to fill the block Hankel matrix of the identification model, combining linear algebra with geometry. By using the same projection of related data in the time offset set and LQ decomposition, the multiplication operation of the projection is achieved and a dynamics estimate of the unknown equipment system model is obtained. Consequently, we have solved the problem of biased estimation caused when the open-loop subspace identification algorithm is applied to closed-loop identification. A simulation example is given to show the effectiveness of the proposed approach. Finally, the practicality of the identification algorithm is verified by a hardware test of a UAV servo system in a real environment.
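The block Hankel matrix at the core of subspace identification stacks time-shifted copies of the measured signal; for a scalar signal the blocks are scalars and the matrix is an ordinary Hankel matrix with constant anti-diagonals. A minimal sketch:

```python
import numpy as np

def block_hankel(y, rows):
    """Build the (scalar-block) Hankel matrix used in subspace
    identification: row i holds the signal shifted by i samples,
    so entry (i, j) depends only on i + j."""
    cols = len(y) - rows + 1
    return np.array([y[i:i + cols] for i in range(rows)])

H = block_hankel(np.arange(6.0), rows=3)   # 3x4 Hankel matrix
```

For multi-output data each scalar entry becomes an output vector, and the LQ decomposition of such matrices is what the projection step of the algorithm operates on.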
Vectorization of a particle simulation method for hypersonic rarefied flow
Mcdonald, Jeffrey D.; Baganoff, Donald
1988-01-01
An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine-grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit, where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
A mixed finite element method for particle simulation in lasertron
International Nuclear Information System (INIS)
Le Meur, G.
1987-03-01
A particle simulation code is being developed with the aim to treat the motion of charged particles in electromagnetic devices, such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown.
A simulation based engineering method to support HAZOP studies
DEFF Research Database (Denmark)
Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge
2012-01-01
the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of failure scenarios is then evaluated using dynamic simulations - in this study the K-Spice® software is used. The consequences of each failure...
Vectorization of a particle simulation method for hypersonic rarefied flow
International Nuclear Information System (INIS)
Mcdonald, J.D.; Baganoff, D.
1988-01-01
An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine-grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit, where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45-degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry. 14 references
Correction of measured multiplicity distributions by the simulated annealing method
International Nuclear Information System (INIS)
Hafidouni, M.
1993-01-01
Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
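Simulated annealing itself is a generic stochastic minimizer: propose a neighboring state, always accept improvements, and accept worse states with probability exp(-ΔE/T) under a slowly decreasing temperature T. A minimal sketch with a toy objective (not the actual multiplicity-unfolding cost function used in the paper):

```python
import math
import random

def simulated_annealing(cost, state, neighbor, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated-annealing loop: worse states are accepted with
    probability exp(-dE/T) so the search can escape local minima."""
    rng = random.Random(seed)
    best = cur = state
    best_e = cur_e = cost(cur)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        e = cost(cand)
        if e < cur_e or rng.random() < math.exp(-(e - cur_e) / t):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand, e
        t *= cooling                 # geometric cooling schedule
    return best, best_e

# Toy use: recover the x minimizing (x - 3)^2 starting from x = 0.
sol, e = simulated_annealing(lambda x: (x - 3.0) ** 2, 0.0,
                             lambda x, rng: x + rng.uniform(-0.5, 0.5))
```

In an unfolding application, the state would be a trial multiplicity distribution and the cost a chi-square between its detector-smeared version and the observed one.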
Kinematics and simulation methods to determine the target thickness
International Nuclear Information System (INIS)
Rosales, P.; Aguilar, E.F.; Martinez Q, E.
2001-01-01
Making use of kinematics and of the energy loss of particles, two methods for calculating the thickness of a target are described: one through a computer program and another through simulation, both using experimentally obtained parameters. Several values for a 12 C target thickness were obtained. A comparison of the values obtained with each of the programs used is presented. (Author)
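The energy-loss route to a target thickness can be made concrete: given a stopping power S(E) (in practice from tabulated data; the constant function in the usage below is a placeholder), the mass thickness follows from t = ∫ from E_out to E_in of dE / S(E). A sketch using the trapezoidal rule:

```python
def target_thickness(e_in, e_out, stopping_power, n=1000):
    """Thickness (mass per area) from the beam's measured energy loss:
    t = integral from E_out to E_in of dE / S(E), by the trapezoidal
    rule. `stopping_power` S(E) is assumed given, e.g. in
    MeV per (mg/cm^2) from tabulated data."""
    h = (e_in - e_out) / n
    es = [e_out + i * h for i in range(n + 1)]
    ys = [1.0 / stopping_power(e) for e in es]
    return h * (ys[0] / 2.0 + sum(ys[1:-1]) + ys[-1] / 2.0)

# Placeholder: constant S = 2 MeV/(mg/cm^2), 10 -> 8 MeV energy loss.
t = target_thickness(10.0, 8.0, lambda e: 2.0)   # = (10 - 8) / 2 = 1.0
```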
A mixed finite element method for particle simulation in Lasertron
International Nuclear Information System (INIS)
Le Meur, G.
1987-01-01
A particle simulation code is being developed with the aim to treat the motion of charged particles in electromagnetic devices, such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown.
Dynamical simulation of heavy ion collisions; VUU and QMD method
International Nuclear Information System (INIS)
Niita, Koji
1992-01-01
We review two simulation methods based on the Vlasov-Uehling-Uhlenbeck (VUU) equation and Quantum Molecular Dynamics (QMD), which are among the most widely accepted theoretical frameworks for the description of intermediate-energy heavy-ion reactions. We show some results of the calculations and compare them with the experimental data. (author)
Simulating water hammer with corrective smoothed particle method
Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.
2012-01-01
The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in
STUDY ON SIMULATION METHOD OF AVALANCHE : FLOW ANALYSIS OF AVALANCHE USING PARTICLE METHOD
塩澤, 孝哉
2015-01-01
In this paper, modeling for the simulation of avalanches by a particle method is discussed. There are two kinds of snow avalanches: one is the surface avalanche, which shows a smoke-like flow, and the other is the total-layer avalanche, which shows a flow like a Bingham fluid. In the simulation of the surface avalanche, a particle method incorporating a rotation-resistance model is used. A particle method with a Bingham-fluid model is used in the simulation of the total-layer avalanche. At t...
Efficient method for transport simulations in quantum cascade lasers
Directory of Open Access Journals (Sweden)
Maczka Mariusz
2017-01-01
Full Text Available An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine the selected transport parameters in a typical structure of a terahertz laser. Results were compared with those obtained for an infinite model as well as with other methods described in the literature.
A method of simulating and visualizing nuclear reactions
International Nuclear Information System (INIS)
Atwood, C.H.; Paul, K.M.
1994-01-01
Teaching nuclear reactions to students is difficult because the mechanisms are complex and directly visualizing them is impossible. As a teaching tool, the authors have developed a method of simulating nuclear reactions using colliding water droplets. Videotaping of the collisions, taken with a high-shutter-speed camera and run frame-by-frame, shows details of the collisions that are analogous to nuclear reactions. The methods for colliding the water drops and videotaping the collisions are shown.
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
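The standardized-residual resampling idea can be sketched as follows. This is a simplified version using plain sample moments rather than the robust estimators the paper proposes, with the classical Box's M / Bartlett-type statistic; names and defaults are illustrative:

```python
import numpy as np

def bartlett_stat(groups):
    """Box's M / Bartlett-type statistic for homogeneity of covariance
    matrices across groups (larger => more heterogeneous)."""
    k = len(groups)
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (ns.sum() - k)
    stat = (ns.sum() - k) * np.log(np.linalg.det(pooled))
    stat -= sum((n - 1) * np.log(np.linalg.det(c)) for n, c in zip(ns, covs))
    return stat

def resample_pvalue(groups, n_boot=200, seed=0):
    """Resample standardized residuals: center by group means and whiten
    by group covariances so all groups share second moments under the
    null, then permute to build the reference distribution."""
    rng = np.random.default_rng(seed)
    obs = bartlett_stat(groups)
    std = []
    for g in groups:
        w = np.linalg.inv(np.linalg.cholesky(np.cov(g, rowvar=False)))
        std.append((g - g.mean(0)) @ w.T)      # whitened residuals
    pool = np.vstack(std)
    ns = [g.shape[0] for g in groups]
    count = 0
    for _ in range(n_boot):
        idx = rng.permutation(pool.shape[0])
        parts, start = [], 0
        for n in ns:
            parts.append(pool[idx[start:start + n]])
            start += n
        if bartlett_stat(parts) >= obs:
            count += 1
    return (count + 1) / (n_boot + 1)
```

Because the pooled residuals all have (approximately) identity covariance, the resampled groups satisfy the null by construction, which is the property the centered-but-unstandardized approach lacks.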
3D spatially-adaptive canonical correlation analysis: Local and global methods.
Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Curran, Tim; Byrd, Richard; Nandy, Rajesh; Cordes, Dietmar
2018-04-01
Local spatially-adaptive canonical correlation analysis (local CCA) with spatial constraints has been introduced to fMRI multivariate analysis for improved modeling of activation patterns. However, current algorithms require complicated spatial constraints that have only been applied to 2D local neighborhoods, because the computational time would increase exponentially if the same method were applied to 3D spatial neighborhoods. In this study, an efficient and accurate line search sequential quadratic programming (SQP) algorithm has been developed to efficiently solve the 3D local CCA problem with spatial constraints. In addition, a spatially-adaptive kernel CCA (KCCA) method is proposed to increase the accuracy of fMRI activation maps. With oriented 3D spatial filters, anisotropic shapes can be estimated during the KCCA analysis of fMRI time courses. These filters are orientation-adaptive, leading to rotational invariance to better match arbitrarily oriented fMRI activation patterns, resulting in improved sensitivity of activation detection while significantly reducing spatial blurring artifacts. The kernel method in its basic form does not require any spatial constraints and analyzes the whole-brain fMRI time series to construct an activation map. Finally, we have developed a penalized kernel CCA model that involves spatial low-pass filter constraints to increase the specificity of the method. The kernel CCA methods are compared with the standard univariate method and with two different local CCA methods that were solved by the SQP algorithm. Results show that SQP is the most efficient algorithm to solve the local constrained CCA problem, and the proposed kernel CCA methods outperformed univariate and local CCA methods in detecting activations for both simulated and real fMRI episodic memory data. Copyright © 2017 Elsevier Inc. All rights reserved.
Wan, Renzhi; Zu, Yunxiao; Shao, Lin
2018-04-01
The blood echo signal obtained with medical ultrasound Doppler devices always includes a vascular wall pulsation signal. The traditional method to remove the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Ostensibly, this method uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, but in fact it is a kind of wavelet-threshold de-noising method, whose effect is not ideal. To achieve a better result, this paper proposes an improved method based on wavelet-coefficient correlation to separate the blood and wall signals, and verifies its validity through computer simulation.
Meshfree simulation of avalanches with the Finite Pointset Method (FPM)
Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios
2017-04-01
Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
Multi-level Correlates of Safer Conception Methods Awareness and ...
African Journals Online (AJOL)
Many people living with HIV desire childbearing, but low cost safer conception methods (SCM) such as timed unprotected intercourse (TUI) and manual ... including perceived willingness to use SCM, knowledge of respondent's HIV status, HIV-seropositivity, marriage and equality in decision making within the relationship.
Fast methods for spatially correlated multilevel functional data
Staicu, A.-M.; Crainiceanu, C. M.; Carroll, R. J.
2010-01-01
leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where
Correlates of the Rosenberg Self-Esteem Scale Method Effects
Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan
2006-01-01
Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…
International Nuclear Information System (INIS)
Hassenstein, A.; Richard, G.; Inhoffen, W.; Scholz, F.
2007-01-01
The new integration method (DIM) provides for the first time the anatomically precise integration of the OCT scan position into the angiogram (fluorescein angiography, FLA), using reference markers at corresponding vessel crossings. An exact correlation of angiographic and morphological pathological findings is therefore possible and leads to a better understanding of OCT and FLA. Occult findings in FLA were the patient group which profited most: occult leakages could gain additional information using DIM, such as serous detachment of the retinal pigment epithelium (RPE) in a topography. So far it was unclear whether the same localization in the lesion was examined by FLA and OCT, especially when different staff were performing and interpreting the examinations. Using DIM, this problem could be solved using objective markers. This technique is a prerequisite for follow-up examinations by OCT. Using DIM for an objective, reliable and precise correlation of OCT and FLA findings, it is now possible to provide the identical scan position in follow-up. Therefore, for follow-up in clinical studies it is mandatory to use DIM to improve the evidence-based statement of OCT and the quality of the study. (author) [de]
Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.
Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd
2018-02-01
There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.
A new method of spatio-temporal topographic mapping by correlation coefficient of K-means cluster.
Li, Ling; Yao, Dezhong
2007-01-01
It would be of the utmost interest to map correlated sources in the working human brain by Event-Related Potentials (ERPs). This work develops a new method to map correlated neural sources based on the time courses of the scalp ERP waveforms. The ERP data are first classified by k-means cluster analysis, and then the Correlation Coefficients (CC) between the original data of each electrode channel and the time course of each cluster centroid are calculated and utilized as the mapping variable on the scalp surface. With a normalized 4-concentric-sphere head model with radius 1, the performance of the method is evaluated on simulated data. The CC between four simulated sources (s1-s4) and the estimated cluster centroids (c1-c4), and the distances (Ds) between the scalp projection points of s1-s4 and those of c1-c4, are utilized as the evaluation indexes. Applied to four sources with two of them partially correlated (with maximum mutual CC = 0.4892), the CC (Ds) between s1-s4 and c1-c4 are larger (smaller) than 0.893 (0.108) for the tested noise levels (NSR), with clusters located at the left occipital, right occipital, and frontal areas. The estimated vectors of the contra-occipital area demonstrate that attention to the stimulus location produces increased amplitude of the P1 and N1 components over the contra-occipital scalp. The estimated vector in the frontal area displays two large processing-negativity waves around 100 ms and 250 ms when subjects are attentive, and a small negative wave around 140 ms and a P300 when subjects are inattentive. The results for simulated and real Visual Evoked Potentials (VEPs) data demonstrate the validity of the method in mapping correlated sources. This method may be an objective, heuristic and important tool to study the properties of cerebral neural networks in cognitive and clinical neurosciences.
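The core of the method — cluster the multichannel waveforms with k-means, then use the correlation coefficient between each channel's time course and each cluster centroid as the mapping variable — can be sketched on synthetic data (this is illustrative only, not the paper's 4-concentric-sphere forward model):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means on the rows of X (channels x time samples)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

def cc_maps(X, centroids):
    """Correlation coefficient between each channel's time course and
    each cluster centroid -- the mapping variable of the method."""
    Xc = X - X.mean(1, keepdims=True)
    Cc = centroids - centroids.mean(1, keepdims=True)
    num = Xc @ Cc.T
    den = np.outer(np.linalg.norm(Xc, axis=1), np.linalg.norm(Cc, axis=1))
    return num / den
```

Each row of the returned matrix gives one channel's correlation with every centroid; plotted over electrode positions, one column is a topographic map for that cluster.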
Baumgärtel, M.; Ghanem, K.; Kiani, A.; Koch, E.; Pavarini, E.; Sims, H.; Zhang, G.
2017-07-01
We discuss the efficient implementation of general impurity solvers for dynamical mean-field theory. We show that both Lanczos and quantum Monte Carlo in different flavors (Hirsch-Fye, continuous-time hybridization- and interaction-expansion) exhibit excellent scaling on massively parallel supercomputers. We apply these algorithms to simulate realistic model Hamiltonians including the full Coulomb vertex, crystal-field splitting, and spin-orbit interaction. We discuss how to remove the sign problem in the presence of non-diagonal crystal-field and hybridization matrices. We show how to extract the physically observable quantities from imaginary time data, in particular correlation functions and susceptibilities. Finally, we present benchmarks and applications for representative correlated systems.
Correlated volume-energy fluctuations of phospholipid membranes: A simulation study
DEFF Research Database (Denmark)
Pedersen, Ulf. R.; Peters, Günther H.J.; Schröder, Thomas B.
2010-01-01
This paper reports all-atom computer simulations of five phospholipid membranes (DMPC, DPPC, DMPG, DMPS, and DMPSH) with focus on the thermal equilibrium fluctuations of volume, energy, area, thickness, and chain order. At constant temperature and pressure, volume and energy exhibit strong...... membranes, showing a similar picture. The cause of the observed strong correlations is identified by splitting volume and energy into contributions from tails, heads, and water, and showing that the slow volume−energy fluctuations derive from van der Waals interactions of the tail region; they are thus...
Cross-correlation between EMG and center of gravity during quiet stance: theory and simulations.
Kohn, André Fabio
2005-11-01
Several signal processing tools have been employed in the experimental study of the postural control system in humans. Among them, the cross-correlation function has been used to analyze the time relationship between signals such as the electromyogram and the horizontal projection of the center of gravity. The common finding is that the electromyogram precedes the biomechanical signal, a result that has been interpreted in different ways, for example, the existence of feedforward control or the preponderance of a velocity feedback. It is shown here, analytically and by simulation, that the cross-correlation function is dependent in a complicated way on system parameters and on noise spectra. Results similar to those found experimentally, e.g., electromyogram preceding the biomechanical signal may be obtained in a postural control model without any feedforward control and without any velocity feedback. Therefore, correct interpretations of experimentally obtained cross-correlation functions may require additional information about the system. The results extend to other biomedical applications where two signals from a closed loop system are cross-correlated.
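The underlying computation is a plain normalized cross-correlation whose peak lag indicates which signal leads. A sketch (equal-length signals assumed; with this convention a positive lag means the first signal leads the second):

```python
import numpy as np

def xcorr_lag(x, y, fs=100.0):
    """Normalized cross-correlation of two equal-length signals.
    Returns (lag in seconds at the correlation peak, peak value);
    positive lag => x leads y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    c = np.correlate(y, x, mode="full") / len(x)
    lags = np.arange(-len(x) + 1, len(x))   # sample lags for 'full' mode
    k = lags[c.argmax()]
    return k / fs, c.max()
```

Applied to an EMG envelope and a center-of-gravity trace, this is exactly the measurement whose interpretation the paper cautions about: the peak lag depends on loop parameters and noise spectra, not only on the control structure.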
Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr
2006-01-01
In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and the wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows a quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used. First, the continuous wavelet transforms of the uterine contraction signals were computed, and afterwards a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying
2016-01-01
Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant
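The step from a precision (inverse covariance) matrix to partial correlations is standard: r_ij = -p_ij / sqrt(p_ii * p_jj). In the sketch below a plain matrix inverse stands in for the sparse CLIME estimator the paper uses, so it is only suitable when samples comfortably outnumber nodes:

```python
import numpy as np

def partial_correlation(precision):
    """Convert a precision (inverse covariance) matrix to the partial
    correlation matrix: r_ij = -p_ij / sqrt(p_ii * p_jj)."""
    d = np.sqrt(np.diag(precision))
    r = -precision / np.outer(d, d)
    np.fill_diagonal(r, 1.0)
    return r

def partial_corr_from_data(X):
    """Naive pipeline (assumed stand-in for CLIME): invert the sample
    covariance of X (observations x variables)."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    return partial_correlation(prec)
```

The characteristic effect the abstract describes is visible even in a toy chain x1 → x2 → x3: the marginal correlation between x1 and x3 is large, but the partial correlation, which conditions on x2, is near zero.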
Modified network simulation model with token method of bus access
Directory of Open Access Journals (Sweden)
L.V. Stribulevich
2013-08-01
Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of it was developed. Methodology. The network characteristics are determined with the developed simulation model, which is based on a state diagram of a network station with a priority-handling mechanism, both in steady state and during the control procedures: initiation of the logical ring, and the entrance to and exit of a station from the logical ring. Findings. A simulation model was developed from which one can obtain the dependencies of the maximum waiting time in the queue for different access classes, and of the reaction time and usable bandwidth on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects the network's operation in steady state and during the control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model allows defining network characteristics of real-time systems in railway transport.
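A far simpler token-bus model than the paper's state-diagram one can still expose the qualitative dependence of queue waiting time on load. The sketch below assumes Poisson arrivals, fixed frame-transmission and token-pass times, and a fixed frame limit per token holding; all parameter names and defaults are illustrative:

```python
import random
from collections import deque

def simulate_token_bus(n_stations=5, arrival_rate=0.02, frame_time=1.0,
                       token_time=0.1, frames_per_token=2,
                       sim_time=10000.0, seed=0):
    """Minimal token-bus model: the token visits stations in a logical
    ring; each holder sends at most `frames_per_token` queued frames.
    Returns the mean frame waiting time (arrival -> start of service)."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_stations)]
    # Pre-generate Poisson arrival times for each station.
    arrivals = []
    for _ in range(n_stations):
        t, times = 0.0, []
        while t < sim_time:
            t += rng.expovariate(arrival_rate)
            times.append(t)
        arrivals.append(deque(times))
    clock, station, waits = 0.0, 0, []
    while clock < sim_time:
        # Enqueue every arrival that happened before the token got here.
        for s in range(n_stations):
            while arrivals[s] and arrivals[s][0] <= clock:
                queues[s].append(arrivals[s].popleft())
        for _ in range(frames_per_token):
            if not queues[station]:
                break
            waits.append(clock - queues[station].popleft())
            clock += frame_time
        clock += token_time               # pass the token onward
        station = (station + 1) % n_stations
    return sum(waits) / len(waits) if waits else 0.0
```

Raising the per-station arrival rate lengthens the token cycle (more frames served per rotation), which in turn raises the mean waiting time, the kind of dependency the paper's model extracts for different access classes.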
Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)
Enayatpour, Saeid; van Oort, Eric; Patzek, Tadeusz
2018-01-01
Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.
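Cohesive-zone formulations hinge on a traction-separation law; a bilinear form (linear loading to peak traction, then linear softening to zero traction at full decohesion) is a common choice. A sketch with illustrative parameter names, not taken from the paper:

```python
def bilinear_cohesive_traction(delta, delta0, delta_f, t_max):
    """Bilinear traction-separation law used in cohesive-zone models:
    linear loading to (delta0, t_max), then linear softening to zero
    traction at delta_f (full decohesion). `delta` is the crack-face
    opening; units must be consistent across all arguments."""
    if delta <= 0.0:
        return 0.0                                   # closed, no opening traction
    if delta < delta0:
        return t_max * delta / delta0                # elastic loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                       # fully decohered
```

The area under this curve is the fracture energy, which is the material input that lets CZM predict how far a thermally induced fracture propagates.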
Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)
Enayatpour, Saeid
2018-05-17
Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.
Simulation of granular and gas-solid flows using discrete element method
Boyalakuntla, Dhanunjay S.
2003-10-01
In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using a combination of computational fluid dynamics (CFD) techniques and discrete element simulation (DES) methods. Many previous studies of coupled gas-solid flows have been performed by assuming the solid phase to be a continuum with averaged properties and treating the gas-solid flow as consisting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. Benchmark 2D
Hybrid numerical methods for multiscale simulations of subsurface biogeochemical processes
International Nuclear Information System (INIS)
Scheibe, T D; Tartakovsky, A M; Tartakovsky, D M; Redden, G D; Meakin, P
2007-01-01
Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools have been developed, each with its own characteristic scale. Important examples include 1. molecular simulations (e.g., molecular dynamics); 2. simulation of microbial processes at the cell level (e.g., cellular automata or particle individual-based models); 3. pore-scale simulations (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics); and 4. macroscopic continuum-scale simulations (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization. This paper describes the specific methods/codes being used at each
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena
Directory of Open Access Journals (Sweden)
Erkai Watson
2017-04-01
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
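The core of a DEM model like the one described is the pairwise interaction between discrete spherical particles. The sketch below is not the paper's interaction potential; it uses a simple linear-spring contact law with an illustrative stiffness, assuming only that overlapping spheres repel along the line of centers:

```python
import numpy as np

def contact_forces(pos, radii, k=1.0e4):
    """Linear-spring contact forces between overlapping spheres (toy DEM).

    pos   : (N, 3) particle centers
    radii : (N,) particle radii
    k     : contact stiffness (illustrative value, not from the paper)
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:                 # spheres in contact
                normal = d / dist
                f = k * overlap * normal      # repulsive linear contact law
                forces[i] -= f                # Newton's third law
                forces[j] += f
    return forces

# two overlapping unit spheres push apart along x
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
f = contact_forces(pos, np.array([1.0, 1.0]))
```

In a full DEM time loop these forces would be fed to an explicit integrator such as velocity Verlet, with a neighbor list replacing the O(N^2) double loop.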
Experiences using DAKOTA stochastic expansion methods in computational simulations.
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan; Ruthruff, Joseph R.
2012-01-01
Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.
2014-11-01
We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
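A common way to generate the simulated light curves with a simple power-law power spectral density used in such Monte Carlo significance tests is to draw Fourier amplitudes from the target PSD (in the spirit of Timmer & König 1995). The following is a minimal sketch; the sampling step and the power-law index beta are illustrative parameters, not values from the paper:

```python
import numpy as np

def simulate_powerlaw_lightcurve(n, dt, beta, rng=None):
    """Zero-mean Gaussian time series with PSD P(f) ~ f**(-beta) (sketch)."""
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n, dt)[1:]            # positive frequencies only
    amp = freqs ** (-beta / 2.0)                  # |FT| ~ sqrt(PSD)
    re = rng.standard_normal(len(freqs)) * amp    # random Fourier components
    im = rng.standard_normal(len(freqs)) * amp
    spec = np.concatenate(([0.0], re + 1j * im))  # zero DC term -> zero mean
    return np.fft.irfft(spec, n)

lc = simulate_powerlaw_lightcurve(1024, 1.0, beta=2.0, rng=42)
```

Steeper beta gives the "redder" light curves for which spurious cross-correlations between unrelated series are most common, which is exactly the effect the significance test has to calibrate against.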
From fuel cells to batteries: Synergies, scales and simulation methods
Bessler, Wolfgang G.
2011-01-01
The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...
Clark, Michael D; Morris, Kenneth R; Tomassone, Maria Silvina
2017-09-12
We present a novel simulation-based investigation of the nucleation of nanodroplets from solution and from vapor. Nucleation is difficult to measure or model accurately, and predicting when nucleation should occur remains an open problem. Of specific interest is the "metastable limit", the observed concentration at which nucleation occurs spontaneously, which cannot currently be estimated a priori. To investigate the nucleation process, we employ gauge-cell Monte Carlo simulations to target spontaneous nucleation and measure thermodynamic properties of the system at nucleation. Our results reveal a widespread correlation over 5 orders of magnitude of solubilities, in which the metastable limit depends exclusively on solubility and the number density of generated nuclei. This three-way correlation is independent of other parameters, including intermolecular interactions, temperature, molecular structure, system composition, and the structure of the formed nuclei. Our results have great potential to further the prediction of nucleation events using easily measurable solute properties alone and to open new doors for further investigation.
International Nuclear Information System (INIS)
Schleier, W.; Besold, G.; Heinz, K.
1992-01-01
The authors study the applicability of parallelized/vectorized Monte Carlo (MC) algorithms to the simulation of domain growth in two-dimensional lattice gas models undergoing an ordering process after a rapid quench below an order-disorder transition temperature. As examples they consider models with 2 x 1 and c(2 x 2) equilibrium superstructures on the square and rectangular lattices, respectively. They also study the case of phase separation ('1 x 1' islands) on the square lattice. A generalized parallel checkerboard algorithm for Kawasaki dynamics is shown to give rise to artificial spatial correlations in all three models. However, only if superstructure domains evolve do these correlations modify the kinetics by influencing the nucleation process and result in a reduced growth exponent compared to the value from the conventional heat bath algorithm with random single-site updates. In order to overcome these artificial modifications, two MC algorithms with a reduced degree of parallelism ('hybrid' and 'mask' algorithms, respectively) are presented and applied. As the results indicate, these algorithms are suitable for the simulation of superstructure domain growth on parallel/vector computers. 60 refs., 10 figs., 1 tab
Application of subset simulation methods to dynamic fault tree analysis
International Nuclear Information System (INIS)
Liu Mengyun; Liu Jingquan; She Ding
2015-01-01
Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it was recently criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFTs has attracted rising attention, because it can model the authentic behaviors of systems and avoids the limitations of analytical methods. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rules for logic gates. When calculating rare-event probabilities, a large number of simulations is required in standard MCS. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov chain Monte Carlo (MCMC) technique, the SS method is able to accelerate the exploration of the failure region. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
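Subset simulation expresses a small failure probability as a product of larger conditional probabilities, each estimated from MCMC samples conditioned on the previous intermediate failure level. Below is a minimal sketch for a standard-normal input and a scalar limit-state function g (failure when g > 0); the level probability p0, proposal scale, and sample size are illustrative choices, not values from the paper:

```python
import numpy as np

def subset_simulation(g, n=2000, p0=0.1, max_levels=10, rng=None):
    """Estimate P(g(X) > 0) for X ~ N(0,1) via subset simulation (sketch)."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n)
    p = 1.0
    for _ in range(max_levels):
        y = g(x)
        thresh = np.quantile(y, 1.0 - p0)       # adaptive intermediate level
        if thresh >= 0.0:                       # failure region reached
            return p * np.mean(y > 0.0)
        p *= p0
        seeds = x[y > thresh]
        # regrow the population to n with one-component Metropolis moves,
        # targeting N(0,1) conditioned on g > thresh
        chains = []
        per_seed = int(np.ceil(n / len(seeds)))
        for s in seeds:
            cur = s
            for _ in range(per_seed):
                cand = cur + rng.normal(scale=1.0)
                if (rng.random() < np.exp(0.5 * (cur**2 - cand**2))
                        and g(cand) > thresh):
                    cur = cand
                chains.append(cur)
        x = np.array(chains[:n])
    return p

# failure event {X > 3}: exact probability is about 1.35e-3
p_hat = subset_simulation(lambda x: x - 3.0, n=2000, rng=1)
```

Each level only has to estimate a probability of order p0, so the total number of samples grows roughly logarithmically with 1/P(F) instead of linearly as in standard MCS.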
A computer method for simulating the decay of radon daughters
International Nuclear Information System (INIS)
Hartley, B.M.
1988-01-01
The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of disintegration. The disintegration of radioactive atoms is said to be random, but this random behaviour is such that for a single species the times of disintegration form an ensemble with a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon-222 and the emission of alpha particles from polonium-218 and polonium-214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium-218 is not independent of the time of decay of the subsequent polonium-214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculation of exposure
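The simulation idea described, drawing a random disintegration time for each atom as it passes down the chain and counting the alpha emissions that fall inside a counting window, can be sketched as follows. The continuous exponential distribution is used here as the limit of the geometric distribution mentioned in the abstract; the half-lives are approximate literature values and the 10-minute counting window is an illustrative choice:

```python
import math
import random

# approximate half-lives in minutes (illustrative literature values)
T_HALF = {"Po218": 3.05, "Pb214": 26.8, "Bi214": 19.7, "Po214": 2.7e-6}
CHAIN = ["Po218", "Pb214", "Bi214", "Po214"]
ALPHA = {"Po218", "Po214"}            # alpha-emitting members of the chain

def simulate_atom(rng, t_count=10.0):
    """Follow one Po-218 atom down the chain; return the number of alpha
    particles emitted within the counting interval [0, t_count] minutes."""
    t, alphas = 0.0, 0
    for nuclide in CHAIN:
        tau = T_HALF[nuclide] / math.log(2.0)   # mean life from half-life
        t += rng.expovariate(1.0 / tau)         # random disintegration time
        if t > t_count:                         # decays after the window
            break
        if nuclide in ALPHA:
            alphas += 1
    return alphas

rng = random.Random(0)
counts = [simulate_atom(rng) for _ in range(5000)]
mean_alphas = sum(counts) / len(counts)
```

Repeating the whole ensemble run many times, as the abstract describes, gives the spread of the alpha count and hence the statistical uncertainty of an exposure measurement, which is hard to obtain analytically because the Po-214 decay time is conditioned on the earlier decays.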
International Nuclear Information System (INIS)
Zhang Songbai; Wu Jun; Zhu Jianyu; Tian Dongfeng; Xie Dong
2011-01-01
Active time correlation coincidence measurement of neutrons is an effective verification means to authenticate uranium metal. A collimated 252Cf neutron source was used to investigate the mass and enrichment of uranium metal: neutron transport simulations were performed for different enrichments and masses of uranium metal, and the corresponding time correlation coincidence counts were obtained. By analyzing the characteristics of the time correlation coincidence counts, monotone relationships were found between the FWTH of the time correlation coincidence and the multiplication factor, between the total coincidence counts in the FWTH and the mass of 235U multiplied by the multiplication factor, and between the neutron source penetration ratio and the mass of uranium metal. Thus a methodology to authenticate the mass and enrichment of uranium metal was established using time correlation coincidence by active neutron investigation. (authors)
Dynamical correlations in finite nuclei: A simple method to study tensor effects
International Nuclear Information System (INIS)
Dellagiacoma, F.; Orlandini, G.; Traini, M.
1983-01-01
Dynamical correlations are introduced in finite nuclei by changing the two-body density through a phenomenological method. The role of tensor and short-range correlations in nuclear momentum distribution, electric form factor and two-body density of 4 He is investigated. The importance of induced tensor correlations in the total photonuclear cross section is reinvestigated providing a successful test of the method proposed here. (orig.)
Riem, N; Boet, S; Bould, M D; Tavares, W; Naik, V N
2012-11-01
Both technical skills (TS) and non-technical skills (NTS) are key to ensuring patient safety in acute care practice and effective crisis management. These skills are often taught and assessed separately. We hypothesized that TS and NTS are not independent of each other, and we aimed to evaluate the relationship between TS and NTS during a simulated intraoperative crisis scenario. This study was a retrospective analysis of performances from previously published work. After institutional ethics approval, 50 anaesthesiology residents managed a simulated crisis scenario of an intraoperative cardiac arrest secondary to a malignant arrhythmia. We used a modified Delphi approach to design a TS checklist, specific to the management of a malignant arrhythmia requiring defibrillation. All scenarios were recorded. Each performance was analysed by four independent experts. For each performance, two experts independently rated the technical performance using the TS checklist, and two other experts independently rated NTS using the Anaesthetists' Non-Technical Skills score. TS and NTS were significantly correlated with each other (r=0.45, P<0.05). During a simulated 5 min resuscitation requiring crisis resource management, our results indicate that TS and NTS are related to one another. This research provides the basis for future studies evaluating the nature of this relationship, the influence of NTS training on the performance of TS, and whether NTS are generic and transferable between crises that require different TS.
Atmosphere Re-Entry Simulation Using Direct Simulation Monte Carlo (DSMC) Method
Directory of Open Access Journals (Sweden)
Francesco Pellicani
2016-05-01
Aerothermodynamic investigations of hypersonic re-entry vehicles provide fundamental information to other important disciplines, such as materials and structures, assisting the development of thermal protection systems (TPS) that are efficient and lightweight. In the transitional flow regime, where thermal and chemical equilibrium is almost absent, a suitable numerical method for such studies is the direct simulation Monte Carlo (DSMC) technique. The acceptance and applicability of the DSMC method have increased significantly in the 50 years since its invention, thanks to the increase in computer speed and to parallel computing. However, further verification and validation efforts are needed to lead to its greater acceptance. In this study, the Monte Carlo simulators OpenFOAM and Sparta have been studied and benchmarked against numerical and theoretical data for inert and chemically reactive flows; the same will be done against experimental data in the near future. The results show the validity of the data found with the DSMC. The best settings of the fundamental parameters used by a DSMC simulator are presented for each software package, and they are compared with the guidelines deriving from the theory behind the Monte Carlo method. In particular, the number of particles per cell was found to be the most relevant parameter for achieving valid and optimized results. It is shown that a simulation with a mean value of one particle per cell gives sufficiently good results with very low computational resources. This finding prompts a reconsideration of the appropriate investigation method in the transitional regime, where both the direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) approaches can work, but with different computational effort.
A particle finite element method for machining simulations
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM-algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested by a simple example. Also the kinematics and a suitable finite element formulation are introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results of process parameters, such as the cutting force.
Some new results on correlation-preserving factor scores prediction methods
Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.
1999-01-01
Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.
Multigrid Methods for Fully Implicit Oil Reservoir Simulation
Molenaar, J.
1996-01-01
In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
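The nonlinear algebraic system produced by each fully implicit time step is typically solved with Newton's method, where the dominant cost is the repeated linear solve (the part that multigrid is proposed to accelerate). A minimal sketch on a toy 2x2 system, with a dense direct solve standing in for the multigrid or conjugate-gradient solver of a real simulator:

```python
import numpy as np

def newton_solve(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0, the workhorse of a fully implicit step."""
    x = np.asarray(x0, float).copy()
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x
        # in a reservoir simulator this linear solve is the costly part
        x -= np.linalg.solve(jac(x), r)
    return x

# toy 2x2 nonlinear system with root (1, 1)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton_solve(f, jac, [2.0, 0.5])
```

The two multigrid strategies discussed in the paper differ in where they enter this loop: linear multigrid replaces the `np.linalg.solve` step, while nonlinear (FAS) multigrid replaces the outer Newton iteration altogether.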
Radon movement simulation in overburden by the 'Scattered Packet Method'
International Nuclear Information System (INIS)
Marah, H.; Sabir, A.; Hlou, L.; Tayebi, M.
1998-01-01
The analysis of radon (222Rn) movement in overburden requires the solution of the general equation of transport in a porous medium, involving diffusion and convection. Generally this equation has been derived and solved analytically. The 'Scattered Packet Method' is a recent mathematical method of solution, initially developed for studies of electron movement in semiconductors. In this paper, we have adapted this method to simulate radon emanation in a porous medium. The key parameters are the radon concentration at the source, the diffusion coefficient, and the geometry. To show the efficiency of this method, several cases of increasing complexity are considered. This model makes it possible to follow the migration, in time and space, of the radon produced as a function of the characteristics of the studied site. Forty soil radon measurements were taken across a North Moroccan fault. Forward modeling of the radon anomalies produces satisfactory fits to the observed data and allows determination of the overburden thickness. (author)
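Whatever the solution technique, the underlying transport model is a diffusion-decay balance for the radon concentration. Below is a minimal explicit finite-difference sketch of the diffusion-decay part (convection omitted), with purely illustrative values for the diffusion coefficient, decay constant, and overburden depth; it is not the Scattered Packet Method itself:

```python
import numpy as np

def radon_profile(nx=200, L=5.0, D=2.0e-6, lam=2.1e-6, c_src=1.0, nsteps=20000):
    """1-D radon diffusion with radioactive decay through overburden (sketch).

    Explicit finite differences for dC/dt = D d2C/dx2 - lam*C, with fixed
    concentration c_src at the source (x=0) and C=0 at the surface (x=L).
    Parameter values are illustrative only (SI-like units, lam ~ Rn-222).
    """
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                     # explicit stability limit
    c = np.zeros(nx)
    c[0] = c_src
    for _ in range(nsteps):
        lap = (c[:-2] - 2 * c[1:-1] + c[2:]) / dx**2
        c[1:-1] += dt * (D * lap - lam * c[1:-1])
        c[0], c[-1] = c_src, 0.0             # re-impose boundary conditions
    return c

profile = radon_profile()
```

Because of the decay term, the concentration falls off over a characteristic length sqrt(D/lam), which is what makes the measured surface anomaly sensitive to the overburden thickness.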
Evaluation of null-point detection methods on simulation data
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
International Nuclear Information System (INIS)
Chang, J.; Sandler, S.I.
1995-01-01
The correlation functions of homonuclear hard-sphere chain fluids are studied using the Wertheim integral equation theory for associating fluids and the Monte Carlo simulation method. The molecular model used in the simulations is the freely jointed hard-sphere chain with spheres that are tangentially connected. In the Wertheim theory, such a chain molecule is described by sticky hard spheres with two independent attraction sites on the surface of each sphere. The OZ-like equation for this associating fluid is analytically solved using the polymer-PY closure and by imposing a single bonding condition. By equating the mean chain length of this associating hard sphere fluid to the fixed length of the hard-sphere chains used in simulation, we find that the correlation functions for the chain fluids are accurately predicted. From the Wertheim theory we also obtain predictions for the overall correlation functions that include intramolecular correlations. In addition, the results for the average intermolecular correlation functions from the Wertheim theory and from the Chiew theory are compared with simulation results, and the differences between these theories are discussed
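Measuring correlation functions from a Monte Carlo configuration, as done for the hard-sphere chain fluids above, amounts to histogramming pair distances under the minimum-image convention and normalizing by the ideal-gas expectation. A minimal sketch, tested here on an ideal-gas (uniformly random) configuration where g(r) should be close to 1; the box size and particle count are illustrative:

```python
import numpy as np

def pair_correlation(pos, box, nbins=50, rmax=None):
    """Radial distribution function g(r) for a periodic cubic box (sketch)."""
    n = len(pos)
    rmax = rmax if rmax is not None else box / 2.0
    edges = np.linspace(0.0, rmax, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < rmax], bins=edges)[0]
    rho = n / box**3
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell * n / 2.0             # expected pair counts, ideal gas
    return 0.5 * (edges[:-1] + edges[1:]), hist / ideal

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 10.0, size=(500, 3))   # ideal-gas configuration
r, g = pair_correlation(pos, box=10.0)
```

For a chain fluid one would accumulate the same histogram over many MC snapshots, and split it into intra- and intermolecular parts to compare with the Wertheim-theory predictions discussed above.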
Numerical simulation of jet breakup behavior by the lattice Boltzmann method
International Nuclear Information System (INIS)
Matsuo, Eiji; Koyama, Kazuya; Abe, Yutaka; Iwasawa, Yuzuru; Ebihara, Ken-ichi
2015-01-01
In order to understand the jet breakup behavior of molten core material in coolant during a core disruptive accident (CDA) in a sodium-cooled fast reactor (SFR), we simulated the jet breakup due to hydrodynamic interaction using the lattice Boltzmann method (LBM). The applicability of the LBM to the jet breakup simulation was validated by comparison with our experimental data. In addition, the influence of several dimensionless numbers, such as the Weber number and the Froude number, was examined using the LBM. As a result, we validated the applicability of the LBM to the jet breakup simulation, and found that the jet breakup length is independent of the Froude number and in good agreement with Epstein's correlation when the jet interface becomes unstable. (author)
International Nuclear Information System (INIS)
Wegmann, K.; Brix, G.
2000-01-01
Purpose: Single photon transmission (SPT) measurements offer a new approach for the determination of attenuation correction factors (ACF) in PET. It was the aim of the present work to evaluate a scatter correction algorithm proposed by C. Watson by means of Monte Carlo simulations. Methods: SPT measurements with a Cs-137 point source were simulated for a whole-body PET scanner (ECAT EXACT HR+) in both the 2D and 3D mode. To examine the scatter fraction (SF) in the transmission data, the detected photons were classified as unscattered or scattered. The simulated data were used to determine (i) the spatial distribution of the SFs, (ii) an ACF sinogram from all detected events (ACF_tot), (iii) an ACF sinogram from the unscattered events only (ACF_unscattered), and (iv) a sinogram ACF_cor = (ACF_tot)^(1+κ) corrected according to the Watson algorithm. In addition, density images were reconstructed in order to quantitatively evaluate linear attenuation coefficients. Results: A high correlation was found between the SF and the ACF_tot sinograms. For the cylinder and the EEC phantom, similar correction factors κ were estimated. The determined values resulted in an accurate scatter correction in both the 2D and 3D mode. (orig.)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angular undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
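The channel-wise total variation baseline mentioned above can be illustrated on a single 2D channel. The sketch below minimizes a smoothed TV functional by plain gradient descent; it is not the parallel-level-sets method of the paper, and the regularization weight, smoothing parameter eps, and step size are illustrative:

```python
import numpy as np

def tv_denoise_2d(img, lam=0.1, step=0.1, iters=200, eps=1e-2):
    """Smoothed total-variation denoising by gradient descent (sketch).

    Minimizes 0.5*||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps).
    """
    u = img.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=0, append=u[-1:, :])     # forward differences
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag                    # normalized gradient field
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u -= step * ((u - img) - lam * div)            # gradient step
    return u

rng = np.random.default_rng(5)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0    # piecewise-constant phantom
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise_2d(noisy)
```

Correlative methods such as total nuclear variation or the paper's parallel level sets extend this idea by coupling the gradient fields of all energy channels, so that edge locations found in high-SNR channels guide the smoothing of noisy ones.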
International Nuclear Information System (INIS)
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: functions of n variables have to be minimized in a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We propose, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods from the combinatorial optimization domain (the threshold method, a genetic algorithm and the tabu search method). The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
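The basic simulated annealing loop, Metropolis acceptance at a slowly decreasing temperature with box-clipped perturbations, can be sketched as follows. The cooling schedule and move scale are illustrative, and the objective is a standard multimodal benchmark (global minimum 0 at the origin) standing in for a circuit cost function evaluated by the simulator:

```python
import math
import random

def simulated_annealing(f, x0, lo, hi, t0=10.0, alpha=0.95,
                        iters_per_t=100, t_min=1e-3, rng=None):
    """Minimize f over the box [lo, hi]^n with basic simulated annealing."""
    rng = rng or random.Random()
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            # perturb one randomly chosen variable, clipped to the box
            cand = list(x)
            i = rng.randrange(len(x))
            cand[i] = min(hi, max(lo, cand[i] + rng.gauss(0.0, t)))
            fc = f(cand)
            # Metropolis rule: always accept improvements, sometimes accept worse
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= alpha                      # geometric cooling schedule
    return best, fbest

# multimodal 2-D test function (Rastrigin-type), global minimum 0 at (0, 0)
f = lambda v: sum(vi * vi - 10.0 * math.cos(2.0 * math.pi * vi) + 10.0 for vi in v)
best, fbest = simulated_annealing(f, [4.0, -3.0], -5.12, 5.12,
                                  rng=random.Random(7))
```

In the circuit setting described above, each call to f would trigger a SPICE-PAC simulation, which is why the number of function evaluations, governed by the cooling schedule, dominates the CPU time.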
A fast mollified impulse method for biomolecular atomistic simulations
Energy Technology Data Exchange (ETDEWEB)
Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)
2017-03-15
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.
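The corotational filter itself is not reproduced here, but the impulse (r-RESPA) multiple-time-stepping skeleton that mollified methods build on can be sketched for a 1-D toy system with a stiff harmonic fast force and a weak anharmonic slow force; all parameter values are illustrative assumptions:

```python
import math

def impulse_integrator(q, p, dt_slow, n_outer, m_inner, f_fast, f_slow):
    """Impulse (r-RESPA) splitting: the slow force is applied as half 'kicks'
    at the outer step, while the fast force is resolved by m_inner velocity-
    Verlet steps with a smaller time step (unit mass assumed)."""
    dt = dt_slow / m_inner
    for _ in range(n_outer):
        p += 0.5 * dt_slow * f_slow(q)          # outer half-kick (slow force)
        for _ in range(m_inner):                 # inner loop: fast force only
            p += 0.5 * dt * f_fast(q)
            q += dt * p
            p += 0.5 * dt * f_fast(q)
        p += 0.5 * dt_slow * f_slow(q)          # outer half-kick
    return q, p

# Stiff spring (fast) plus a weak quartic background force (slow).
k_fast, k_slow = 100.0, 0.5
f_fast = lambda q: -k_fast * q
f_slow = lambda q: -k_slow * q**3

def energy(q, p):
    return 0.5 * p**2 + 0.5 * k_fast * q**2 + 0.25 * k_slow * q**4

q0, p0 = 1.0, 0.0
e0 = energy(q0, p0)
q1, p1 = impulse_integrator(q0, p0, dt_slow=0.05, n_outer=400, m_inner=10,
                            f_fast=f_fast, f_slow=f_slow)
drift = abs(energy(q1, p1) - e0) / e0
```

The outer step here is kept away from half the fast oscillation period; the resonance instabilities mentioned in the abstract appear precisely when the outer step approaches such multiples, which is what the mollifying filters are designed to suppress.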
Quiroga-Lombard, Claudio S; Hass, Joachim; Durstewitz, Daniel
2013-07-01
Correlations among neurons are thought to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then "slicing" spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Apart from removing nonstationarities, the present method may also be used for detecting significant events in spike trains.
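A toy version of the slicing idea can be sketched as follows. The segment length and the decomposition shown are illustrative assumptions; the paper's exact PCC correction formula is not reproduced here:

```python
import statistics as st

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = st.fmean(x), st.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (st.stdev(x) * st.stdev(y))

def segmented_pcc(counts_a, counts_b, seg_len):
    """Per-segment Pearson cross-correlations of binned spike counts, plus the
    covariance of the segment-wise firing rates (the residual-nonstationarity term)."""
    segs = range(0, len(counts_a) - seg_len + 1, seg_len)
    pccs = [pearson(counts_a[s:s + seg_len], counts_b[s:s + seg_len]) for s in segs]
    rates_a = [st.fmean(counts_a[s:s + seg_len]) for s in segs]
    rates_b = [st.fmean(counts_b[s:s + seg_len]) for s in segs]
    ma, mb = st.fmean(rates_a), st.fmean(rates_b)
    rate_cov = sum((a - ma) * (b - mb)
                   for a, b in zip(rates_a, rates_b)) / (len(rates_a) - 1)
    return pccs, rate_cov

# Two neurons whose firing rates jump together between two stationary segments,
# while their bin-by-bin counts anti-correlate within each segment.
counts_a = [1, 0, 1, 0, 3, 2, 3, 2]
counts_b = [0, 1, 0, 1, 2, 3, 2, 3]
pccs, rate_cov = segmented_pcc(counts_a, counts_b, seg_len=4)
r_global = pearson(counts_a, counts_b)
```

In this constructed example the global PCC is positive (about 0.6) purely because both rates jump together, while the correlation within each stationary segment is -1; the segment-rate covariance isolates the nonstationary contribution.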
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a greater effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, where we need the past slip rates, leading to huge computational costs. This cost is one reason why there have been almost no simulations in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, first, we introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull, at a constant rate, the block obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. The smaller viscosity means the smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half
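The computational saving can be illustrated on a 1-D standard-linear-solid stress calculation under a constant strain rate (not the authors' fault solver; the modulus and relaxation-time values are arbitrary): the hereditary integral needs the whole rate history at every step, while a single memory variable obeying a first-order ODE reproduces it with O(1) state per step.

```python
import math

def relaxation_modulus(t, g_inf, g1, tau):
    """Standard linear solid: G(t) = G_inf + G1 * exp(-t/tau)."""
    return g_inf + g1 * math.exp(-t / tau)

def stress_hereditary(n_steps, dt, rate, g_inf, g1, tau):
    """sigma(t) = integral_0^t G(t-s) * (d eps/ds) ds, midpoint rule:
    O(N^2) work overall and all past strain rates must be stored."""
    out = []
    for k in range(1, n_steps + 1):
        t = k * dt
        s = sum(relaxation_modulus(t - (j + 0.5) * dt, g_inf, g1, tau) * rate * dt
                for j in range(k))
        out.append(s)
    return out

def stress_memory_variable(n_steps, dt, rate, g_inf, g1, tau):
    """Equivalent memory-variable form: sigma = G_inf*eps + h with
    dh/dt = -h/tau + G1 * (d eps/dt); O(N) work, no history storage."""
    eps, h, out = 0.0, 0.0, []
    decay = math.exp(-dt / tau)
    for _ in range(n_steps):
        eps += rate * dt
        h = h * decay + g1 * rate * tau * (1.0 - decay)   # exact one-step update
        out.append(g_inf * eps + h)
    return out

n_steps, dt = 200, 0.01
sig_h = stress_hereditary(n_steps, dt, rate=1.0, g_inf=1.0, g1=2.0, tau=0.3)
sig_m = stress_memory_variable(n_steps, dt, rate=1.0, g_inf=1.0, g1=2.0, tau=0.3)
max_rel_diff = max(abs(a - b) / abs(a) for a, b in zip(sig_h, sig_m))
```

For a constant strain rate the memory-variable recursion is exact, so both routines agree to discretization accuracy while the second avoids the hereditary sum entirely.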
Quantum Monte Carlo methods and strongly correlated electrons on honeycomb structures
Energy Technology Data Exchange (ETDEWEB)
Lang, Thomas C.
2010-12-16
In this thesis we apply recently developed, as well as sophisticated quantum Monte Carlo methods to numerically investigate models of strongly correlated electron systems on honeycomb structures. The latter are of particular interest owing to their unique properties when simulating electrons on them, like the relativistic dispersion, strong quantum fluctuations and their resistance against instabilities. This work covers several projects including the advancement of the weak-coupling continuous time quantum Monte Carlo and its application to zero temperature and phonons, quantum phase transitions of valence bond solids in spin-1/2 Heisenberg systems using projector quantum Monte Carlo in the valence bond basis, and the magnetic field induced transition to a canted antiferromagnet of the Hubbard model on the honeycomb lattice. The emphasis lies on two projects investigating the phase diagram of the SU(2) and the SU(N)-symmetric Hubbard model on the hexagonal lattice. At sufficiently low temperatures, condensed-matter systems tend to develop order. An exception is quantum spin liquids, where fluctuations prevent a transition to an ordered state down to the lowest temperatures. Such a state has previously been elusive in experimentally relevant microscopic two-dimensional models; by means of large-scale quantum Monte Carlo simulations of the SU(2) Hubbard model on the honeycomb lattice, we show that a quantum spin liquid emerges between the state described by massless Dirac fermions and an antiferromagnetically ordered Mott insulator. This unexpected quantum-disordered state is found to be a short-range resonating valence bond liquid, akin to the one proposed for high temperature superconductors. Inspired by the rich phase diagrams of SU(N) models we study the SU(N)-symmetric Hubbard Heisenberg quantum antiferromagnet on the honeycomb lattice to investigate the reliability of 1/N corrections to large-N results by means of numerically exact QMC simulations. We study the melting of phases
Amyloid oligomer structure characterization from simulations: A general method
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Li, Mai Suan [Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw (Poland); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France)
2014-03-07
Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. In this way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.
Limitations in simulator time-based human reliability analysis methods
International Nuclear Information System (INIS)
Wreathall, J.
1989-01-01
Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical
Growing correlation length on cooling below the onset of caging in a simulated glass-forming liquid
DEFF Research Database (Denmark)
Lačević, N.; Starr, F. W.; Schrøder, Thomas
2002-01-01
We present a calculation of a fourth-order, time-dependent density correlation function that measures higher-order spatiotemporal correlations of the density of a liquid. From molecular dynamics simulations of a glass-forming Lennard-Jones liquid, we find that the characteristic length scale...... of the dynamics of the liquid in the alpha-relaxation regime....
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
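The selection-and-rediagonalization loop of such deterministic approaches can be caricatured on an explicit matrix standing in for determinant space. Real implementations work with second-quantized Hamiltonians; the ranking rule |H_ij c_j| and the toy Hamiltonian below are illustrative assumptions:

```python
import numpy as np

def selected_ci(h, n_keep, n_iter=6):
    """ASCI-flavoured deterministic selection: keep a fixed-size set of
    'determinants' ranked by |sum_j H_ij c_j|, then rediagonalize the subspace."""
    core = [int(np.argmin(np.diag(h)))]          # seed with the lowest-diagonal state
    for _ in range(n_iter):
        sub = h[np.ix_(core, core)]
        c = np.linalg.eigh(sub)[1][:, 0]          # ground state of the subspace
        scores = np.abs(h[:, core] @ c)           # importance of every state
        core = sorted(int(i) for i in np.argsort(scores)[-n_keep:])
    sub = h[np.ix_(core, core)]
    return float(np.linalg.eigh(sub)[0][0]), core

# Toy 'Hamiltonian': diagonal-dominant symmetric matrix.
rng = np.random.default_rng(0)
n = 60
a = rng.normal(scale=0.05, size=(n, n))
h = (a + a.T) / 2 + np.diag(np.linspace(1.0, 7.0, n))
e_sel, core = selected_ci(h, n_keep=15)
e_exact = float(np.linalg.eigh(h)[0][0])
```

Because the subspace energy is variational, the selected-CI estimate lies above the exact ground-state energy and approaches it as more states are kept.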
Hardware-in-the-loop grid simulator system and method
Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos
2017-05-16
A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.
Simulating condensation on microstructured surfaces using Lattice Boltzmann Method
Alexeev, Alexander; Vasyliv, Yaroslav
2017-11-01
We simulate a single component fluid condensing on 2D structured surfaces with different wettability. To simulate the two phase fluid, we use the athermal Lattice Boltzmann Method (LBM) driven by a pseudopotential force. The pseudopotential force results in a non-ideal equation of state (EOS) which permits liquid-vapor phase change. To account for thermal effects, the athermal LBM is coupled to a finite volume discretization of the temperature evolution equation obtained using a thermal energy rate balance for the specific internal energy. We use the developed model to probe the effect of surface structure and surface wettability on the condensation rate in order to identify microstructure topographies promoting condensation. Financial support is acknowledged from Kimberly-Clark.
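The pseudopotential coupling can be sketched as a Shan-Chen-type interaction force on a D2Q9 lattice. The ψ(ρ) = 1 − exp(−ρ) form and the coupling constant below are common illustrative choices, not necessarily those used in this work, and the thermal coupling is omitted:

```python
import math

# D2Q9 lattice directions and weights
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9] * 4 + [1/36] * 4

def psi(rho):
    """Pseudopotential; this exponential form yields a non-ideal equation of state."""
    return 1.0 - math.exp(-rho)

def shan_chen_force(rho, g):
    """F(x) = -g * psi(x) * sum_i w_i * psi(x + e_i) * e_i (periodic boundaries).
    With g < 0 the interaction is attractive and can sustain liquid-vapor coexistence."""
    nx, ny = len(rho), len(rho[0])
    force = [[None] * ny for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            sx = sy = 0.0
            for (ex, ey), w in zip(E, W):
                p = psi(rho[(x + ex) % nx][(y + ey) % ny])
                sx += w * p * ex
                sy += w * p * ey
            force[x][y] = (-g * psi(rho[x][y]) * sx, -g * psi(rho[x][y]) * sy)
    return force

# Uniform density: no net force anywhere.
f_uniform = shan_chen_force([[0.8] * 8 for _ in range(8)], g=-5.0)

# Density bump at (4, 4): neighbouring sites are pulled towards it.
bump = [[0.2] * 8 for _ in range(8)]
bump[4][4] = 2.0
f_bump = shan_chen_force(bump, g=-5.0)
```

In a full LBM solver this force would enter the collision step as a velocity shift or forcing term; here only the force evaluation itself is shown.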
Simulation of crystalline pattern formation by the MPFC method
Directory of Open Access Journals (Sweden)
Starodumov Ilya
2017-01-01
The Phase Field Crystal model in hyperbolic formulation (modified PFC, or MPFC) is investigated as one of the most promising techniques for modeling the formation of crystal patterns. MPFC is a convenient and fundamentally based description linking nano- and meso-scale processes in the evolution of crystal structures. The presented model is a powerful tool for mathematical modeling of various operations in manufacturing. Among them are the definition of process conditions for the production of metal castings with predetermined properties, the prediction of defects in the crystal structure during casting, the evaluation of the quality of special coatings, and others. Our paper presents the structure diagram which was calculated for the one-mode MPFC model and compared to the results of numerical simulation for fast phase transitions. The diagram is verified by the numerical simulation and also agrees closely with previously calculated diagrams. The computations have been performed using software based on an effective parallel computational algorithm.
'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods
International Nuclear Information System (INIS)
Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.
2008-01-01
The techniques for data processing, combined with the development of faster and more powerful computers, make Monte Carlo methods one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. This paper used two computational exposure models, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO can be used in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostics'. We conclude that EGS4 is a suitable Monte Carlo code for simulating thermoluminescent dosimeters and the experimental procedures employed in the routine of the quality control laboratory in diagnostic radiology. (author)
Adaptive mesh refinement and adjoint methods in geophysics simulations
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problem. Although the model is extremely sensitive to the above parameters, no assumptions are made regarding linearization of the variables. The design of the models, which run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model.
PMID:29518121
Efficient SPECT scatter calculation in non-uniform media using correlated Monte Carlo simulation
International Nuclear Information System (INIS)
Beekman, F.J.
1999-01-01
Accurate simulation of scatter in projection data of single photon emission computed tomography (SPECT) is computationally extremely demanding for activity distributions in non-uniform dense media. This paper suggests how the computation time and memory requirements can be significantly reduced. First the scatter projection of a uniform dense object (P_SDSE) is calculated using a previously developed accurate and fast method which includes all orders of scatter (slab-derived scatter estimation), and then P_SDSE is transformed towards the desired projection P, which is based on the non-uniform object. The transform of P_SDSE is based on two first-order Compton scatter Monte Carlo (MC) simulated projections. One is based on the uniform object (P_u) and the other on the object with non-uniformities (P_v). P is estimated by P~ = P_SDSE * P_v / P_u. A tremendous decrease in noise in P~ is achieved by tracking photon paths for P_v identical to those which were tracked for the calculation of P_u, and by using analytical rather than stochastic modelling of the collimator. The method was validated by comparing the results with standard MC-simulated scatter projections (P) of 99mTc and 201Tl point sources in a digital thorax phantom. After correction, excellent agreement was obtained between P~ and P. The total computation time required to calculate an accurate scatter projection of an extended distribution in a thorax phantom on a PC is only a few tens of seconds per projection, which makes the method attractive for application in accurate scatter correction in clinical SPECT. Furthermore, the method removes the need for the excessive computer memory involved with previously proposed 3D model-based scatter correction methods. (author)
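The variance reduction obtained by reusing identical photon paths (common random numbers) can be demonstrated on a toy ratio estimate. The two integrands stand in for the uniform and non-uniform projections and are purely illustrative:

```python
import math
import random
import statistics as st

def ratio_estimates(n_batches, n_samples, correlated, seed=42):
    """Estimate the ratio E[g]/E[f] by Monte Carlo, either with common random
    numbers (same samples for numerator and denominator) or independently."""
    f = lambda x: math.exp(-x)             # stand-in for the 'uniform object'
    g = lambda x: math.exp(-1.2 * x)       # stand-in for the 'non-uniform object'
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_batches):
        xs_f = [rng.random() for _ in range(n_samples)]
        xs_g = xs_f if correlated else [rng.random() for _ in range(n_samples)]
        num = st.fmean(g(x) for x in xs_g)
        den = st.fmean(f(x) for x in xs_f)
        ratios.append(num / den)
    return ratios

corr = ratio_estimates(200, 100, correlated=True)
indep = ratio_estimates(200, 100, correlated=False)
spread_corr = st.stdev(corr)
spread_indep = st.stdev(indep)
```

Because numerator and denominator fluctuate together under common random numbers, their noise largely cancels in the ratio, which is the same mechanism that suppresses noise in the P_v/P_u correction factor above.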
Science classroom inquiry (SCI) simulations: a novel method to scaffold science learning.
Directory of Open Access Journals (Sweden)
Melanie E Peffer
Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI) simulations were designed to give students real life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students' self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study.
Science classroom inquiry (SCI) simulations: a novel method to scaffold science learning.
Peffer, Melanie E; Beckler, Matthew L; Schunn, Christian; Renken, Maggie; Revak, Amanda
2015-01-01
Simulation of Rossi-α method with analog Monte-Carlo method
International Nuclear Information System (INIS)
Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang
2012-01-01
An analog Monte-Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constant α of six metal uranium configurations at Oak Ridge National Laboratory was calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There are differences between the results of the analog Monte-Carlo simulation and the experiment; the reason for the differences is the gaps between uranium layers. The influence of the gaps decreases as the sub-criticality deepens. The relative difference between the results of the analog Monte-Carlo simulation and the experiment changes from 19% to 0.19%. (authors)
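The Rossi-α analysis step itself can be sketched as follows: synthetic time-interval data following p(t) = A·exp(−αt) + B are histogrammed, the accidental level B is estimated from the flat tail, and α is recovered from a log-linear fit of the early bins. All parameter values are illustrative, not those of the ORNL configurations:

```python
import math
import random

def rossi_alpha_fit(n_events=200000, alpha=50.0, frac_corr=0.6, t_max=0.5,
                    n_bins=50, seed=7):
    """Build a Rossi-alpha style time-interval histogram, subtract the flat
    accidental background, and fit the prompt decay constant alpha."""
    rng = random.Random(seed)
    counts = [0] * n_bins
    width = t_max / n_bins
    for _ in range(n_events):
        if rng.random() < frac_corr:            # correlated (chain) interval
            t = rng.expovariate(alpha)
            if t >= t_max:
                continue
        else:                                    # accidental background interval
            t = rng.random() * t_max
        counts[int(t / width)] += 1
    # accidental level B from the flat tail of the histogram
    tail_start = n_bins * 3 // 5
    b = sum(counts[tail_start:]) / (n_bins - tail_start)
    # log-linear least-squares fit on early bins where the exponential dominates
    xs, ys = [], []
    for i in range(9):
        excess = counts[i] - b
        if excess > 0:
            xs.append((i + 0.5) * width)
            ys.append(math.log(excess))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

alpha_est = rossi_alpha_fit()
```

The fitted slope of log(counts − B) against time recovers α to within statistical uncertainty; in practice the correlated fraction and background level depend on the detector and configuration.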
Reduction Methods for Real-time Simulations in Hybrid Testing
DEFF Research Database (Denmark)
Andersen, Sebastian
2016-01-01
Hybrid testing constitutes a cost-effective experimental full scale testing method. The method was introduced in the 1960s by Japanese researchers, as an alternative to conventional full scale testing and small scale material testing, such as shake table tests. The principle of the method...... is performed on a glass fibre reinforced polymer composite box girder. The test serves as a pilot test for prospective real-time tests on a wind turbine blade. The Taylor basis is implemented in the test, used to perform the numerical simulations. Despite a number of introduced errors in the real...... is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test. If the test is conducted in real-time it is referred to as real time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960s, both with respect
On the efficient simulation of the left-tail of the sum of correlated log-normal variates
Alouini, Mohamed-Slim
2018-04-04
The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions. These regions are of primordial interest as small probability values have to be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most of the existing approaches have focused on estimating the right-tail of the sum of log-normal random variables (RVs). Here, we instead consider the left-tail of the sum of correlated log-normal variates with Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of the left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performances of the proposed estimator in comparison with existing ones.
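A stripped-down version of mean-shifting importance sampling for the left tail can be sketched for i.i.d. lognormal summands (the paper treats the correlated Gaussian-copula case and adds a control variate on top; the shift values below are ad hoc choices):

```python
import math
import random

def left_tail_prob(gamma, n_dim, n_samples, shift=0.0, seed=123):
    """Estimate P(sum_i exp(Z_i) <= gamma) for Z_i iid N(0,1) by importance
    sampling with all Gaussian means shifted to `shift`; shift=0 is crude MC.
    Likelihood ratio: prod_i phi(z_i)/phi_shift(z_i) = exp(-shift*sum(z) + n*shift^2/2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        z = [rng.gauss(shift, 1.0) for _ in range(n_dim)]
        if sum(math.exp(v) for v in z) <= gamma:
            acc += math.exp(-shift * sum(z) + n_dim * shift ** 2 / 2)
    return acc / n_samples

# Moderate threshold: crude MC still works, so unbiasedness can be checked.
p_crude = left_tail_prob(gamma=4.0, n_dim=4, n_samples=200000)
p_is = left_tail_prob(gamma=4.0, n_dim=4, n_samples=200000, shift=-0.6)

# Rare threshold: crude MC sees almost no events, the shifted sampler still does.
p_rare = left_tail_prob(gamma=0.7, n_dim=4, n_samples=200000, shift=-1.8)
```

Shifting the means downward makes left-tail events common under the sampling distribution, while the likelihood-ratio weight keeps the estimator unbiased; the control variate of the paper would further reduce the weight variance.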
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills, let alone money and time, are scarce. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures
International Nuclear Information System (INIS)
Mejia-Barbosa, Y.
2000-03-01
We show a method for comparing and reconstructing two similar amplitude-only structures, which are composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm, which involves the auto-correlation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
The perturbed angular correlation method - a modern technique in studying solids
International Nuclear Information System (INIS)
Unterricker, S.; Hunger, H.J.
1979-01-01
Starting from theoretical fundamentals, the differential perturbed angular correlation method is explained. Using the probe nucleus ¹¹¹Cd, the magnetic dipole interaction in FeₓAl₁₋ₓ alloys and the electric quadrupole interaction in Cd have been measured. The perturbed angular correlation method is a modern nuclear measuring technique and can be applied to the study of ordering processes, phase transformations and radiation damage in metals, semiconductors and insulators
Rapid simulation of spatial epidemics: a spectral method.
Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J
2015-04-07
Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; and the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
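The computational trick described here — evaluating the force of infection as a convolution of the transmission kernel with the infection "image" via FFTs — can be sketched on a small lattice. The kernel shape, grid size and infection density below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
infection = (rng.random((n, n)) < 0.05).astype(float)  # 1 = infectious site

# Isotropic transmission kernel tabulated on all pairwise offsets
offsets = np.arange(-(n - 1), n)
X, Y = np.meshgrid(offsets, offsets)
kernel = np.exp(-np.hypot(X, Y) / 3.0)

def foi_fft(inf, ker):
    # Zero-pad so the circular FFT convolution equals the linear one,
    # then crop back to the original lattice.
    m = inf.shape[0] + ker.shape[0] - 1
    full = np.fft.irfft2(np.fft.rfft2(inf, (m, m)) * np.fft.rfft2(ker, (m, m)), (m, m))
    c = ker.shape[0] // 2
    return full[c:c + inf.shape[0], c:c + inf.shape[1]]

def foi_direct(inf, ker):
    # Brute-force sum over infectious sites, kept only for validation
    out = np.zeros_like(inf)
    c = ker.shape[0] // 2
    sites = list(zip(*np.nonzero(inf)))
    for i in range(inf.shape[0]):
        for j in range(inf.shape[1]):
            out[i, j] = sum(ker[c + i - k, c + j - l] for k, l in sites)
    return out

rates_fft = foi_fft(infection, kernel)
rates_slow = foi_direct(infection, kernel)
print(np.max(np.abs(rates_fft - rates_slow)))  # agreement up to round-off
```

The FFT version costs O(m² log m) per update regardless of how many sites are infectious, which is what makes continental-scale simulation feasible.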
Discrete vortex method simulations of aerodynamic admittance in bridge aerodynamics
DEFF Research Database (Denmark)
Rasmussen, Johannes Tophøj; Hejlesen, Mads Mølholm; Larsen, Allan
The meshless and remeshed Discrete Vortex Method (DVM) has been widely used in academia and by industry to model two-dimensional flow around bluff bodies. The implementation "DVMFLOW" [1] is used by the bridge design company COWI to determine and visualise the flow field around bridge sections, and to determine aerodynamic forces and the corresponding flutter limit. A simulation of the three-dimensional bridge response to turbulent wind is carried out by quasi-steady theory, modelling the bridge girder as a line-like structure [2] and applying the aerodynamic load coefficients found from the current version...
Numerical Simulation of Plasma Antenna with FDTD Method
International Nuclear Information System (INIS)
Chao, Liang; Yue-Min, Xu; Zhi-Jiang, Wang
2008-01-01
We adopt a cylindrical-coordinate FDTD algorithm to simulate and analyse a 0.4-m-long column configuration plasma antenna. The FDTD method is useful for solving electromagnetic problems, especially when wave characteristics and plasma properties are self-consistently related to each other. Focusing on frequencies from 75 MHz to 400 MHz, the input impedance and radiation efficiency of plasma antennas are computed. Numerical results show that, unlike for a copper antenna, the characteristics of a plasma antenna vary simultaneously with plasma frequency and collision frequency. This property can be used to construct dynamically reconfigurable antennas. The investigation is meaningful and instructional for the optimization of plasma antenna design
Three-dimensional discrete element method simulation of core disking
Wu, Shunchuan; Wu, Haoyan; Kemeny, John
2018-04-01
The phenomenon of core disking is commonly seen in deep drilling of highly stressed regions in the Earth's crust. Given its close relationship with the in situ stress state, the presence and features of core disking can be used to interpret the stresses when traditional in situ stress measuring techniques are not available. The core disking process was simulated in this paper using the three-dimensional discrete element method software PFC3D (particle flow code). In particular, PFC3D is used to examine the evolution of fracture initiation, propagation and coalescence associated with core disking under various stress states. In this paper, four unresolved problems concerning core disking are investigated with a series of numerical simulations. These simulations also provide some verification of existing results by other researchers: (1) Core disking occurs when the maximum principal stress is about 6.5 times the tensile strength. (2) For most stress situations, core disking occurs from the outer surface, except for the thrust faulting stress regime, where the fractures were found to initiate from the inner part. (3) The anisotropy of the two horizontal principal stresses has an effect on the core disking morphology. (4) The thickness of the core disks has a positive relationship with the radial stress and a negative relationship with the axial stress.
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
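The classical single-variate SRM kernel that such schemes build on can be sketched with FFT acceleration. The PSD shape and discretisation below are assumed for illustration; the paper's dimension-reduction constraints and multivariate machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed one-sided target power spectral density S(w) (low-pass shape)
N = 2048                       # number of frequency components
dw = 0.05                      # frequency step (rad/s)
k = np.arange(1, N)            # skip the zero-frequency component
w = k * dw
S = 1.0 / (1.0 + w**2)

# Spectral representation: X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k)
phi = rng.uniform(0.0, 2.0 * np.pi, w.size)
B = np.zeros(2 * N, dtype=complex)
B[1:N] = np.sqrt(2.0 * S * dw) * np.exp(1j * phi)

# Evaluate all 2N time samples in one inverse FFT: with dt = pi/(N dw),
# w_k t_n = pi k n / N, exactly the DFT phase factor of length 2N.
dt = np.pi / (N * dw)
t = np.arange(2 * N) * dt      # corresponding time grid
X = np.real(np.fft.ifft(B) * 2 * N)

target_var = np.sum(S * dw)    # discretised integral of the PSD
print(X.var(), target_var)
```

Because the cosine frequencies are exact harmonics of the simulated record length, the sample variance of X matches Σ S(ωₖ) Δω to machine precision.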
Masubuchi, Yuichi; Pandey, Ankita; Amamoto, Yoshifumi; Uneyama, Takashi
2017-11-01
Although it has not been frequently discussed, contributions of the orientational cross-correlation (OCC) between entangled polymers are not negligible in the relaxation modulus. In the present study, OCC contributions were investigated for 4- and 6-arm star-branched and H-branched polymers by means of multi-chain slip-link simulations. Owing to the molecular-level description of the simulation, the segment orientation was traced separately for each molecule as well as each subchain composing the molecules. Then, the OCC was calculated between different molecules and different subchains. The results revealed that the amount of OCC between different molecules is virtually identical to that of linear polymers regardless of the branching structure. The OCC between constituent subchains of the same molecule is significantly smaller than the OCC between different molecules, although its intensity and time-dependent behavior depend on the branching structure as well as the molecular weight. These results lend support to the single-chain models given that the OCC effects are embedded into the stress-optical coefficient, which is independent of the branching structure.
Creation and Delphi-method refinement of pediatric disaster triage simulations.
Cicero, Mark X; Brown, Linda; Overly, Frank; Yarzebski, Jorge; Meckler, Garth; Fuchs, Susan; Tomassoni, Anthony; Aghababian, Richard; Chung, Sarita; Garrett, Andrew; Fagbuyi, Daniel; Adelgais, Kathleen; Goldman, Ran; Parker, James; Auerbach, Marc; Riera, Antonio; Cone, David; Baum, Carl R
2014-01-01
There is a need for rigorously designed pediatric disaster triage (PDT) training simulations for paramedics. First, we sought to design three multiple patient incidents for EMS provider training simulations. Our second objective was to determine the appropriate interventions and triage level for each victim in each of the simulations and develop evaluation instruments for each simulation. The final objective was to ensure that each simulation and evaluation tool was free of bias toward any specific PDT strategy. We created mixed-methods disaster simulation scenarios with pediatric victims: a school shooting, a school bus crash, and a multiple-victim house fire. Standardized patients, high-fidelity manikins, and low-fidelity manikins were used to portray the victims. Each simulation had similar acuity of injuries and 10 victims. Examples include children with special health-care needs, gunshot wounds, and smoke inhalation. Checklist-based evaluation tools and behaviorally anchored global assessments of function were created for each simulation. Eight physicians and paramedics from areas with differing PDT strategies were recruited as Subject Matter Experts (SMEs) for a modified Delphi iterative critique of the simulations and evaluation tools. The modified Delphi was managed with an online survey tool. The SMEs provided an expected triage category for each patient. The target for modified Delphi consensus was ≥85%. Using Likert scales and free text, the SMEs assessed the validity of the simulations, including instances of bias toward a specific PDT strategy, clarity of learning objectives, and the correlation of the evaluation tools to the learning objectives and scenarios. After two rounds of the modified Delphi, consensus for expected triage level was >85% for 28 of 30 victims, with the remaining two achieving >85% consensus after three Delphi iterations. To achieve consensus, we amended 11 instances of bias toward a specific PDT strategy and corrected 10
Auxiliary-Field Quantum Monte Carlo Simulations of Strongly-Correlated Molecules and Solids
Energy Technology Data Exchange (ETDEWEB)
Chang, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Morales, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-11-10
We propose a method of implementing projected wave functions for second-quantized auxiliary-field quantum Monte Carlo (AFQMC) techniques. The method is based on expressing the two-body projector as one-body terms coupled to binary Ising fields. To benchmark the method, we choose to study the two-dimensional (2D) one-band Hubbard model with repulsive interactions using the constrained-path MC (CPMC). The CPMC uses a trial wave function to guide the random walks so that the so-called fermion sign problem can be eliminated. The trial wave function also serves as the importance function in Monte Carlo sampling. As such, the quality of the trial wave function has a direct impact on the efficiency and accuracy of the simulations.
Corso, Ruggero M; Cattano, Davide; Buccioli, Matteo; Carretta, Elisa; Maitan, Stefano
2016-01-01
Difficult airway (DA) occurs frequently (5-15%) in clinical practice. The El-Ganzouri Risk Index (EGRI) has a high sensitivity for predicting difficult intubation (DI). However, difficult mask ventilation (DMV) was never included in the EGRI. Since DMV is not part of the EGRI assessment, and obstructive sleep apnea (OSA) is also correlated with DMV, a study correlating the prediction of DA and OSA (identified by the STOP-Bang questionnaire, SB) seemed important. We accessed a database previously collected for a post-hoc simulation analysis of the airway difficulty predictivity of the EGRI, associated with normal and difficult airways, particularly DMV. As a secondary aim, we measured the correlation between the SB prediction system and DA, compared to the EGRI. A total of 2747 patients were included in the study. The proportion of patients with DI was 14.7% (95% CI 13.4-16) and the proportion of patients with DMV was 3.42% (95% CI 2.7-4.1). The incidence of DMV combined with DI was 2.3%. The optimal cutoff value of the EGRI was 3. The EGRI also showed a higher ability to predict DMV (AUC=0.76 (95% CI 0.71-0.81)). Adding the SB variables to the logistic model, the AUC increases with the inclusion of the "observed apnea" variable (0.83 vs. 0.81, p=0.03). The area under the ROC curve for the patients with DI and DMV was 0.77 (95% CI 0.72-0.83). This study confirms that the incidence of DA is not negligible and suggests the use of the EGRI as a simple bedside predictive score to improve patient safety. Copyright © 2014 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.
Simulation of ecological processes using response functions method
International Nuclear Information System (INIS)
Malkina-Pykh, I.G.; Pykh, Yu. A.
1998-01-01
The article describes further development and applications of the already well-known response functions method (MRF). The method is used as a basis for the development of mathematical models of a wide set of ecological processes. A model of radioactive contamination of ecosystems is chosen as an example. The mathematical model was elaborated for the description of ⁹⁰Sr dynamics in the elementary ecosystems of various geographical zones. The model includes blocks corresponding to the main units of any elementary ecosystem: lower atmosphere, soil, vegetation and surface water. Parameter values were estimated from a wide set of experimental data. A set of computer simulations was performed to demonstrate the model's suitability for ecological forecasting
Simulation of bubble motion under gravity by lattice Boltzmann method
International Nuclear Information System (INIS)
Takada, Naoki; Misawa, Masaki; Tomiyama, Akio; Hosokawa, Shigeo
2001-01-01
We describe numerical simulation results of bubble motion under gravity by the lattice Boltzmann method (LBM), which assumes that a fluid consists of mesoscopic fluid particles repeating collision and translation, and that a multiphase interface is reproduced in a self-organizing way by repulsive interaction between different kinds of particles. The purposes of this study are to examine the applicability of LBM to the numerical analysis of bubble motions, and to develop a three-dimensional version of the binary fluid model that introduces a free energy function. We included the buoyancy terms due to the density difference in the lattice Boltzmann equations, and simulated single- and two-bubble motions, setting flow conditions according to the Eötvös and Morton numbers. The two-dimensional results by LBM agree with those by the Volume of Fluid method based on the Navier-Stokes equations. The three-dimensional model possesses a surface tension satisfying Laplace's law, and reproduces the motion of a single bubble and the two-bubble interaction of approach and coalescence in a circular tube. These results show that the buoyancy terms and the 3D model proposed here are suitable, and that LBM is useful for the numerical analysis of bubble motion under gravity. (author)
Energy Technology Data Exchange (ETDEWEB)
Karp, Jerome M.; Erylimaz, Ertan; Cowburn, David, E-mail: cowburn@cowburnlab.org, E-mail: David.cowburn@einstein.yu.edu [Albert Einstein College of Medicine of Yeshiva University, Department of Biochemistry (United States)
2015-01-15
There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.
Pan, Bing; Wang, Bo
2017-10-01
Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented on personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm, with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images, rather than the whole volume images, are required, the DVC calculation can be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
International Nuclear Information System (INIS)
Yoneda, Kazuhiro; Tonouchi, Shigemasa
1992-01-01
In a survey of the distribution of natural radiation carried out to identify a useful measuring method, the γ-ray dose rates obtained by the survey meter method, the in-situ measuring method and the soil-sampling method were compared. Between the in-situ measuring method and the survey meter method, the correlation Y=0.986X+5.73, r=0.903, n=18, P<0.01 was obtained: a high correlation with a slope of nearly 1. Between the survey meter method and the soil-sampling method, the correlation Y=1.297X-10.30, r=0.966, n=20, P<0.01 was obtained, again a high correlation, but for the dose rate contribution, disparities of 36% in the U series, 6% in the Th series and 20% in K-40 were observed. For surveys of the distribution of natural radiation, a combination of the survey meter method with either the in-situ measuring method or the soil-sampling method is suitable. (author)
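Correlation lines of the form Y=aX+b with coefficient r, as quoted above, are ordinary least-squares fits. A sketch with invented paired readings (not the paper's data):

```python
import numpy as np

# Hypothetical paired gamma-ray dose-rate readings (nGy/h) from two methods
survey  = np.array([30., 37., 45., 43., 55., 50., 34., 41., 58., 39.])
in_situ = np.array([35., 42., 51., 48., 60., 55., 39., 46., 63., 44.])

slope, intercept = np.polyfit(survey, in_situ, 1)   # Y = slope*X + intercept
r = np.corrcoef(survey, in_situ)[0, 1]              # Pearson correlation
print(f"Y = {slope:.3f}X + {intercept:.2f}, r = {r:.3f}, n = {survey.size}")
```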
Energy Technology Data Exchange (ETDEWEB)
Buta, A [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; [Institute of Atomic Physics, Bucharest (Romania); Angelique, J C; Bizard, G; Brou, R; Cussol, D [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; Auger, G; Cabot, C [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Cassagnou, Y [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Astrophysique, de la Physique des Particules, de la Physique Nucleaire et de l'Instrumentation Associee; Crema, E [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; [Sao Paulo Univ., SP (Brazil). Inst. de Fisica; El Masri, Y [Louvain Univ., Louvain-la-Neuve (Belgium). Unite de Physique Nucleaire; others, and
1996-09-01
Measuring the in-plane flow parameter appears to be a promising method to gain information on the equation of state of nuclear matter. A new method, based on particle-particle azimuthal correlations is proposed. This method does not require the knowledge of the reaction plane. The collisions Zn+Ni and Ar+Al are presented as an example. (K.A.).
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
Directory of Open Access Journals (Sweden)
Jingjing He
2017-09-01
This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0-mode wave packet. The parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions.
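A toy version of the updating step: treat the baseline response surface as a known feature-vs-crack-length curve and update a prior over crack length with a few noisy feature measurements via Bayes' rule on a grid. The response surface g, the noise level and the measurements are all invented for illustration and are not the paper's model.

```python
import numpy as np

# Hypothetical baseline response surface fitted to FE simulation data:
# damage-sensitive feature as a function of crack length a (mm)
def g(a):
    return 0.004 * a**2 + 0.01 * a

a_grid = np.linspace(1.0, 12.0, 500)                  # candidate crack lengths
posterior = np.full(a_grid.size, 1.0 / a_grid.size)   # uniform prior
sigma = 0.02                                          # assumed feature noise

# Sequentially update with (invented) Lamb-wave feature measurements
for y in (0.165, 0.171, 0.158):
    likelihood = np.exp(-0.5 * ((y - g(a_grid)) / sigma) ** 2)
    posterior *= likelihood
    posterior /= posterior.sum()

a_map = a_grid[np.argmax(posterior)]   # posterior-mode crack length
print(f"estimated crack length: {a_map:.2f} mm")
```

Each measurement sharpens the posterior around the crack length whose predicted feature best matches the data, which is the essence of updating a simulation-trained model with sparse experimental measurements.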
A simulation training evaluation method for distribution network fault based on radar chart
Directory of Open Access Journals (Sweden)
Yuhang Xu
2018-01-01
In order to solve the problem of automatically evaluating dispatcher fault simulation training in distribution networks, a simulation training evaluation method based on radar charts is proposed. A fault handling information matrix is established to record the dispatcher's fault handling operation sequence and operation information. The four situations of the dispatcher's fault isolation operation are analyzed. A fault handling anti-misoperation rule set is established to describe prohibited dispatcher operations. Based on the idea of artificial-intelligence reasoning, the feasibility of dispatcher fault handling is described by a feasibility index. The relevant factors and evaluation methods are discussed from three aspects: the feasibility of the fault handling result, the correctness with respect to the anti-misoperation rules, and the conciseness of the operation process. Detailed calculation formulas are given. Combining the independence and the correlation between the three evaluation angles, a comprehensive evaluation method for distribution network fault simulation training based on radar charts is proposed. The method can comprehensively reflect the fault handling process of dispatchers and evaluate it from various angles, and has good practical value.
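One standard way to fuse several indices plotted on a radar chart into a single comprehensive score is the area of the polygon they span. A minimal sketch with three illustrative index values (the paper's exact formulas are not reproduced):

```python
import numpy as np

def radar_area(scores):
    """Area of the radar-chart polygon for indices on equally spaced axes."""
    r = np.asarray(scores, dtype=float)
    ang = 2.0 * np.pi / r.size
    # Sum of the triangles formed by consecutive pairs of axes
    return 0.5 * np.sin(ang) * np.sum(r * np.roll(r, -1))

# Illustrative indices: result feasibility, anti-misoperation correctness,
# operation conciseness (each normalised to [0, 1])
score = radar_area([0.9, 0.8, 0.7])
print(score)
```

The polygon area rewards balanced performance: because each triangle term multiplies two adjacent indices, one very low index drags down the comprehensive score more than the arithmetic mean would.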
The Moulded Site Data (MSD) wind correlation method: description and assessment
Energy Technology Data Exchange (ETDEWEB)
King, C.; Hurley, B.
2004-12-01
The long-term wind resource at a potential windfarm site may be estimated by correlating short-term on-site wind measurements with data from a regional meteorological station. A correlation method developed at Airtricity is described in sufficient detail to be reproduced. An assessment of its performance is also described; the results may serve as a guide to expected accuracy when using the method as part of an annual electricity production estimate for a proposed windfarm. (Author)
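In its simplest linear form, a measure-correlate-predict step like this fits a transfer relation on the concurrent period and applies it to the long-term reference record. The data below are synthetic and the linear site/reference relation is an assumption, not the Airtricity method itself:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic concurrent records: one year of hourly mean wind speeds at the
# reference met station and the candidate site (assumed: site = 1.2*ref + noise)
ref_short = np.abs(rng.normal(7.0, 2.5, 8760))
site_short = 1.2 * ref_short + rng.normal(0.0, 0.8, 8760)

# Fit the linear transfer relation on the concurrent period
slope, offset = np.polyfit(ref_short, site_short, 1)

# Apply it to a long-term reference record to estimate the
# long-term site mean wind speed
ref_long = np.abs(rng.normal(7.2, 2.5, 10 * 8760))
site_long_mean = slope * ref_long.mean() + offset
print(f"transfer: site = {slope:.2f}*ref + {offset:.2f}; "
      f"long-term site mean = {site_long_mean:.2f} m/s")
```

Real correlation methods typically also bin by wind direction sector and work with distributions rather than means, but the fit-then-extrapolate structure is the same.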
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin
2012-08-21
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh-number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.
Brünner, Yvonne F; Rodriguez-Raecke, Rea; Mutic, Smiljana; Benedict, Christian; Freiherr, Jessica
2016-10-01
This fMRI study intended to establish 3D-simulated mazes with olfactory and visual cues and examine the effect of intranasally applied insulin on memory performance in healthy subjects. The effect of insulin on hippocampus-dependent brain activation was explored using a double-blind, placebo-controlled design. Following intranasal administration of either insulin (40 IU) or placebo, 16 male subjects participated in two experimental MRI sessions with olfactory and visual mazes. Each maze included two separate runs. The first was an encoding maze, during which subjects learned eight olfactory or eight visual cues at different target locations. The second was a recall maze, during which subjects were asked to remember the target cues at spatial locations. For the eleven subjects included in the fMRI analysis, we were able to validate brain activation for odor perception and visuospatial tasks. However, we did not observe an enhancement of declarative memory performance in our behavioral data, or hippocampal activity in response to insulin application in the fMRI analysis. It is therefore possible that intranasal insulin application is sensitive to methodological variations, e.g. timing of task execution and dose of application. Findings from this study suggest that our method of 3D-simulated mazes is feasible for studying the neural correlates of olfactory and visual memory performance. Copyright © 2016 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Sawada, Akira; Yoda, Kiyoshi; Numano, Masumi; Futami, Yasuyuki; Yamashita, Haruo; Murayama, Shigeyuki; Tsugami, Hironobu
2005-01-01
A new technique based on normalized binary image correlation between two edge images has been proposed for positioning proton-beam radiotherapy patients. A Canny edge detector was used to extract two edge images from a reference x-ray image and a test x-ray image of a patient before positioning. While translating and rotating the edged test image, the absolute value of the normalized binary image correlation between the two edge images is iteratively maximized. Each time before rotation, dilation is applied to the edged test image to avoid a steep reduction of the image correlation. To evaluate robustness of the proposed method, a simulation has been carried out using 240 simulated edged head front-view images extracted from a reference image by varying parameters of the Canny algorithm with a given range of rotation angles and translation amounts in x and y directions. It was shown that resulting registration errors have an accuracy of one pixel in x and y directions and zero degrees in rotation, even when the number of edge pixels significantly differs between the edged reference image and the edged simulation image. Subsequently, positioning experiments using several sets of head, lung, and hip data have been performed. We have observed that the differences of translation and rotation between manual positioning and the proposed method were within one pixel in translation and one degree in rotation. From the results of the validation study, it can be concluded that a significant reduction in workload for the physicians and technicians can be achieved with this method
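The positioning core — maximizing a normalized correlation between binary edge images over candidate translations — can be sketched with synthetic data: random "edge" pixels and a known shift. The Canny extraction, dilation and rotation search of the actual method are omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic binary "edge" images: a reference and a test image that is
# the reference translated by a known (row, col) shift
ref = (rng.random((64, 64)) < 0.08).astype(float)
true_shift = (3, -2)
test = np.roll(ref, true_shift, axis=(0, 1))

def norm_corr(a, b):
    """Normalized correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# Exhaustive search over translations, keeping the best correlation
best_shift, best_c = None, -1.0
for dy in range(-5, 6):
    for dx in range(-5, 6):
        c = norm_corr(ref, np.roll(test, (-dy, -dx), axis=(0, 1)))
        if c > best_c:
            best_shift, best_c = (dy, dx), c

print(best_shift, round(best_c, 3))  # the known shift is recovered
```

Because edge images are sparse and near-binary, the correlation surface is sharply peaked at the true offset, which is what gives the method its pixel-level registration accuracy.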
Experimental study on reactivity measurement in thermal reactor by polarity correlation method
International Nuclear Information System (INIS)
Yasuda, Hideshi
1977-11-01
Experimental study on the polarity correlation method for measuring the reactivity of a thermal reactor, especially one possessing a long prompt neutron lifetime such as a graphite- or heavy-water-moderated core, is reported. The techniques of reactor kinetics experiments are briefly reviewed and classified into two groups, one characterized by artificial disturbance of the reactor and the other by the natural fluctuation inherent in a reactor. The fluctuation phenomena of the neutron count rate are explained using F. de Hoffmann's stochastic method, and correlation functions for the neutron count rate fluctuation are shown. The experimental results of the polarity correlation method applied to β/l measurements in the graphite-moderated SHE core and the light water-moderated JMTRC and JRR-4 cores, and also to the measurement of the SHE shutdown reactivity margin, are presented. The measured values were in good agreement with those obtained by a pulsed neutron method over the reactivity range from critical to -12 dollars. Conditional polarity correlation experiments in SHE at -20 cents and -100 cents are demonstrated; the prompt neutron decay constants agreed with those obtained by the polarity correlation experiments. The results of experiments measuring the large negative reactivity of -52 dollars in SHE by pulsed neutron, rod drop and source multiplication methods are given. It is also concluded that the polarity and conditional polarity correlation methods are fully applicable to noise analysis of a low-power thermal reactor with a long prompt neutron lifetime. (Nakai, Y.)
Wimmers, Paul F; Fung, Cha-Chi
2008-06-01
The finding of case or content specificity in medical problem solving moved the focus of research away from generalisable skills towards the importance of content knowledge. However, controversy about the content dependency of clinical performance and the generalisability of skills remains. This study aimed to explore the relative impact of both perspectives (case specificity and generalisable skills) on different components (history taking, physical examination, communication) of clinical performance within and across cases. Data from a clinical performance examination (CPX) taken by 350 Year 3 students were used in a correlated traits-correlated methods (CTCM) approach using confirmatory factor analysis, whereby 'traits' refers to generalisable skills and 'methods' to individual cases. The baseline CTCM model was analysed and compared with four nested models using structural equation modelling techniques. The CPX consisted of three skills components and five cases. Comparison of the four different models with the least-restricted baseline CTCM model revealed that a model with uncorrelated generalisable-skills factors and correlated case-specific knowledge factors represented the data best. The generalisable processes found in history taking, physical examination and communication were responsible for half of the explained variance, in comparison with the variance related to case specificity. In conclusion, purely knowledge-based and purely skill-based perspectives on clinical performance both seem too one-dimensional, and the new evidence supports the idea that a substantial amount of variance is attributable to both aspects of performance. It can be concluded that generalisable skills and specialised knowledge go hand in hand: both are essential aspects of clinical performance.
A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation
DEFF Research Database (Denmark)
Breton, Simon-Philippe; Sumner, J.; Sørensen, Jens Nørkær
2017-01-01
Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided, together with a summary of the experimental research data available for validation of LES codes within the context of single and multiple …
A general method dealing with correlations in uncertainty propagation in fault trees
International Nuclear Information System (INIS)
Qin Zhang
1989-01-01
This paper deals with the correlations among the failure probabilities (frequencies) not only of identical basic events but also of other basic events in a fault tree. It presents a general and simple method for including these correlations in uncertainty propagation. Two examples illustrate the method and show that neglecting these correlations results in a large underestimation of the top-event failure probability (frequency): one is the failure of the primary pump in a chemical reactor cooling system; the other is an accident involving a road transport truck carrying toxic waste. (author)
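The underestimation described above is easy to reproduce with a toy Monte Carlo model. The sketch below uses an invented lognormal uncertainty on the failure probability of two identical components behind an AND gate; it is not the paper's method, only an illustration of what ignoring the correlation costs:

```python
# Monte Carlo propagation of epistemic uncertainty through a 2-component
# AND gate: fully correlated sampling (identical components share one
# probability) versus the incorrect independent treatment.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mu, sigma = np.log(1e-3), 0.8    # invented lognormal uncertainty parameters

# Correct: identical components share one sampled failure probability.
p = rng.lognormal(mu, sigma, n)
top_correlated = np.mean(p * p)

# Incorrect: the probability is sampled independently for each component.
p1 = rng.lognormal(mu, sigma, n)
p2 = rng.lognormal(mu, sigma, n)
top_independent = np.mean(p1 * p2)

ratio = top_correlated / top_independent
# For an AND gate the independent treatment underestimates the top-event
# probability by roughly exp(sigma**2), ~1.9 with these parameters.
```

The bias follows directly from E[p²] = exp(σ²)·E[p]² for a lognormal p, which is why the paper's examples show large underestimation when correlations are dropped.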
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
International Nuclear Information System (INIS)
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-01-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers the site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation, which adaptively falls back to exact discrete stochastic simulation for individual reactions whenever necessary. The HMKMC method is shown to be accurate and highly efficient.
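The accelerated stochastic step can be illustrated in isolation. Below is a hedged sketch of plain tau-leaping for a single first-order decay reaction (rate, step size and run counts are assumptions, and the adaptive fallback to exact simulation is omitted), checked against the analytical mean:

```python
# Tau-leaping for the decay reaction A -> 0 with rate k: in each leap of
# length tau, the number of firings is drawn from a Poisson distribution
# with mean k*x*tau instead of simulating every event exactly.
import numpy as np

rng = np.random.default_rng(1)

def tau_leap_decay(x0, k, t_end, tau, n_runs, rng):
    steps = int(round(t_end / tau))
    x = np.full(n_runs, x0, dtype=np.int64)
    for _ in range(steps):
        fired = rng.poisson(k * x * tau)   # events in this leap
        x = np.maximum(x - fired, 0)       # population cannot go negative
    return x

x0, k, t_end, tau = 1000, 0.5, 2.0, 0.01
x = tau_leap_decay(x0, k, t_end, tau, n_runs=2000, rng=rng)
mean_exact = x0 * np.exp(-k * t_end)       # analytical mean, ~367.9
mean_sim = x.mean()
```

In the hybrid scheme of the paper, a reaction whose propensity changes too much within one leap would instead be handed to an exact discrete stochastic step.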
A calculation method for RF couplers design based on numerical simulation by microwave studio
International Nuclear Information System (INIS)
Wang Rong; Pei Yuanji; Jin Kai
2006-01-01
A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)
Kishor Kumar, V. V.; Kuzhiveli, B. T.
2017-12-01
The performance of a Stirling cryocooler depends on the thermal and hydrodynamic properties of the regenerator in the system. CFD modelling is the best technique to design and predict the performance of a Stirling cooler. The accuracy of the simulation results depends on the hydrodynamic and thermal transport parameters used as closure relations for the volume-averaged governing equations. A methodology has been developed to quantify the viscous and inertial resistance terms required for modelling the regenerator as a porous medium in Fluent. Using these terms, the steady and steady-periodic flow of helium through the regenerator was modelled and simulated. Comparison of the predicted and experimental pressure drops reveals the good predictive power of the correlation-based method. For oscillatory flow, the simulation could predict the exit pressure amplitude and the phase difference accurately. The method was therefore extended to obtain the Darcy permeability and the Forchheimer inertial coefficient of other wire mesh matrices applicable to Stirling coolers. Simulation of the regenerator using these parameters will help to better understand the thermal and hydrodynamic interactions between the working fluid and the regenerator material, and pave the way to contriving high-performance, ultra-compact free displacers for miniature Stirling cryocoolers in the future.
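The porous-medium closure referred to above is commonly written as a Darcy-Forchheimer law, dp/dx = (μ/K)·u + ρ·C_F/√K·u², whose two terms are exactly the viscous and inertial resistances quantified in the paper. A back-of-envelope sketch with placeholder coefficients (K and C_F below are hypothetical, not values from the paper) is:

```python
# Darcy-Forchheimer pressure gradient for helium flow through a porous
# regenerator matrix: viscous term linear in u, inertial term quadratic.
import math

mu = 1.99e-5     # helium dynamic viscosity at ~300 K [Pa s]
rho = 0.162      # helium density [kg/m^3] (assumed state)
K = 1.0e-9       # Darcy permeability [m^2] (hypothetical)
C_F = 0.1        # Forchheimer inertial coefficient [-] (hypothetical)

def pressure_gradient(u):
    """Pressure drop per unit length [Pa/m] for superficial velocity u [m/s]."""
    viscous = (mu / K) * u
    inertial = rho * C_F / math.sqrt(K) * u * u
    return viscous + inertial

dp_low = pressure_gradient(0.1)    # viscous term dominates at low velocity
dp_high = pressure_gradient(10.0)  # inertial term becomes significant
```

Fitting K and C_F to measured pressure drops at several velocities is what turns such a correlation into the Fluent porous-medium inputs described in the abstract.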
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis because the sampling rate need not be reset. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
Adaptive and dynamic meshing methods for numerical simulations
Acikgoz, Nazmiye
Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different; meshes are usually generated either with a suitable software package or by solving a PDE, and in both cases engineering intuition plays a significant role in deciding where clustering should take place. In addition, for unsteady problems the gradients vary at each time step, which requires frequent remeshing during the simulation. The first part of this work addresses mesh quality improvement through an ad hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time-marching procedures are used, degenerate elements are easily formed in the grid, so that frequent remeshing is required. To deal with this problem, in the second part of this work we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapse. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method.
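For context, the classical edge-spring analogy that the new method is compared against can be reduced to a toy example. With uniform unit stiffness on a structured grid it degenerates to Laplacian smoothing, solved here by fixed-point iteration rather than a preconditioned conjugate gradient, and without the virtual springs or ball projections:

```python
# Edge-spring mesh deformation: boundary vertices carry an imposed
# displacement; interior vertices relax to spring equilibrium, i.e. each
# one moves to the average of its four neighbours.
import numpy as np

m = 9                               # 9x9 structured grid on the unit square
xs, ys = np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, m))
pos = np.stack([xs, ys], axis=-1)   # (m, m, 2) vertex coordinates

pos[:, -1, 0] += 0.2                # imposed motion: shift right edge in x

for _ in range(2000):               # relax interior vertices to equilibrium
    avg = 0.25 * (pos[:-2, 1:-1] + pos[2:, 1:-1] +
                  pos[1:-1, :-2] + pos[1:-1, 2:])
    pos[1:-1, 1:-1] = avg

# Interior x-displacements interpolate smoothly between 0 and 0.2, so the
# moderate boundary motion is absorbed without inverting any cell.
dx = pos[..., 0] - xs
```

The thesis's contribution starts where this toy fails: for large displacements the plain edge springs let elements collapse, which is what the added virtual springs are designed to oppose.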
Status of the Correlation Process of the V-HAB Simulation with Ground Tests and ISS Telemetry Data
Ploetner, P.; Roth, C.; Zhukov, A.; Czupalla, M.; Anderson, M.; Ewert, M.
2013-01-01
The Virtual Habitat (V-HAB) is a dynamic Life Support System (LSS) simulation created for the investigation of future human spaceflight missions. It provides the capability to optimize an LSS during early design phases. The focal point of the paper is the correlation and validation of V-HAB against ground test and flight data. In order to utilize V-HAB to design an Environmental Control and Life Support System (ECLSS), it is important to know the accuracy, strengths and weaknesses of the simulations. Therefore, simulations of real systems are essential. The modeling of the International Space Station (ISS) ECLSS, in terms of single technologies as well as an integrated system, and its correlation against ground and flight test data are described. The results of the simulations validate the approach taken by V-HAB.
Applying Simulation Method in Formulation of Gluten-Free Cookies
Directory of Open Access Journals (Sweden)
Nikitina Marina
2017-01-01
At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Such products include gluten-free confectionery intended for people with celiac disease. Gluten-free products are in demand among consumers; their assortment needs to be expanded and their quality indicators improved. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating recipes for gluten-free confectionery products with a functional orientation, in order to optimize their chemical composition. The resulting products will diversify the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet, and supplement it with the necessary nutrients.
The Multiscale Material Point Method for Simulating Transient Responses
Chen, Zhen; Su, Yu-Chen; Zhang, Hetao; Jiang, Shan; Sewell, Thomas
2015-06-01
To effectively simulate multiscale transient responses such as impact and penetration without invoking master/slave treatment, the multiscale material point method (Multi-MPM) is being developed in which molecular dynamics at nanoscale and dissipative particle dynamics at mesoscale might be concurrently handled within the framework of the original MPM at microscale (continuum level). The proposed numerical scheme for concurrently linking different scales is described in this paper with simple examples for demonstration. It is shown from the preliminary study that the mapping and re-mapping procedure used in the original MPM could coarse-grain the information at fine scale and that the proposed interfacial scheme could provide a smooth link between different scales. Since the original MPM is an extension from computational fluid dynamics to solid dynamics, the proposed Multi-MPM might also become robust for dealing with multiphase interactions involving failure evolution. This work is supported in part by DTRA and NSFC.
Numerical Simulation of Antennas with Improved Integral Equation Method
International Nuclear Information System (INIS)
Ma Ji; Fang Guang-You; Lu Wei
2015-01-01
Simulating antennas around a conducting object is a challenging task in computational electromagnetics, which is concerned with the behaviour of electromagnetic fields. To analyze this model efficiently, an improved integral equation-fast Fourier transform (IE-FFT) algorithm is presented in this paper. The proposed scheme employs two Cartesian grids of different sizes and locations to enclose the antenna and the other object, respectively. On the one hand, the IE-FFT technique is used to store the matrix in sparse form and to accelerate the matrix-vector multiplication for each sub-domain independently. On the other hand, the mutual interaction between sub-domains is taken as an additional exciting voltage in each matrix equation. By updating the integral equations several times, the whole electromagnetic system converges to a stable state. Finally, the validity of the presented method is verified through the analysis of typical antennas in the presence of a conducting object. (paper)
Optimized Design of Spacer in Electrodialyzer Using CFD Simulation Method
Jia, Yuxiang; Yan, Chunsheng; Chen, Lijun; Hu, Yangdong
2018-06-01
In this study, the effects of the length-width ratio and the diversion trench of the spacer on fluid flow behavior in an electrodialyzer have been investigated through CFD simulation. The relevant information, including the pressure drop, velocity vector distribution and shear stress distribution, demonstrates the importance of an optimized spacer design in an electrodialysis process. The results show that the width of the diversion trench has a greater effect on the fluid flow than its length. Increasing the diversion trench width strengthens the fluid flow but also increases the pressure drop. Secondly, the dead zone of the fluid flow decreases with increasing length-width ratio of the spacer, but the pressure drop increases with it. The length-width ratio of the spacer should therefore be moderate.
Study of Flapping Flight Using Discrete Vortex Method Based Simulations
Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.
2013-12-01
In recent times, research in the area of flapping flight has attracted renewed interest, with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For a sustained, high-endurance flight with larger payload-carrying capacity, a simple and efficient flapping kinematics needs to be identified. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that a simple flapping kinematics with a down-stroke period (tD) shorter than the up-stroke period (tU) produces a sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
Simulation of galvanic corrosion using boundary element method
International Nuclear Information System (INIS)
Zaifol Samsu; Muhamad Daud; Siti Radiah Mohd Kamaruddin; Nur Ubaidah Saidin; Abdul Aziz Mohamed; Mohd Saari Ripin; Rusni Rejab; Mohd Shariff Sattar
2011-01-01
The boundary element method (BEM) is a numerical technique used for modeling infinite domains, as is the case in galvanic corrosion analysis. The use of the boundary element analysis system (BEASY) has allowed cathodic protection (CP) interference to be assessed in terms of the normal current density, which is directly proportional to the corrosion rate. This paper presents an analysis of the galvanic corrosion between aluminium and carbon steel in natural sea water. The experimental results were validated against computer simulations with the BEASY program. It can be concluded that the BEASY software is a very helpful tool for future planning before installing any structure, as it gives the possible CP interference on any nearby unprotected metallic structure. (Author)
An experiment teaching method based on the Optisystem simulation platform
Zhu, Jihua; Xiao, Xuanlu; Luo, Yuan
2017-08-01
The experiment teaching of optical communication systems is difficult to realize because of expensive equipment. Optisystem is optical communication system design software that can provide such a simulation platform. Based on the characteristics of OptiSystem, an approach to experiment teaching is put forward in this paper. It comprises three gradual levels: the basics, the deeper looks and the practices. First, the basics give a brief overview of the technology; then the deeper looks include demos and example analyses; finally, the practices proceed through team seminars and comments. A variety of teaching forms are implemented in class. Practice shows that this method can not only compensate for the missing laboratory but also stimulate students' learning interest and improve their practical abilities, cooperation abilities and creative spirit. On the whole, it greatly improves the teaching effect.
A Finite Element Method for Simulation of Compressible Cavitating Flows
Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad
2016-11-01
This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including the compressibility of the vapor phase and the interface physics driven by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.
Precision of a FDTD method to simulate cold magnetized plasmas
International Nuclear Information System (INIS)
Pavlenko, I.V.; Melnyk, D.A.; Prokaieva, A.O.; Girka, I.O.
2014-01-01
The finite difference time domain (FDTD) method is applied to describe the propagation of transverse electromagnetic waves through magnetized plasmas. The numerical dispersion relation is obtained in the cold plasma approximation. The accuracy of the numerical dispersion is calculated as a function of the frequency of the launched wave and the time step of the numerical grid. It is shown that the numerical method does not reproduce the analytical results near the plasma resonances for any chosen value of the time step if no dissipation mechanism is present in the system. This means that the FDTD method cannot be applied straightforwardly to simulate problems where the plasma resonances play a key role (for example, mode conversion problems). The accuracy of the numerical scheme can, however, be improved by introducing some artificial damping of the plasma currents. Although part of the wave power is then lost in the system, the numerical scheme describes the wave processes in agreement with analytical predictions.
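The effect of artificially damping the plasma currents can be demonstrated with a minimal 1D Yee-type scheme. The sketch below is unmagnetized and uses normalized units (c = ε0 = 1) with assumed grid and plasma parameters, so it only illustrates the damping idea, not the paper's full magnetized formulation:

```python
# 1D FDTD for a cold plasma slab: E and B leapfrog on a staggered grid,
# the plasma current obeys dJ/dt = wp^2 * E - nu * J, where nu is the
# artificial collision (damping) frequency discussed in the abstract.
import numpy as np

def run(nu, nx=400, nt=1200, dx=1.0):
    """Return the time-averaged field energy over the last 200 steps."""
    dt = 0.5 * dx                        # CFL-stable time step (c = 1)
    wp2 = np.zeros(nx)
    wp2[nx // 2:] = 0.09                 # plasma (wp = 0.3) on the right half
    E = np.zeros(nx); B = np.zeros(nx); J = np.zeros(nx)
    tail = 0.0
    for n in range(nt):
        B[:-1] -= dt * (E[1:] - E[:-1]) / dx            # Faraday's law
        J += dt * (wp2 * E - nu * J)                    # damped plasma current
        E[1:] -= dt * ((B[1:] - B[:-1]) / dx + J[1:])   # Ampere's law
        # Hard source: Gaussian-modulated sine above the plasma cutoff.
        E[0] = np.sin(0.5 * n * dt) * np.exp(-((n * dt - 40.0) / 12.0) ** 2)
        if n >= nt - 200:
            tail += np.sum(E * E + B * B)
    return tail / 200.0

tail_undamped = run(0.0)
tail_damped = run(0.05)   # nu > 0: wave power is absorbed in the slab
```

With nu = 0 the launched pulse keeps bouncing through the system, while any nu > 0 steadily drains energy from the plasma currents; this is the trade-off the abstract describes between lost wave power and a well-behaved scheme near resonances.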
Energy Technology Data Exchange (ETDEWEB)
Zhang, Wenguang, E-mail: zhwg@sjtu.edu.cn; Ma, Yakun; Li, Zhengwei [State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240 (China)
2016-01-15
Purpose: The application of neural probes in the clinic has been challenged by the probes' short lifetime when implanted into brain tissue. The primary goal is to develop an evaluation system for testing the brain tissue injury induced by neural probe insertion, using a microscope-based digital image correlation method. Methods: A brain tissue phantom made of silicone rubber with a speckle pattern on its surface was fabricated. To obtain the optimal speckle pattern, the mean intensity gradient parameter was used for quality assessment. The designed testing system consists of three modules: (a) a load module for simulating the neural electrode implantation process; (b) a data acquisition module to capture micrographs of the speckle pattern and to record reactive forces during insertion of the probe; (c) a postprocessing module for extracting tissue deformation information from the captured speckle patterns. On the basis of this evaluation system, the effects of probe wedge angle, insertion speed and probe streamline on insertion-induced tissue injury were investigated. Results: The optimal-quality speckle pattern can be attained with the following fabrication parameters: spin coating rate 1000 r/min; silicone rubber component A : silicone rubber component B : softener : graphite = 5 ml : 5 ml : 2 ml : 0.6 g. The probe wedge angle has a significant effect on tissue injury. Compared to wedge angles of 40° and 20°, the maximum principal strain for a 60° wedge angle was increased by 40.3% and 87.5%, respectively; compared with a relatively high speed (500 μm/s), the maximum principal strain within the tissue induced by a slow insertion speed (100 μm/s) was increased by 14.3%; and the insertion force required by a probe with a convex streamline was smaller than that of the traditional probe. Based on the experimental results, a novel neural probe with a convex streamline and a rounded tip covered by a biodegradable silk protein coating was proposed, which reduces both insertion- and micromotion-induced tissue injury.
Matrix elements and few-body calculations within the unitary correlation operator method
International Nuclear Information System (INIS)
Roth, R.; Hergert, H.; Papakonstantinou, P.
2005-01-01
We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by a unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges. (orig.)
Matrix elements and few-body calculations within the unitary correlation operator method
International Nuclear Information System (INIS)
Roth, R.; Hergert, H.; Papakonstantinou, P.; Neff, T.; Feldmeier, H.
2005-01-01
We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by a unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges
Pasipanodya, Jotam; Gumbo, Tawanda
2011-01-01
Antimicrobial pharmacokinetic-pharmacodynamic (PK/PD) science and clinical trial simulations have not been adequately applied to the design of doses and dose schedules of antituberculosis regimens because many researchers are skeptical about their clinical applicability. We compared findings of preclinical PK/PD studies of current first-line antituberculosis drugs to findings from several clinical publications that included microbiologic outcome and pharmacokinetic data or had a dose-scheduling design. Without exception, the antimicrobial PK/PD parameters linked to optimal effect were similar in preclinical models and in tuberculosis patients. Thus, exposure-effect relationships derived in the preclinical models can be used in the design of optimal antituberculosis doses, by incorporating population pharmacokinetics of the drugs and MIC distributions in Monte Carlo simulations. When this has been performed, doses and dose schedules of rifampin, isoniazid, pyrazinamide, and moxifloxacin with the potential to shorten antituberculosis therapy have been identified. In addition, different susceptibility breakpoints than those in current use have been identified. These steps outline a more rational approach than that of current methods for designing regimens and predicting outcome so that both new and older antituberculosis agents can shorten therapy duration.
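The Monte Carlo step described above can be sketched with invented numbers (the distributions and the target below are illustrative placeholders, not clinical values): sample population pharmacokinetic exposure and an MIC distribution, then compute the probability of attaining a PK/PD target.

```python
# Probability of target attainment (PTA) by Monte Carlo: draw AUC from a
# lognormal population model and MIC from a two-fold dilution
# distribution, then count how often AUC/MIC meets the target.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Hypothetical population AUC for a given dose (between-patient spread).
auc = rng.lognormal(mean=np.log(400.0), sigma=0.35, size=n)
# Hypothetical MIC distribution (mg/L) over two-fold dilutions.
mics = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
probs = np.array([0.10, 0.30, 0.35, 0.20, 0.05])
mic = rng.choice(mics, size=n, p=probs)

target = 100.0                        # assumed AUC/MIC efficacy target
pta = np.mean(auc / mic >= target)    # probability of target attainment
```

Repeating this calculation across candidate doses, and reading off where the PTA drops, is how dose schedules and susceptibility breakpoints are derived from preclinical exposure-effect relationships.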
International Nuclear Information System (INIS)
Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E
2014-01-01
In the current work, the relevance of developing nondestructive test methods for pipeline leak detection is considered. It is shown that acoustic emission testing is currently one of the most widespread leak detection methods. The main disadvantage of this method is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application to leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of a time-frequency correlation function is proposed. This function represents the correlation between the spectral components of the analyzed signals. In this work, the technique for calculating the time-frequency correlation function is described. Experimental data are presented that demonstrate the obvious advantage of the time-frequency correlation function over the simple correlation function: it is more effective at suppressing noise components outside the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying the time-frequency correlation function to leak detection problems is the great number of calculations, which may further increase pipeline inspection time. However, this drawback can be partially mitigated by the development and implementation of efficient (including parallel) algorithms for computing the fast Fourier transform on the central processing unit and the graphics processing unit.
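A conceptual sketch of the idea follows: correlating the spectrogram rows of two sensor signals confines the correlation to the band carrying the useful signal, so out-of-band noise contributes nothing to the peak. The formulation and every parameter below are assumptions for illustration, not the authors' exact function:

```python
# Delay estimation from a time-frequency correlation: compare spectrogram
# magnitudes of two sensors over frame lags, restricted to a chosen band.
import numpy as np

def spectrogram_mag(x, win=128, hop=32):
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T   # (freq, time)

def tf_correlation_delay(x, y, band, max_lag, win=128, hop=32):
    """Delay (in samples) from correlating spectrogram rows inside band."""
    Sx = spectrogram_mag(x, win, hop)[band[0]:band[1]]
    Sy = spectrogram_mag(y, win, hop)[band[0]:band[1]]
    Sx = Sx - Sx.mean(axis=1, keepdims=True)
    Sy = Sy - Sy.mean(axis=1, keepdims=True)
    scores = [np.sum(Sx[:, :Sx.shape[1] - lag] * Sy[:, lag:])
              for lag in range(max_lag + 1)]
    return int(np.argmax(scores)) * hop

# Synthetic "leak": amplitude-modulated noise reaching sensor y 128
# samples (4 hop lengths) later than sensor x, plus independent noise.
rng = np.random.default_rng(4)
n, delay = 20000, 128
envelope = np.repeat(rng.uniform(0.2, 1.0, n // 200), 200)
leak = envelope * rng.normal(size=n)
x = leak + 0.5 * rng.normal(size=n)
y = np.roll(leak, delay) + 0.5 * rng.normal(size=n)
est = tf_correlation_delay(x, y, band=(5, 40), max_lag=10)
```

In a leak-location setting the recovered delay, together with the propagation speed and sensor spacing, gives the leak position; the FFT-heavy inner loop is the computational burden the abstract proposes to parallelize.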
International Nuclear Information System (INIS)
Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing
2017-01-01
Highlights: • Four optical models for parabolic trough solar collectors were compared in detail. • Characteristics of the Monte Carlo Method and the Finite Volume Method were discussed. • A novel method combining the advantages of the different models was presented. • The method is suited to the optical analysis of collectors with different geometries. • A new kind of cavity receiver was simulated using the novel method. -- Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. The concentrated solar radiation is the only energy source for a PTC, thus its optical performance significantly affects the collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented by combining the advantages of these models, suited to carrying out a large number of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined, so the method is useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For photon distribution initialization, FVM saved running time and computational effort, whereas it needed a suitable grid configuration. MCM only required a total number of rays for simulation, whereas it incurred a higher computing cost and its results fluctuated over multiple runs. In the novel coupled method, the grid configuration for FVM was optimized according to the “true values” from MCM of
Libraries for spectrum identification: Method of normalized coordinates versus linear correlation
International Nuclear Information System (INIS)
Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.
2008-01-01
In this work an easy solution based directly on linear algebra is proposed in order to obtain the relation between a spectrum and a spectral base. This solution is based on the algebraic determination of the coordinates of an unknown spectrum with respect to a spectral library base. The identification capacity of this algebraic method is compared with that of the linear correlation method using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows one to detect quantitatively the existence of a mixture of several substances in a sample and, consequently, to bear impurities in mind so as to improve the identification
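The algebraic idea in this abstract lends itself to a compact sketch: if the library spectra are the columns of a matrix, the coordinates of an unknown spectrum with respect to that base follow from ordinary least squares. The spectra below are synthetic Gaussian bands, not the polymer data of the paper.

```python
import numpy as np

# Hypothetical library of 3 reference spectra (columns of `lib`), modeled as
# Gaussian bands; the unknown spectrum is a mixture of substances 1 and 3.
x = np.linspace(0.0, 1.0, 200)
lib = np.stack([np.exp(-((x - c) ** 2) / 0.005) for c in (0.25, 0.5, 0.75)], axis=1)
unknown = 0.7 * lib[:, 0] + 0.3 * lib[:, 2]

# Coordinates of the unknown spectrum with respect to the library base,
# obtained by plain linear algebra (least squares).
coords, residual, *_ = np.linalg.lstsq(lib, unknown, rcond=None)
print(np.round(coords, 2))   # ≈ [0.7 0. 0.3] — the mixture is detected quantitatively
```

A large least-squares residual would flag a spectrum outside the span of the library, which is how impurities can be kept in view rather than merely degrading a correlation score.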
Xu, Lianyun; Hou, Zhende; Qin, Yuwen
2002-05-01
Because some composite materials, thin-film materials, and biomaterials are very thin, and some of them are flexible, the classical methods of measuring their Young's moduli by mounting extensometers on specimens are not applicable. A bi-image method based on image correlation for measuring Young's moduli is developed in this paper. The measurement precision achieved is one order of magnitude better than that of general digital image correlation, also called the single-image method. In this way, the Young's modulus of an SS301 stainless steel thin tape with a thickness of 0.067 mm is measured, and the moduli of polyester fiber films, a kind of flexible sheet with a thickness of 0.25 mm, are also measured.
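As a loose illustration of image-correlation displacement measurement (a generic sketch, not the bi-image algorithm of the paper), the shift of a speckle intensity profile between a reference and a deformed state can be located at the peak of the normalized cross-correlation; the relative elongation then feeds the stress/strain ratio that gives the modulus.

```python
import numpy as np

# Synthetic 1-D speckle profile; the "deformed" state is a rigid 3-pixel shift.
rng = np.random.default_rng(1)
ref = rng.standard_normal(400)
deformed = np.roll(ref, 3)

# Normalized cross-correlation over a window of candidate lags;
# the peak of the correlation locates the displacement.
lags = np.arange(-10, 11)
corr = [np.corrcoef(ref, np.roll(deformed, -k))[0, 1] for k in lags]
displacement = int(lags[int(np.argmax(corr))])
print(displacement)   # 3 pixels
```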
A cross-correlation method to search for gravitational wave bursts with AURIGA and Virgo
Bignotto, M.; Bonaldi, M.; Camarda, M.; Cerdonio, M.; Conti, L.; Drago, M.; Falferi, P.; Liguori, N.; Longo, S.; Mezzena, R.; Mion, A.; Ortolan, A.; Prodi, G. A.; Re, V.; Salemi, F.; Taffarello, L.; Vedovato, G.; Vinante, A.; Vitale, S.; Zendri, J. -P.; Acernese, F.; Alshourbagy, Mohamed; Amico, Paolo; Antonucci, Federica; Aoudia, S.; Astone, P.; Avino, Saverio; Baggio, L.; Ballardin, G.; Barone, F.; Barsotti, L.; Barsuglia, M.; Bauer, Th. S.; Bigotta, Stefano; Birindelli, Simona; Boccara, Albert-Claude; Bondu, F.; Bosi, Leone; Braccini, Stefano; Bradaschia, C.; Brillet, A.; Brisson, V.; Buskulic, D.; Cagnoli, G.; Calloni, E.; Campagna, Enrico; Carbognani, F.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cesarini, E.; Chassande-Mottin, E.; Clapson, A-C; Cleva, F.; Coccia, E.; Corda, C.; Corsi, A.; Cottone, F.; Coulon, J. -P.; Cuoco, E.; D'Antonio, S.; Dari, A.; Dattilo, V.; Davier, M.; Rosa, R.; Del Prete, M.; Di Fiore, L.; Di Lieto, A.; Emilio, M. Di Paolo; Di Virgilio, A.; Evans, M.; Fafone, V.; Ferrante, I.; Fidecaro, F.; Fiori, I.; Flaminio, R.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Gammaitoni, L.; Garufi, F.; Genin, E.; Gennai, A.; Giazotto, A.; Giordano, L.; Granata, V.; Greverie, C.; Grosjean, D.; Guidi, G.; Hamdani, S.U.; Hebri, S.; Heitmann, H.; Hello, P.; Huet, D.; Kreckelbergh, S.; La Penna, P.; Laval, M.; Leroy, N.; Letendre, N.; Lopez, B.; Lorenzini, M.; Loriette, V.; Losurdo, G.; Mackowski, J. -M.; Majorana, E.; Man, C. 
N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marque, J.; Martelli, F.; Masserot, A.; Menzinger, F.; Milano, L.; Minenkov, Y.; Moins, C.; Moreau, J.; Morgado, N.; Mosca, S.; Mours, B.; Neri, I.; Nocera, F.; Pagliaroli, G.; Palomba, C.; Paoletti, F.; Pardi, S.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Piergiovanni, F.; Pinard, L.; Poggiani, R.; Punturo, M.; Puppo, P.; Rapagnani, P.; Regimbau, T.; Remillieux, A.; Ricci, F.; Ricciardi, I.; Rocchi, A.; Rolland, L.; Romano, R.; Ruggi, P.; Russo, G.; Solimeno, S.; Spallicci, A.; Swinkels, B. L.; Tarallo, M.; Terenzi, R.; Toncelli, A.; Tonelli, M.; Tournefier, E.; Travasso, F.; Vajente, G.; van den Brand, J. F. J.; van der Putten, S.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinet, J. -Y.; Vocca, H.; Yvert, M.
2008-01-01
We present a method to search for transient gravitational waves using a network of detectors with different spectral and directional sensitivities: the interferometer Virgo and the bar detector AURIGA. The data analysis method is based on the measurements of the correlated energy in the network by
The systematic error of temperature noise correlation measurement method and self-calibration
International Nuclear Information System (INIS)
Tian Hong; Tong Yunxian
1993-04-01
The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed, and a theoretical calibration method is proposed which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments
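The principle behind time-correlation velocimetry can be sketched as follows (all numbers are illustrative, not from the paper): a temperature fluctuation recorded upstream reappears downstream after a transit time τ, the cross-correlation of the two signals peaks at τ, and the velocity is v = L / τ for sensor spacing L.

```python
import numpy as np

# Two temperature sensors a distance L apart sample the same fluctuation,
# the downstream one delayed by the transit time (here 25 samples at 1 kHz).
rng = np.random.default_rng(2)
fs = 1000.0                  # sampling rate, Hz
delay = 25                   # transit time in samples (25 ms)
L = 0.10                     # sensor spacing, m

noise = rng.standard_normal(5000)
upstream = noise
downstream = np.roll(noise, delay)   # ideal transport: a pure time shift

# Circular cross-correlation via FFT; the peak index is the transit time.
xc = np.fft.irfft(np.fft.rfft(downstream) * np.conj(np.fft.rfft(upstream)))
tau = np.argmax(xc) / fs
v = L / tau
print(v)   # 4.0 m/s  (0.10 m / 0.025 s)
```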
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
Edwards, Jonathan; Lallier, Florent; Caumon, Guillaume; Carpentier, Cédric
2018-02-01
We discuss the sampling and the volumetric impact of stratigraphic correlation uncertainties in basins and reservoirs. From an input set of wells, we evaluate the probability for two stratigraphic units to be associated using an analog stratigraphic model. In the presence of multiple wells, this method sequentially updates a stratigraphic column defining the stratigraphic layering for each possible set of realizations. The resulting correlations are then used to create stratigraphic grids in three dimensions. We apply this method on a set of synthetic wells sampling a forward stratigraphic model built with Dionisos. To perform cross-validation of the method, we introduce a distance comparing the relative geological time of two models for each geographic position, and we compare the models in terms of volumes. Results show the ability of the method to automatically generate stratigraphic correlation scenarios, and also highlight some challenges when sampling stratigraphic uncertainties from multiple wells.
Turbulent flow and temperature noise simulation by a multiparticle Monte Carlo method
International Nuclear Information System (INIS)
Hughes, G.; Overton, R.S.
1980-10-01
A statistical method of simulating real-time temperature fluctuations in liquid sodium pipe flow, for potential application to the estimation of temperature signals generated by subassembly blockages in LMFBRs is described. The method is based on the empirical characterisation of the flow by turbulence intensity and macroscale, radial velocity correlations and spectral form. These are used to produce realisations of the correlated motion of successive batches of representative 'marker particles' released at discrete time intervals into the flow. Temperature noise is generated by the radial mixing of the particles as they move downstream from an assumed mean temperature profile, where they acquire defined temperatures. By employing multi-particle batches, it is possible to perform radial heat transfer calculations, resulting in axial dissipation of the temperature noise levels. A simulated temperature-time signal is built up by recording the temperature at a given point in the flow as each batch of particles reaches the radial measurement plane. This is an advantage over conventional techniques which can usually only predict time-averaged parameters. (U.K.)
Simulation and Verification of Flow in Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm; Szabo, Peter; Geiker, Mette Rica
2005-01-01
Simulations and experimental results of the L-box and slump flow tests of a self-compacting mortar and a self-compacting concrete are compared. The simulations are based on a single-fluid approach and assume ideal Bingham behavior. It is possible to simulate the experimental results of both tests...
An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers
Chen, S.; Hanzo, L.
2000-01-01
An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
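The core of importance sampling can be shown in a few lines (a minimal sketch in the spirit of the abstract, not the authors' DFE-specific design): to estimate a tail probability that naive Monte Carlo almost never hits, sample from a density biased into the tail and re-weight each sample by the likelihood ratio.

```python
import numpy as np

# Estimate p = P(X > 4) for X ~ N(0,1). Sampling from N(4,1) places the
# samples where the rare event lives; the likelihood ratio corrects the bias.
rng = np.random.default_rng(3)
t, n = 4.0, 100_000

y = rng.normal(loc=t, scale=1.0, size=n)     # biased simulation density N(t,1)
w = np.exp(-t * y + t * t / 2.0)             # likelihood ratio N(0,1)/N(t,1)
p_is = np.mean((y > t) * w)
print(p_is)   # ≈ 3.17e-5 (the exact value is Q(4) ≈ 3.167e-5)
```

The bias (here the mean shift t) plays the role of the bias vectors mentioned in the abstract: chosen well, it makes the estimator's relative error nearly independent of how small the target probability is.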
Numerical simulation for cracks detection using the finite elements method
Directory of Open Access Journals (Sweden)
S Bennoud
2016-09-01
Full Text Available The means of detection must ensure controls either during initial construction, or at the time of exploitation of all parts. The Non destructive testing (NDT gathers the most widespread methods for detecting defects of a part or review the integrity of a structure. In the areas of advanced industry (aeronautics, aerospace, nuclear …, assessing the damage of materials is a key point to control durability and reliability of parts and materials in service. In this context, it is necessary to quantify the damage and identify the different mechanisms responsible for the progress of this damage. It is therefore essential to characterize materials and identify the most sensitive indicators attached to damage to prevent their destruction and use them optimally. In this work, simulation by finite elements method is realized with aim to calculate the electromagnetic energy of interaction: probe and piece (with/without defect. From calculated energy, we deduce the real and imaginary components of the impedance which enables to determine the characteristic parameters of a crack in various metallic parts.
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
Energy Technology Data Exchange (ETDEWEB)
Kunz, Josiah [Anderson U.; Snopok, Pavel [Fermilab; Berz, Martin [Michigan State U.; Makino, Kyoko [Michigan State U.
2018-03-28
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
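The hybrid idea can be caricatured in a few lines (a schematic sketch, not COSY Infinity; all numbers are illustrative): a deterministic transfer map advances each particle's phase-space coordinates through an element, and a Monte Carlo kick adds the stochastic scattering angle accumulated in an absorber.

```python
import numpy as np

# Columns of `beam` are (x, x') phase-space pairs for 10 000 particles.
rng = np.random.default_rng(7)
L = 0.5                                   # drift length, m
M = np.array([[1.0, L], [0.0, 1.0]])      # linear transfer map of a drift
theta0 = 5e-3                             # rms scattering angle per absorber, rad

beam = np.zeros((2, 10_000))
for _ in range(20):                       # 20 drift + absorber cells
    beam = M @ beam                       # deterministic transfer-map step
    beam[1] += theta0 * rng.standard_normal(beam.shape[1])  # stochastic kick
print(beam[1].std())   # rms angle grows like theta0 * sqrt(n_cells)
```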
Valassi, A
2014-01-01
We discuss the effect of large positive correlations in the combinations of several measurements of a single physical quantity using the Best Linear Unbiased Estimate (BLUE) method. We suggest a new approach for comparing the relative weights of the different measurements in their contributions to the combined knowledge about the unknown parameter, using the well-established concept of Fisher information. We argue, in particular, that one contribution to information comes from the collective interplay of the measurements through their correlations and that this contribution cannot be attributed to any of the individual measurements alone. We show that negative coefficients in the BLUE weighted average invariably indicate the presence of a regime of high correlations, where the effect of further increasing some of these correlations is that of reducing the error on the combined estimate. In these regimes, we stress that the correlations provided as input to BLUE combinations need to be assessed with extreme ca...
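A toy BLUE combination of two correlated measurements illustrates the regime the abstract describes (numbers are made up): once the correlation ρ exceeds the ratio of the two uncertainties, the more precise measurement gets a weight above one, the other goes negative, and increasing ρ further shrinks the combined error.

```python
import numpy as np

def blue(values, cov):
    """BLUE weights, estimate, and variance for measurements of one quantity."""
    ones = np.ones(len(values))
    w = np.linalg.solve(cov, ones)
    w /= ones @ w                       # BLUE weights sum to 1
    return w, w @ values, w @ cov @ w

s1, s2, rho = 1.0, 2.0, 0.8             # rho > s1/s2 = 0.5: high-correlation regime
cov = np.array([[s1 * s1, rho * s1 * s2],
                [rho * s1 * s2, s2 * s2]])
w, est, var = blue(np.array([10.0, 11.0]), cov)
print(w)    # first weight > 1, second negative
print(var)  # smaller than s1**2: the combination beats the best single measurement
```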
Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.
Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh
2016-06-01
Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a Gene Co-expression Network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes related to venous thrombosis disease, built a matrix whose entries represent the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with Cytoscape (based on BIND) and Gene Ontology (based on molecular function) visual methods; R software version 3.2 and Bioconductor were used to perform these analyses. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some of the correlation-coefficient results do not agree with the visualization, possibly because of the small number of data.
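The two-step construction described above (score all gene pairs, then threshold) can be sketched directly; the data here are synthetic, not the six thrombosis genes, and the threshold value is an arbitrary choice for illustration.

```python
import numpy as np

# Three synthetic "expression profiles": g2 is strongly co-expressed with g1,
# g3 is unrelated to both.
rng = np.random.default_rng(4)
n_samples, threshold = 200, 0.5
g1 = rng.standard_normal(n_samples)
g2 = g1 + 0.3 * rng.standard_normal(n_samples)
g3 = rng.standard_normal(n_samples)
X = np.stack([g1, g2, g3])

def spearman(a, b):
    """Spearman's rank correlation = Pearson correlation of the ranks."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Step 1: score all pairs. Step 2: connect pairs whose |score| exceeds the threshold.
pearson = np.corrcoef(X)
adj = (np.abs(pearson) > threshold) & ~np.eye(3, dtype=bool)
print(adj.astype(int))            # only the g1–g2 edge survives
print(round(float(spearman(g1, g2)), 2))
```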
An improved correlated sampling method for calculating correction factor of detector
International Nuclear Information System (INIS)
Wu Zhen; Li Junli; Cheng Jianping
2006-01-01
In the case of a small detector lying inside a bulk medium, two problems arise in calculating the correction factors of the detector: the detector is too small for particles to reach and collide in, and the ratio of the two quantities involved is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of the detector correction factors. The results prove that, although all the variance reduction techniques combined with correlated sampling improve the calculating efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)
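The variance-cancellation mechanism of correlated sampling can be shown generically (a sketch of the principle, not the MCNP-4C implementation): when a correction factor is a ratio of two nearly identical tallies, reusing the same random histories for numerator and denominator makes their statistical errors cancel in the ratio.

```python
import numpy as np

# Two close "tally" integrands on [0, 1] stand in for the unperturbed and
# slightly perturbed detector problems.
rng = np.random.default_rng(5)
f = lambda x: np.exp(-x)          # unperturbed tally
g = lambda x: np.exp(-1.02 * x)   # slightly perturbed tally

def ratio(correlated, n=2000):
    x = rng.random(n)
    y = x if correlated else rng.random(n)   # correlated sampling reuses x
    return np.mean(g(y)) / np.mean(f(x))

corr = np.array([ratio(True) for _ in range(200)])
indep = np.array([ratio(False) for _ in range(200)])
print(corr.std(), indep.std())   # spread of the correlated ratios is far smaller
```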
From plane waves to local Gaussians for the simulation of correlated periodic systems
International Nuclear Information System (INIS)
Booth, George H.; Tsatsoulis, Theodoros; Grüneis, Andreas; Chan, Garnet Kin-Lic
2016-01-01
We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus, smoothing the functions so that they are faithfully represented by a plane wave basis with only moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or able to be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.
From plane waves to local Gaussians for the simulation of correlated periodic systems
Energy Technology Data Exchange (ETDEWEB)
Booth, George H., E-mail: george.booth@kcl.ac.uk [Department of Physics, King’s College London, Strand, London WC2R 2LS (United Kingdom); Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de [Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart (Germany); Chan, Garnet Kin-Lic [Frick Laboratory, Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States)
2016-08-28
We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus, smoothing the functions so that they are faithfully represented by a plane wave basis with only moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or able to be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.
Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li
2016-05-15
Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models either use group lasso or graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits the capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound for the grouping effect for both positively and negatively correlated features. We show that our method performs better than or equally to three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/. Contact: shenli@iu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Directory of Open Access Journals (Sweden)
Mohamed Mehana
2016-06-01
Full Text Available The development of shale reservoirs has brought a paradigm shift in the worldwide energy equation. This entails developing robust techniques to properly evaluate and unlock the potential of those reservoirs. The application of Nuclear Magnetic Resonance techniques to fluid typing and property estimation is well developed in conventional reservoirs. However, shale reservoir characteristics such as pore size, organic matter, clay content, wettability, adsorption, and mineralogy limit the applicability of the usual interpretation methods and correlations. Some of these limitations include the inapplicability of the governing equations, which were derived assuming a fast relaxation regime, the overlap of the peaks of different fluids, and the lack of robust correlations for estimating fluid properties in shale. This study presents a state-of-the-art review of the main contributions on fluid typing methods and correlations, on both the experimental and theoretical sides. The study covers Dual Tw, Dual Te, and doping agent applications, as well as the T1-T2, D-T2 and T2sec vs. T1/T2 methods. In addition, the estimation of fluid properties such as density, viscosity and the gas-oil ratio is discussed. This study investigates the applicability of these methods along with the current fluid property correlations and their limitations, and recommends the methods and correlations best capable of tackling shale heterogeneity.
Method of correlation operators in the theory of a system of particles with strong interactions
International Nuclear Information System (INIS)
Kuz'min, Y.M.
1985-01-01
A similarity transformation of the density matrix is performed with the help of the correlation operator. This does not change the value of the partition function. A method of calculating the transformed partition function with the help of a finite translation operator is given. A general system of coupled equations is obtained from which the matrix elements of correlation operators of increasing order can be found
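The invariance that underpins the method is the cyclic property of the trace: a similarity transformation by the correlation operator leaves the partition function unchanged. As a standard identity (notation assumed here, not taken from the paper):

```latex
Z \;=\; \operatorname{Tr}\, e^{-\beta \hat{H}}
  \;=\; \operatorname{Tr}\!\left( \hat{S}\, e^{-\beta \hat{H}}\, \hat{S}^{-1} \right),
\qquad \hat{S} = e^{\hat{C}},
```

with Ĉ the correlation operator, so the transformed partition function can be evaluated with correlations built into the operators rather than the states.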
Application of the spectral-correlation method for diagnostics of cellulose paper
Kiesewetter, D.; Malyugin, V.; Reznik, A.; Yudin, A.; Zhuravleva, N.
2017-11-01
The spectral-correlation method was described for diagnostics of optically inhomogeneous biological objects and materials of natural origin. The interrelation between parameters of the studied objects and parameters of the cross correlation function of speckle patterns produced by scattering of coherent light at different wavelengths is shown for thickness, optical density and internal structure of the material. A detailed study was performed for cellulose electric insulating paper with different parameters.
Gradient augmented level set method for phase change simulations
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
Dontje, T.; Lippert, Th.; Petkov, N.; Schilling, K.
1992-01-01
Autocorrelation becomes an increasingly important tool to verify improvements in the state of the simulational art in Lattice Gauge Theory. Semi-systolic and full-systolic algorithms are presented which are intensively used for correlation computations on the Connection Machine CM-2. The
Akçay, A.E.; Biller, B.
2014-01-01
We consider an assemble-to-order production system where the product demands and the time since the last customer arrival are not independent. The simulation of this system requires a multivariate input model that generates random input vectors with correlated discrete and continuous components. In
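One standard way to build such an input model is a NORTA/Gaussian-copula construction (a sketch of the general technique; the marginals and parameters below are made up, not those of the paper): draw correlated standard normals, then push each coordinate through its target marginal, here a geometric "demand" (discrete) and an exponential "time since last arrival" (continuous).

```python
import numpy as np
from math import erf, sqrt

# Step 1: correlated standard normals with the desired dependence.
rng = np.random.default_rng(6)
rho = 0.7
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=50_000)

# Step 2: copula step — map normals to correlated uniforms via the N(0,1) CDF.
phi = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0))))
u = np.clip(phi(z), 1e-12, 1.0 - 1e-12)   # clip to keep the inverse CDFs finite

# Step 3: inverse-CDF transforms to the target marginals.
demand = np.floor(np.log1p(-u[:, 0]) / np.log(0.6)).astype(int)  # geometric(p=0.4)
gap = -np.log1p(-u[:, 1])                                        # exponential(1)
print(np.corrcoef(demand, gap)[0, 1])   # positive, somewhat attenuated from rho
```

The attenuation of the output correlation relative to ρ is the usual price of the marginal transforms; NORTA-style methods search for the input ρ that yields the desired output correlation.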
Simulation of neutral gas flow in a tokamak divertor using the Direct Simulation Monte Carlo method
International Nuclear Information System (INIS)
Gleason-González, Cristian; Varoutis, Stylianos; Hauer, Volker; Day, Christian
2014-01-01
Highlights: • Sub-divertor gas flow calculations in tokamaks by coupling B2-EIRENE and the DSMC method. • The results include pressure, temperature, bulk velocity and particle fluxes in the sub-divertor. • A gas recirculation effect towards the plasma chamber through the vertical targets is found. • Comparison between DSMC and the ITERVAC code reveals very good agreement. - Abstract: This paper presents a novel scientific and engineering approach for describing sub-divertor gas flows of fusion devices by coupling the B2-EIRENE (SOLPS) code and the Direct Simulation Monte Carlo (DSMC) method. The present study exemplifies this with a computational investigation of neutral gas flow in ITER's sub-divertor region. The numerical results include the flow fields and contours of the overall quantities of practical interest, such as the pressure, the temperature and the bulk velocity, assuming helium as the model gas. Moreover, the study unravels the gas recirculation effect located behind the vertical targets, viz. neutral particles flowing towards the plasma chamber. Comparison between calculations performed by the DSMC method and the ITERVAC code reveals a very good agreement along the main sub-divertor ducts
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
International Nuclear Information System (INIS)
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-01
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
Kim, Yoonsang; Emery, Sherry
2013-01-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
Inferring the photometric and size evolution of galaxies from image simulations. I. Method
Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien
2017-09-01
Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolution for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Full Text Available Bayesian statistical methods are attracting rapidly growing interest and acceptance in the field of health economics. The reasons for this success probably lie in the theoretical foundations of the discipline, which make these techniques particularly appealing for decision analysis, together with modern IT progress, which has produced flexible and powerful statistical software frameworks. Among these, one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach lie in the elegance of the code produced and in its capability to easily support probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
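The mechanics of a probabilistic Markov simulation of the kind described can be sketched without BUGS at all. Below is a minimal cohort model in Python with invented states, costs and Beta priors, purely illustrative (WinBUGS would instead express the priors and the chain in the BUGS language):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical three-state cohort model (Well, Sick, Dead); all numbers
# are illustrative placeholders, not taken from the article.
n_cycles, n_sims = 20, 1000
cost = np.array([100.0, 1500.0, 0.0])       # cost per cycle by state
utility = np.array([1.0, 0.6, 0.0])         # QALY weight per cycle by state

totals = np.zeros((n_sims, 2))              # accumulated cost and QALYs
for s in range(n_sims):
    p_ws = rng.beta(20, 80)                 # uncertain Well -> Sick probability
    p_sd = rng.beta(10, 90)                 # uncertain Sick -> Dead probability
    P = np.array([[1 - p_ws, p_ws, 0.0],
                  [0.0, 1 - p_sd, p_sd],
                  [0.0, 0.0, 1.0]])         # Dead is absorbing
    state = np.array([1.0, 0.0, 0.0])       # whole cohort starts Well
    for _ in range(n_cycles):
        state = state @ P                   # one Markov cycle
        totals[s] += [state @ cost, state @ utility]

mean_cost, mean_qaly = totals.mean(axis=0)
```

Re-drawing the transition probabilities each run is what makes the simulation probabilistic: the spread of `totals` across runs expresses parameter uncertainty in the economic outputs.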
The multiphonon method as a dynamical approach to octupole correlations in deformed nuclei
International Nuclear Information System (INIS)
Piepenbring, R.
1986-09-01
The octupole correlations in nuclei are studied within the framework of the multiphonon method, which consists essentially of the exact diagonalization of the total Hamiltonian in the space spanned by collective phonons. This treatment properly takes into account the Pauli principle. It is a microscopic approach based on a reflection symmetry of the potential. The spectroscopic properties of even-even and odd-mass nuclei are well reproduced. The multiphonon method appears as a dynamical approach to octupole correlations in nuclei which can be compared to other models based on stable octupole deformation. 66 refs
Numerical Simulation of Tubular Pumping Systems with Different Regulation Methods
Zhu, Honggeng; Zhang, Rentian; Deng, Dongsheng; Feng, Xusong; Yao, Linbi
2010-06-01
Since the flow in tubular pumping systems is basically along the axial direction and passes symmetrically through the impeller, largely satisfying the basic hypotheses of impeller design and yielding higher pumping system efficiency than vertical pumping systems, they are being widely applied to low-head pumping engineering. In a pumping station, fluctuation of the water levels in the sump and discharge pool is common, and much of the time the pumping system runs under off-design conditions. Hence, the operation of the pump has to be flexibly regulated to meet the required flow rates, and the selection of the regulation method is as important as that of the pump for reducing operation cost and achieving economic operation. In this paper, the three-dimensional time-averaged Navier-Stokes equations are closed by the RNG k-ε turbulence model, and two tubular pumping systems with different regulation methods, equipped with the same pump model but with differently designed system structures, are numerically simulated to predict the pumping system performances, analyze the influence of the regulation device, and help designers make the final decision in the selection of design schemes. The computed results indicate that the pumping system with a blade-adjusting device needs a longer suction box, and the increased hydraulic loss lowers the pumping system efficiency by about 1.5%. The pumping system with a permanent magnet motor, by means of variable speed regulation, obtains higher system efficiency, partly because of the shorter suction box and partly because of the different structural design. Nowadays, variable speed regulation is realized by a variable frequency device, whose energy consumption is about 3-4% of the motor output power. Hence, when the efficiency of the variable frequency device is considered, the total pumping system efficiency will probably be lower.
International Nuclear Information System (INIS)
Sato, Masanori; Matsubara, Takahiko; Takada, Masahiro; Hamana, Takashi
2011-01-01
Using 1000 ray-tracing simulations for a Λ-dominated cold dark matter model in Sato et al., we study the covariance matrix of cosmic shear correlation functions, which are the standard statistics used in previous measurements. The shear correlation function at a particular separation angle is affected by Fourier modes over a wide range of multipoles, even beyond the survey area, which complicates the analysis of the covariance matrix. To overcome such obstacles we first construct Gaussian shear simulations from the 1000 realizations and then use the Gaussian simulations to disentangle the Gaussian contribution to the covariance matrix we measured from the original simulations. We found that an analytical formula for the Gaussian covariance overestimates the covariance amplitudes due to the effect of the finite survey area. Furthermore, the clean separation of the Gaussian covariance allows us to examine the non-Gaussian covariance contributions as a function of separation angles and source redshifts. For upcoming surveys with typical source redshifts of z s = 0.6 and 1.0, the non-Gaussian contribution to the diagonal covariance components at 1 arcmin scales is greater than the Gaussian contribution by a factor of 20 and 10, respectively. Predictions based on the halo model reproduce the simulation results qualitatively well, but show a sizable disagreement in the covariance amplitudes. By combining these simulation results we develop a fitting formula for the covariance matrix for a survey with arbitrary area coverage, taking into account the effects of the finite survey area on the Gaussian covariance.
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods improved generally as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble member. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
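A weighted ensemble average driven by root mean square error and correlation, in the spirit of the WEA_RAC scheme above, can be sketched as follows. The weighting formula (correlation divided by RMSE) is my own illustrative assumption, not the paper's exact definition:

```python
import numpy as np

def wea_rac(members, obs_train, eps=1e-12):
    """Weighted ensemble average: weight each member by its training-period
    correlation divided by its RMSE (illustrative reading of an
    RMSE-and-correlation based weighting scheme)."""
    members = np.asarray(members, float)              # (n_members, n_time)
    rmse = np.sqrt(np.mean((members - obs_train) ** 2, axis=1))
    corr = np.array([np.corrcoef(m, obs_train)[0, 1] for m in members])
    w = np.clip(corr, 0.0, None) / (rmse + eps)       # penalize bias, reward corr
    w /= w.sum()
    return w, members.T @ w                           # weights and ensemble mean

rng = np.random.default_rng(3)
t = np.linspace(0.0, 4.0 * np.pi, 200)
truth = np.sin(t)                                     # pseudo "observations"
members = np.stack([truth + 0.1 * rng.standard_normal(t.size),   # skilful member
                    truth + 1.5,                                 # biased member
                    0.5 * truth + 0.5 * rng.standard_normal(t.size)])
w, ens = wea_rac(members, truth)
```

As in the paper's pseudo-simulation experiments, the systematically biased member receives a small weight, so the weighted mean stays close to the truth even though one member is far off.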
Daru, R.; Venemans, P.
1998-01-01
Visualisation, simulation and communication have always been intimately interconnected. Visualisations and simulations represent existing or virtual realities, and without such tools it is arduous to communicate mental depictions of virtual objects and events. A communication model is presented to
Blank, D. G.; Morgan, J.
2017-12-01
Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrate both their complexity, and our limited understanding of fault processes and their controls. Numerical modeling provides us with a useful tool that we can use to simulate earthquakes and related slip events, and to make direct observations and correlations among properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.
Restoring method for missing data of spatial structural stress monitoring based on correlation
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for the full understanding of their performance and safety. Missing parts of the monitoring data record will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data of correlated measuring points are selected from the three months of the season in which the data are missing. Data for daytime and nighttime are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once the number of correlated points exceeds 6. The stress baseline value of the construction step should be calculated before interpolating missing data in the construction stage, and the average error is within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
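The correlation-based restoring idea, fitting a regression between a point with missing data and a correlated point over a period when both were recorded, can be sketched as below. The function name and the synthetic stress series are hypothetical, for illustration only:

```python
import numpy as np

def restore_missing(target_hist, ref_hist, ref_gap):
    """Fit target ~ a*ref + b over the period where both measuring points
    were recorded, then predict the target during its gap from the
    correlated reference point (simple linear regression case)."""
    A = np.column_stack([ref_hist, np.ones_like(ref_hist)])
    (a, b), *_ = np.linalg.lstsq(A, target_hist, rcond=None)
    return a * ref_gap + b

rng = np.random.default_rng(0)
ref = rng.normal(50.0, 10.0, 300)                     # correlated point (MPa)
target = 2.0 * ref + 5.0 + rng.normal(0.0, 1.0, 300)  # strongly correlated point
ref_gap = rng.normal(50.0, 10.0, 30)                  # reference during the gap
restored = restore_missing(target, ref, ref_gap)
```

The multiple-regression variant in the paper simply stacks several correlated reference points as extra columns of `A`.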
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Simulation as a Method of Teaching Communication for Multinational Corporations.
Stull, James B.; Baird, John W.
Interpersonal simulations may be used as a module in cultural awareness programs to provide realistic environments in which students, supervisors, and managers may practice communication skills that are effective in multicultural environments. To conduct and implement a cross-cultural simulation, facilitators should proceed through four stages:…
Reliability analysis based on a novel density estimation method for structures with correlations
Directory of Open Access Journals (Sweden)
Baoyu LI
2017-06-01
Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be easily obtained by integration over the failure domain. However, efficiently estimating the PDF is still an open problem. The existing fractional-moment-based maximum entropy method provides a very advanced approach to PDF estimation, but its main shortcoming is that it limits the reliability analysis to structures with independent inputs, whereas structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method by applying the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with arbitrary inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required by the proposed method in reliability analysis, which is determined by UT, is small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
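The Unscented Transformation itself is standard and easy to sketch. The version below propagates 2n+1 sigma points of a correlated Gaussian input through a performance function to recover raw moments (the paper applies UT to fractional moments; the sigma-point scheme and the kappa parameter here are common textbook choices, not the paper's settings):

```python
import numpy as np

def unscented_moments(g, mu, cov, kappa=0.0):
    """Mean and variance of g(X), X ~ N(mu, cov), from 2n+1 sigma points
    (standard unscented transform; exact for linear g)."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    pts = [np.asarray(mu, float)]
    pts += [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    gy = np.array([g(p) for p in pts])          # only 2n+1 function evaluations
    mean = w @ gy
    return mean, w @ (gy - mean) ** 2

# Correlated inputs and a linear performance function, for which UT is exact:
mu = np.array([1.0, 2.0])
cov = np.array([[1.0, 0.5], [0.5, 2.0]])
m, v = unscented_moments(lambda z: 2.0 * z[0] - z[1] + 3.0, mu, cov)
```

The 2n+1 evaluations, independent of how nonlinear `g` is, are the "small number of function evaluations" the abstract refers to.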
A DATA FIELD METHOD FOR URBAN REMOTELY SENSED IMAGERY CLASSIFICATION CONSIDERING SPATIAL CORRELATION
Directory of Open Access Journals (Sweden)
Y. Zhang
2016-06-01
Full Text Available Spatial correlation between pixels is important information for remotely sensed imagery classification. The data field method and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to characterise and quantify well the spatial correlation between each pixel and its neighbourhood pixels, but the extracted object is badly delineated, with the distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel when computing statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multiple features (e.g. the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.
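Geary's C, used above to quantify spatial correlation, is straightforward to compute. A minimal global version with rook-contiguity (4-neighbour) weights is sketched below; the paper applies the statistic per local window, which would reuse the same formula on each window:

```python
import numpy as np

def gearys_c(img):
    """Global Geary's C with rook (4-neighbour) binary weights.
    C near 1: no spatial autocorrelation; C < 1: positive (smooth);
    C > 1: negative (alternating, salt-and-pepper-like)."""
    x = np.asarray(img, float)
    n = x.size
    dev = np.sum((x - x.mean()) ** 2)
    # (x_i - x_j)^2 over unordered horizontal/vertical neighbour pairs,
    # doubled because the symmetric weight matrix counts each pair twice.
    num = 2 * (np.sum((x[:, 1:] - x[:, :-1]) ** 2)
               + np.sum((x[1:, :] - x[:-1, :]) ** 2))
    W = 2 * (x.shape[0] * (x.shape[1] - 1) + x.shape[1] * (x.shape[0] - 1))
    return (n - 1) * num / (2 * W * dev)

grad = np.tile(np.arange(8.0)[:, None], (1, 8))   # smooth ramp: C << 1
checker = np.indices((8, 8)).sum(axis=0) % 2      # alternating grid: C > 1
```

The two toy images show both regimes: a smooth gradient gives strong positive autocorrelation, a checkerboard strong negative autocorrelation.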
Iritani, Takumi
2018-03-01
Both the direct and the HAL QCD methods are currently used to study hadron interactions in lattice QCD. In the direct method, the eigen-energy of the two-particle system is measured from the temporal correlation. Due to the contamination of excited states, however, the direct method suffers from fake eigen-energies, which we call the "mirage problem," while the HAL QCD method can extract information from all elastic states by using the spatial correlation. In this work, we further investigate systematic uncertainties of the HAL QCD method, such as the quark source operator dependence, the convergence of the derivative expansion of the non-local interaction kernel, and the single-baryon saturation, which are found to be well controlled. We also confirm the consistency between the HAL QCD method and Lüscher's finite volume formula. Based on the HAL QCD potential, we quantitatively confirm that the mirage plateau in the direct method is indeed caused by the contamination of excited states.
International Nuclear Information System (INIS)
Lissillour, R.; Guerillot, C.R.
1975-01-01
The self-correlated field method is based on the insertion in the group product wave function of pair functions built upon a set of correlated "local" functions and of "nonlocal" functions. This work is an application to three-electron systems. The effects of the outer electron on the inner pair are studied. The total electronic energy and some intermediate results, such as pair energies and Coulomb and exchange "correlated" integrals, are given. The results are always better than those given by conventional SCF computations and reach the same level of accuracy as those given by more laborious methods used in correlation studies. (auth)
Research on criticality analysis method of CNC machine tools components under fault rate correlation
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relations are arranged hierarchically using the interpretive structural model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rates under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
Peng, Di; Saravia, Florencia; Abbt-Braun, Gudrun; Horn, Harald
2016-01-01
Trihalomethanes (THM) are the most typical disinfection by-products (DBPs) found in public swimming pool water. DBPs are produced when organic and inorganic matter in water reacts with chemical disinfectants. The irregular contribution of substances from pool visitors and the long contact time with disinfectant make the forecasting of THM in pool water a challenge. In this work the occurrence of THM in a public indoor swimming pool was investigated and correlated with the dissolved organic carbon (DOC). Daily sampling of pool water for 26 days showed a positive correlation between DOC and THM with a time delay of about two days, while THM and DOC did not correlate directly with the number of visitors. Based on the results and a mass balance in the pool water, a simple simulation model for estimating the THM concentration in indoor swimming pool water was proposed. Formation of THM from DOC, volatilization into air and elimination by pool water treatment were included in the simulation. The formation ratio of THM obtained from laboratory analysis of native pool water, together with information from a field study in an indoor swimming pool, reduced the uncertainty of the simulation. The simulation was validated by measurements in the swimming pool for 50 days. The simulated results were in good compliance with the measured results. This work provides a useful and simple method for predicting the THM concentration and its long-term accumulation trend in indoor swimming pool water. Copyright © 2015 Elsevier Ltd. All rights reserved.
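The mass-balance structure described (formation from DOC, volatilization into air, removal by treatment) can be sketched as a one-compartment ODE. All rate constants below are illustrative placeholders, not the study's fitted values:

```python
import numpy as np

def simulate_thm(days, doc, k_form=0.02, k_vol=0.3, k_treat=0.2, dt=0.1):
    """Forward-Euler integration of a hypothetical pool mass balance:
    d(THM)/dt = k_form * DOC(t) - (k_vol + k_treat) * THM.
    doc is a function of time (days); rates are per day."""
    n = int(days / dt)
    t = np.linspace(0.0, days, n + 1)
    thm = np.zeros(n + 1)
    for i in range(n):
        dthm = k_form * doc(t[i]) - (k_vol + k_treat) * thm[i]
        thm[i + 1] = thm[i] + dt * dthm
    return t, thm

# Constant DOC of 3 mg/L drives THM toward the steady state
# k_form * DOC / (k_vol + k_treat) = 0.02 * 3 / 0.5 = 0.12.
t, thm = simulate_thm(50.0, lambda _: 3.0)
```

Feeding a time-lagged DOC series into `doc` reproduces the two-day-delay behaviour reported in the abstract: THM accumulates toward a moving balance between formation and the two loss terms.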
Direct numerical simulation of turbulent pipe flow using the lattice Boltzmann method
Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping
2018-03-01
In this paper, we present a first direct numerical simulation (DNS) of a turbulent pipe flow using the mesoscopic lattice Boltzmann method (LBM) on both a D3Q19 lattice grid and a D3Q27 lattice grid. DNS of turbulent pipe flows using LBM has never been reported previously, perhaps due to the inaccuracy and numerical instability associated with previous implementations of LBM in the presence of a curved solid surface. In fact, it was even speculated that the D3Q19 lattice might be inappropriate as a DNS tool for turbulent pipe flows. In this paper, we show that, through careful implementation, accurate turbulent statistics can be obtained using both D3Q19 and D3Q27 lattice grids. In the simulation with the D3Q19 lattice, a few problems related to the numerical stability of the simulation are exposed, and discussions and solutions for those problems are provided. The simulation with the D3Q27 lattice, on the other hand, is found to be more stable than its D3Q19 counterpart. The resulting turbulent flow statistics at a friction Reynolds number of Reτ = 180 are compared systematically with both published experimental and other DNS results based on solving the Navier-Stokes equations. The comparisons cover the mean-flow profile, the r.m.s. velocity and vorticity profiles, the mean and r.m.s. pressure profiles, the velocity skewness and flatness, and spatial correlations and energy spectra of velocity and vorticity. Overall, we conclude that both D3Q19 and D3Q27 simulations yield accurate turbulent flow statistics. The use of the D3Q27 lattice is shown to suppress the weak secondary flow pattern in the mean flow due to numerical artifacts.
Directory of Open Access Journals (Sweden)
V. L. Kozlov
2018-01-01
Full Text Available To solve the problem of increasing the accuracy of restoring a three-dimensional picture of space from two-dimensional digital images, it is necessary to use new, effective techniques and algorithms for the processing and correlation analysis of digital images. Tools are being actively developed that reduce the time costs of processing stereo images, improve the quality of depth-map construction and automate that construction. The aim of this work is to investigate the possibilities of using various digital image processing techniques to improve the measurement accuracy of a rangefinder based on the correlation analysis of a stereo image. The results of studies of the influence of colour-channel mixing techniques on distance measurement accuracy are presented for various functions realizing correlation processing of images. Studies analysing the possibility of using an integral representation of images to reduce the time cost of constructing a depth map are proposed, as are studies of the possibility of prefiltering images before correlation processing when measuring distance by stereo imaging. It is found that uniform mixing of channels minimizes the total number of measurement errors, while brightness extraction according to the sRGB standard increases the number of errors for all of the considered correlation processing techniques. The integral representation of the image makes it possible to accelerate the correlation processing, but this approach is useful for depth-map calculation only in images of no more than 0.5 megapixels. Filtering images before correlation processing can provide, depending on the filter parameters, either an increase in the correlation function value, which is useful for analysing noisy images, or compression of the correlation function.
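The integral representation referred to above is the classic summed-area table, which turns every window sum needed in block-based correlation processing into four array lookups. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table padded with a zero first row/column so that
    any rectangular sum needs only four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(np.asarray(img, float), axis=0), axis=1)
    return ii

def window_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1), independent of window size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(48, 64))   # stand-in for a grey-level image
ii = integral_image(img)
```

Building the table is a single O(N) pass, after which every candidate matching window costs the same constant amount, which is exactly the speed-up (and, for very large images, the memory pressure) the study weighs.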
Dual linear structured support vector machine tracking method via scale correlation filter
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
International Nuclear Information System (INIS)
Tan, Cheng-Yang; Fermilab
2006-01-01
One common way for measuring the emittance of an electron beam is with the slits method. The usual approach for analyzing the data is to calculate an emittance that is a subset of the parent emittance. This paper shows an alternative way by using the method of correlations which ties the parameters derived from the beamlets to the actual parameters of the parent emittance. For parent distributions that are Gaussian, this method yields exact results. For non-Gaussian beam distributions, this method yields an effective emittance that can serve as a yardstick for emittance comparisons
DEFF Research Database (Denmark)
Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J
2012-01-01
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.
System reliability with correlated components: Accuracy of the Equivalent Planes method
Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.
2015-01-01
Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing
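The multi-fold integral in question is the failure probability of a system whose component safety margins are correlated. A brute-force Monte Carlo reference for a series system, the expensive computation that methods such as Equivalent Planes are designed to avoid, can be sketched as follows (function name and test case are illustrative):

```python
import numpy as np

def series_failure_prob(betas, corr, n=200_000, seed=1):
    """Monte Carlo estimate of P(any Z_i > beta_i) where Z is a
    correlated standard-normal vector: a series system fails as soon
    as any component margin is exceeded."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                 # impose the correlation
    z = rng.standard_normal((n, len(betas))) @ L.T
    return np.mean(np.any(z > np.asarray(betas), axis=1))

# Two components with reliability index beta = 2 each.
est_indep = series_failure_prob([2.0, 2.0], np.eye(2))
rho = np.array([[1.0, 0.8], [0.8, 1.0]])
est_corr = series_failure_prob([2.0, 2.0], rho)
```

With independence the exact answer is 1 - (1 - Φ(-2))^2 ≈ 0.045; positive correlation pulls the system failure probability down toward the single-component value, which is the effect an equivalent-component reduction must capture.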
Yarlagadda, Anuradha; Murthy, J.V.R.; Krishna Prasad, M.H.M.
2015-01-01
In the computer vision community, categorization of a person's facial image into various age groups is seldom precise and has not been pursued effectively. To address this problem, which is an important area of research, the present paper proposes an innovative age group classification system based on the Correlation Fractal Dimension of complex facial images. Wrinkles appear on the face with aging, thereby changing the facial edges of the image. The proposed method is rotation an...
Improvement of the accuracy of noise measurements by the two-amplifier correlation method.
Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P
2013-10-01
We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.
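The core idea of the two-channel cross-correlation technique, that the DUT noise is common to both channels while each amplifier's own noise is independent and averages out of the cross-spectrum, can be sketched numerically (all signal levels below are arbitrary, and the DUT/amplifier noises are idealized as white):

```python
import numpy as np

rng = np.random.default_rng(7)
n_seg, n = 400, 1024
dut = rng.standard_normal((n_seg, n))                # common DUT noise
ch1 = dut + 0.5 * rng.standard_normal((n_seg, n))    # amplifier 1 adds its own noise
ch2 = dut + 0.5 * rng.standard_normal((n_seg, n))    # amplifier 2 adds its own noise

f1, f2 = np.fft.rfft(ch1), np.fft.rfft(ch2)
# Single-channel auto-spectrum: biased upward by the amplifier noise power.
auto = np.mean(np.abs(f1) ** 2, axis=0)
# Cross-spectrum averaged over segments: the uncorrelated amplifier
# contributions cancel, leaving (asymptotically) only the DUT spectrum.
cross = np.mean(f1 * np.conj(f2), axis=0).real
```

Here each amplifier contributes 25% extra power to the auto-spectrum, so the cross- to auto-spectrum ratio settles near 1/1.25 = 0.8; more segment averages push the residual uncorrelated term further down, which is the accuracy gain the method exploits.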
Directory of Open Access Journals (Sweden)
Shumanova M.V.
2015-03-01
Full Text Available The process of fish salting has been studied by the method of photon correlation spectroscopy; the distribution of salt concentration in the solution and in herring flesh with skin has been found; the diffusion coefficients and salt concentrations used for creating a mathematical model of the salting technology have been worked out; and the possibility of determining by this method the coefficient of dynamic viscosity of solutions and other media (minced meat, etc.) has been considered
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
International Nuclear Information System (INIS)
Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-01-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99m Tc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
Energy Technology Data Exchange (ETDEWEB)
Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-08-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0
Wang Hao; Gao Wen; Huang Qingming; Zhao Feng
2010-01-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...
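The MOCC descriptor itself is not spelled out in this abstract; as a baseline, the correlation measure such matching builds on, normalized cross-correlation between two equal-sized patches, can be sketched as follows (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches.

    Returns a value in [-1, 1]; 1 means identical up to an affine
    brightness/contrast change.
    """
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()                      # remove mean brightness
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0                     # flat patch: correlation undefined
    return float(np.dot(a, b) / denom)
```

Plain NCC is invariant to affine brightness changes but, as the abstract notes, not to rotation or scale; MOCC adds multiscale oriented feature points on top of such a measure.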
Nuclear power plant training simulator system and method
International Nuclear Information System (INIS)
Ferguson, R.W.; Converse, R.E. Jr.
1975-01-01
A system is described for simulating the real-time dynamic operation of a full-scope nuclear-powered electrical generating plant for operator training, utilizing apparatus that includes a control console with plant component control devices and indicating devices for monitoring plant operation. A general purpose digital computer calculates the dynamic simulation data for operating the indicating devices in accordance with the operation of the control devices. The functions for synchronization and calculation are arranged in a priority structure so as to ensure an execution order that provides a maximum overlap of data exchange and simulation calculations. (Official Gazette)
Discrete simulation system based on artificial intelligence methods
Energy Technology Data Exchange (ETDEWEB)
Futo, I; Szeredi, J
1982-01-01
A discrete event simulation system based on the AI language Prolog is presented. The system, called t-Prolog, extends the traditional possibilities of simulation languages toward automatic problem solving by using backtracking in time and automatic model modification depending on logical deductions. As t-Prolog is an interactive tool, the user has the possibility to interrupt the simulation run to modify the model or to force it to return to a previous state to try possible alternatives. It admits the construction of goal-oriented or goal-seeking models with variable structure. Models are defined in a restricted version of the first-order predicate calculus using Horn clauses. 21 references.
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques, classical ray-tracing and the finite-difference time-domain method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
International Nuclear Information System (INIS)
Wu Shengxing; Chen Xudong; Zhou Jikai
2012-01-01
Highlights: ► Tensile strength of concrete increases with increase in strain rate. ► Strain rate sensitivity of tensile strength of concrete depends on test method. ► High stressed volume method can correlate results from various test methods. - Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determination of the tensile strength of concrete under low and intermediate strain rates. In addition, the objective of this investigation is to analyze the suitability of the high stressed volume approach and the Weibull effective volume method for the correlation of the results of different tensile tests of concrete. The test results show that the strain rate sensitivity of tensile strength depends on the type of test: splitting tensile strength of concrete is more sensitive to an increase in the strain rate than flexural and direct tensile strength. The high stressed volume method could be used to obtain a tensile strength value of concrete free from the influence of the characteristics of tests and specimens. However, the Weibull effective volume method is an inadequate method for describing the failure of concrete specimens determined by different testing methods.
Akdenur, B; Okkesum, S; Kara, S; Günes, S
2009-11-01
In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. In order to demonstrate the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical methods, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the compared normalization methods for estimating the effect of the trainer.
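The CCSNM itself is defined in the paper; three of the standard baselines it is compared against can be sketched as follows (NumPy assumed; line base normalization is omitted since its definition is not given here):

```python
import numpy as np

def min_max(x):
    """Min-max normalization: linearly rescale to [0, 1]."""
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """z-score normalization: zero mean, unit (population) std."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Decimal scaling: divide by the smallest power of 10 that maps
    every value strictly inside (-1, 1)."""
    x = np.asarray(x, float)
    m = np.abs(x).max()
    if m == 0.0:
        return x
    j = int(np.floor(np.log10(m))) + 1
    return x / 10.0 ** j
```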
Modified enthalpy method for the simulation of melting and ...
Indian Academy of Sciences (India)
These include the implicit time stepping method of Voller & Cross (1981), the explicit enthalpy method of Tacke (1985), the centroidal temperature correction method ... In the variable viscosity method, viscosity is written as a function of liquid fraction.
Chen, Zhiwen
2017-01-01
Zhiwen Chen aims to develop advanced fault detection (FD) methods for the monitoring of industrial processes. With the ever-increasing demands on reliability and safety in industrial processes, fault detection has become an important issue. Although model-based fault detection theory has been well studied in the past decades, its application to large-scale industrial processes is limited because it is difficult to build accurate models. Furthermore, motivated by the limitations of existing data-driven FD methods, novel canonical correlation analysis (CCA) and projection-based methods are proposed from the perspectives of process input and output data, less engineering effort and wide application scope. For performance evaluation of FD methods, a new index is also developed. Contents A New Index for Performance Evaluation of FD Methods CCA-based FD Method for the Monitoring of Stationary Processes Projection-based FD Method for the Monitoring of Dynamic Processes Benchmark Study and Real-Time Implementat...
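As a rough illustration of the CCA ingredient (a generic sample-CCA sketch, not the book's FD algorithm), assuming NumPy:

```python
import numpy as np

def canonical_correlations(u, y):
    """Sample canonical correlations between an input block u and an
    output block y (rows = samples). In CCA-based fault detection,
    directions with weak canonical correlation define a residual whose
    statistics are monitored for change."""
    u = np.asarray(u, float) - np.mean(u, axis=0)
    y = np.asarray(y, float) - np.mean(y, axis=0)
    n = len(u)

    def inv_sqrt(m):
        # symmetric inverse square root via eigendecomposition
        w, v = np.linalg.eigh(m)
        return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

    suu = u.T @ u / (n - 1)
    syy = y.T @ y / (n - 1)
    suy = u.T @ y / (n - 1)
    # whiten each block, then take singular values of the cross-covariance
    k = inv_sqrt(suu) @ suy @ inv_sqrt(syy)
    return np.linalg.svd(k, compute_uv=False)
```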
International Nuclear Information System (INIS)
Fukuda, Yoshiyuki; Schrod, Nikolas; Schaffer, Miroslava; Feng, Li Rebekah; Baumeister, Wolfgang; Lucic, Vladan
2014-01-01
Correlative microscopy allows imaging of the same feature over multiple length scales, combining light microscopy with high resolution information provided by electron microscopy. We demonstrate two procedures for coordinate transformation based correlative microscopy of vitrified biological samples applicable to different imaging modes. The first procedure aims at navigating cryo-electron tomography to cellular regions identified by fluorescent labels. The second procedure, allowing navigation of focused ion beam milling to fluorescently labeled molecules, is based on the introduction of an intermediate scanning electron microscopy imaging step to overcome the large difference between cryo-light microscopy and focused ion beam imaging modes. These methods make it possible to image fluorescently labeled macromolecular complexes in their natural environments by cryo-electron tomography, while minimizing exposure to the electron beam during the search for features of interest. - Highlights: • Correlative light microscopy and focused ion beam milling of vitrified samples. • Coordinate transformation based cryo-correlative method. • Improved correlative light microscopy and cryo-electron tomography
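A minimal sketch of the coordinate-transformation step (a least-squares similarity fit between fiducial coordinates, assuming NumPy; the actual transforms used in such procedures may include more degrees of freedom):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping fiducial points `src` onto `dst`
    (n x 2 arrays), Umeyama-style closed form:
    dst ~= s * src @ R.T + t."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # avoid reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = (sig * np.diag(D)).sum() / (A ** 2).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```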
Review of Vortex Methods for Simulation of Vortex Breakdown
National Research Council Canada - National Science Library
Levinski, Oleg
2001-01-01
The aim of this work is to identify current developments in the field of vortex breakdown modelling in order to initiate the development of a numerical model for the simulation of F/A-18 empennage buffet...
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin; Heister, Timo; Bangerth, Wolfgang
2012-01-01
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related
New methods for simulation of fractional Brownian motion
International Nuclear Information System (INIS)
Yin, Z.M.
1996-01-01
We present new algorithms for simulation of fractional Brownian motion (fBm) which comprises a set of important random functions widely used in geophysical and physical modeling, fractal image (landscape) simulating, and signal processing. The new algorithms, which are both accurate and efficient, allow us to generate not only a one-dimensional fBm process, but also two- and three-dimensional fBm fields. 23 refs., 3 figs
NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.
Energy Technology Data Exchange (ETDEWEB)
LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.
2005-09-12
Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulation presented here is SIMBAD, which can be run standalone or as part of the UAL (Unified Accelerator Libraries) package.
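As a toy illustration of the Laplace-Poisson solve with conducting walls (a 1-D Dirichlet problem; the solvers described above are multi-dimensional PIC field solvers), assuming NumPy:

```python
import numpy as np

def poisson_dirichlet_1d(rho, h):
    """Solve -phi'' = rho on a uniform grid with grounded walls
    (phi = 0 at both ends) by second-order finite differences.
    `rho` holds the charge density at the interior grid points,
    `h` is the grid spacing."""
    n = len(rho)
    # standard tridiagonal Laplacian with Dirichlet boundaries
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    return np.linalg.solve(A, np.asarray(rho, float))
```

For a uniform charge density on [0, 1] the exact potential is phi(x) = x(1-x)/2, which the second-order scheme reproduces to machine precision.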
On Partitioned Simulation of Electrical Circuits using Dynamic Iteration Methods
Ebert, Falk
2008-01-01
This thesis investigates the partitioned simulation of electrical circuits, a technique in which different parts of a circuit are treated numerically in different ways in order to obtain a simulation of the overall circuit. Particular attention is paid to two points. First, all analytical results should admit a graph-theoretic interpretation. This requirement stems from the fact that circuit equations ...
Sakamoto, Shinichi; Otsuru, Toru
2014-01-01
This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.
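A minimal sketch of the first of those methods, a 1-D finite-difference time-domain scheme for linear acoustics (assuming NumPy; parameter values are illustrative):

```python
import numpy as np

def fdtd_1d(nx=400, nt=500, c=343.0, dx=0.01, cfl=0.95):
    """1-D FDTD scheme for linear acoustics: pressure p and particle
    velocity u leapfrog on a staggered grid. Rigid (u = 0) ends;
    stable for cfl <= 1. Returns the pressure field after nt steps."""
    dt = cfl * dx / c
    rho0 = 1.2                         # air density, kg/m^3
    K = rho0 * c ** 2                  # bulk modulus
    # Gaussian pressure pulse in the middle of the domain
    p = np.exp(-0.5 * ((np.arange(nx) - nx // 2) / 8.0) ** 2)
    u = np.zeros(nx + 1)               # u[i] sits between p[i-1] and p[i]
    for _ in range(nt):
        # momentum equation: du/dt = -(1/rho0) dp/dx
        u[1:-1] -= (dt / (rho0 * dx)) * (p[1:] - p[:-1])
        # continuity equation: dp/dt = -K du/dx
        p -= (dt * K / dx) * (u[1:] - u[:-1])
    return p
```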
Simulation-based computation of the workload correlation function in a Lévy-driven queue
Glynn, P.W.; Mandjes, M.
2011-01-01
In this paper we consider a single-server queue with Lévy input, and, in particular, its workload process (Q_t)_{t≥0}, focusing on its correlation structure. With the correlation function defined as r(t) := Cov(Q_0, Q_t)/Var Q_0 (assuming that the workload process is in stationarity at time 0), we first
Simulation-based computation of the workload correlation function in a Levy-driven queue
P. Glynn; M.R.H. Mandjes (Michel)
2009-01-01
In this paper we consider a single-server queue with Levy input, and in particular its workload process (Q_t), focusing on its correlation structure. With the correlation function defined as r(t) := Cov(Q_0, Q_t)/Var Q_0 (assuming the workload process is in stationarity at time 0), we
Simulation-based computation of the workload correlation function in a Lévy-driven queue
P. Glynn; M.R.H. Mandjes (Michel)
2010-01-01
In this paper we consider a single-server queue with Levy input, and in particular its workload process (Q_t), focusing on its correlation structure. With the correlation function defined as r(t) := Cov(Q_0, Q_t)/Var(Q_0) (assuming the workload process is in stationarity at time 0), we
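A naive time-average estimate of r(t) for a compound-Poisson-input queue can be sketched as follows (assuming NumPy; this is a plain Lindley-recursion sketch, not the estimator developed in these papers):

```python
import numpy as np

def workload_correlation(max_lag, lam=0.5, mu=1.0, dt=0.01,
                         n=200_000, rng=None):
    """Monte Carlo estimate of r(k*dt) = Cov(Q_0, Q_{k dt})/Var(Q_0)
    for a reflected compound-Poisson-minus-drift workload (M/M/1-type
    Levy input), via a time-discretized Lindley recursion and time
    averaging along one long, roughly stationary path."""
    rng = np.random.default_rng() if rng is None else rng
    arrivals = rng.random(n) < lam * dt        # Bernoulli approx. of Poisson
    sizes = rng.exponential(1.0 / mu, n) * arrivals
    q = np.empty(n)
    x = 0.0
    for i in range(n):                          # Lindley: reflect at zero
        x = max(0.0, x + sizes[i] - dt)         # unit drain rate
        q[i] = x
    q = q[n // 10:]                             # discard burn-in
    qc = q - q.mean()
    var = qc @ qc / len(qc)
    return np.array([(qc[:len(qc) - k] @ qc[k:]) / ((len(qc) - k) * var)
                     for k in range(max_lag + 1)])
```

By construction r(0) = 1, and the estimate decays with the lag on the relaxation timescale of the workload.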
International Nuclear Information System (INIS)
Chun, Moon Hyun; Oh, Jae Guen
1989-01-01
Ten methods of total two-phase pressure drop prediction based on five existing models and correlations have been examined for their accuracy and applicability to pressurized water reactor conditions. These methods were tested against 209 experimental data of local and bulk boiling conditions: each correlation was evaluated for different ranges of pressure, mass velocity and quality, and the best performing models were identified for each data subset. A computer code entitled 'K-TWOPD' has been developed to calculate the total two-phase pressure drop using the best performing existing correlations for a specific property range and a correction factor to compensate for the prediction error of the selected correlations. Assessment of this code shows that the present method fits all the available data within ±11% at a 95% confidence level, compared with ±25% for the existing correlations. (Author)
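The assessment step described above can be sketched generically (a hypothetical stand-in for the K-TWOPD logic, assuming NumPy): fit a multiplicative correction factor to a correlation's predictions, then report the fraction of data inside a ±11% relative-error band:

```python
import numpy as np

def correction_and_coverage(measured, predicted, band=0.11):
    """Fit a single multiplicative correction factor f (least squares,
    corrected = f * predicted) and report the fraction of points whose
    corrected prediction falls within +/-band relative error."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    f = (predicted @ measured) / (predicted @ predicted)
    rel_err = (f * predicted - measured) / measured
    coverage = float(np.mean(np.abs(rel_err) <= band))
    return f, coverage
```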
Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.
2017-12-01
This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD) by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD was used to describe the correlative relation between failure mechanisms, soft failures and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode to the whole part was defined, and a calculation and analysis model for reliability loss was presented. According to the reliability loss, the reliability index value of the whole part was allocated to each failure mode. On the basis of the deployment of reliability index values, the inverse reliability method was employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method were illustrated by a development case of a machining centre's transmission system.
International Nuclear Information System (INIS)
Basovets, S.K.; Krupyanskij, Yu.F.; Kurinov, I.V.; Suzdalev, I.P.; Goldanskij, V.I.; Uporov, I.V.; Shaitan, K.V.; Rubin, A.B.
1988-01-01
A method of Moessbauer Fourier spectroscopy is developed to determine the correlation function of coordinates of a macromolecular system. The method does not require the use of an a priori dynamic model. The application of the method to the analysis of RSMR data for human serum albumin has demonstrated considerable changes in the dynamic behavior of the protein globule when the temperature is changed from 270 to 310 K. The main conclusion of the present work is the simultaneous observation of low-frequency (τ ≥ 10^-9 sec) and high-frequency (τ < 10^-9 sec) large-scale motions, that is, a two-humped distribution of correlation times of protein motions. (orig.)
Hamanaka, Ryo; Yamaoka, Satoshi; Anh, Tuan Nguyen; Tominaga, Jun-Ya; Koga, Yoshiyuki; Yoshida, Noriaki
2017-11-01
Although many attempts have been made to simulate orthodontic tooth movement using the finite element method, most were limited to analyses of the initial displacement in the periodontal ligament and were insufficient to evaluate the effect of orthodontic appliances on long-term tooth movement. Numeric simulation of long-term tooth movement was performed in some studies; however, neither the play between the brackets and archwire nor the interproximal contact forces were considered. The objectives of this study were to simulate long-term orthodontic tooth movement with the edgewise appliance by incorporating those contact conditions into the finite element model and to determine the force system when the space is closed with sliding mechanics. We constructed a 3-dimensional model of maxillary dentition with 0.022-in brackets and 0.019 × 0.025-in archwire. Forces of 100 cN simulating sliding mechanics were applied. The simulation was accomplished on the assumption that bone remodeling correlates with the initial tooth displacement. This method could successfully represent the changes in the moment-to-force ratio and thus the tooth movement pattern during space closure. We developed a novel method that could simulate long-term orthodontic tooth movement and accurately determine the force system over time by incorporating contact boundary conditions into finite element analysis. It was also suggested that friction progressively increases during space closure in sliding mechanics. Copyright © 2017. Published by Elsevier Inc.
Kim, Jaewook; Lee, W.-J.; Jhang, Hogun; Kaang, H. H.; Ghim, Y.-C.
2017-10-01
Stochastic magnetic fields are thought to be one of the possible mechanisms for anomalous transport of density, momentum and heat across the magnetic field lines. The Kubo number and the Chirikov parameter quantify the stochasticity, and previous studies show that perpendicular transport strongly depends on the magnetic Kubo number (MKN). If the MKN is smaller than one, the diffusion process follows the Rechester-Rosenbluth model; whereas if it is larger than one, percolation theory dominates the diffusion process. Thus, estimation of the Kubo number plays an important role in understanding the diffusion process caused by stochastic magnetic fields. However, spatially localized experimental measurement of fluctuating magnetic fields in a tokamak is difficult, and we attempt to estimate MKNs using BOUT++ simulation data with pedestal collapse. In addition, we calculate correlation lengths of fluctuating pressures and Chirikov parameters to investigate the variation of correlation lengths in the simulation. We then discuss how one may experimentally estimate MKNs.
International Nuclear Information System (INIS)
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann Wunshain
2009-01-01
The number of negatively charged nitrogen-vacancy centers (N-V)⁻ in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking account of the random dipole orientation of the multiple (N-V)⁻ fluorophores and simulating the probability distribution of their effective numbers (N_e), we found that the actual number (N_a) of the fluorophores is in linear correlation with N_e, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N_a = 8±1 for 28 nm FND particles prepared by 3 MeV proton irradiation
Mathematical correlation of modal-parameter-identification methods via system-realization theory
Juang, Jer-Nan
1987-01-01
A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.
Mathematical correlation of modal parameter identification methods via system realization theory
Juang, J. N.
1986-01-01
A unified approach is introduced using system realization theory to derive and correlate modal parameter identification methods for flexible structures. Several different time-domain and frequency-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research towards the unification of the many possible approaches for modal parameter identification.
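The system-realization machinery underlying both reports is the Eigensystem Realization Algorithm; a minimal SISO sketch follows (assuming NumPy; the reports treat the general multi-input multi-output case):

```python
import numpy as np

def era_modes(markov, n_modes, dt):
    """Eigensystem Realization Algorithm (sketch): identify natural
    frequencies (rad/s) and damping ratios from sampled impulse-response
    (Markov) parameters of a single-input single-output structure."""
    markov = np.asarray(markov, float)
    r = len(markov) // 2
    H0 = np.array([markov[i:i + r] for i in range(r)])           # Hankel
    H1 = np.array([markov[i + 1:i + r + 1] for i in range(r)])   # shifted
    U, s, Vt = np.linalg.svd(H0)
    k = 2 * n_modes                      # retained order (complex pairs)
    sqrt_inv = np.diag(1.0 / np.sqrt(s[:k]))
    A = sqrt_inv @ U[:, :k].T @ H1 @ Vt[:k].T @ sqrt_inv  # realized A
    lam = np.log(np.linalg.eigvals(A)) / dt  # continuous-time eigenvalues
    freqs = np.abs(lam)
    zetas = -lam.real / np.abs(lam)
    order = np.argsort(freqs)
    return freqs[order], zetas[order]
```

Applied to the sampled impulse response of a single damped oscillator, the realized eigenvalues recover its natural frequency and damping ratio.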
Numerical Simulation of Shear Slitting Process of Grain Oriented Silicon Steel using SPH Method
Directory of Open Access Journals (Sweden)
Bohdal Łukasz
2017-12-01
Full Text Available Mechanical cutting allows separating of sheet material at low cost and therefore remains the most popular way to produce laminations for electrical machines and transformers. However, recent investigations revealed the deteriorating effect of cutting on the magnetic properties of the material close to the cut edge. The deformations generate elastic stresses in zones adjacent to the plastically deformed area and strongly affect the magnetic properties. Knowledge of the residual stresses is necessary in designing the process. This paper presents a new approach to modeling the residual stresses induced in shear slitting of grain oriented electrical steel using a mesh-free method. The application of the SPH (Smoothed Particle Hydrodynamics) methodology to the simulation and analysis of the 3D shear slitting process is presented. In experimental studies, an advanced vision-based technology based on digital image correlation (DIC) for monitoring the cutting process is used.
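The paper's SPH formulation is not reproduced here, but the standard 3-D cubic spline smoothing kernel at the heart of most SPH codes can be sketched as follows (assuming NumPy):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 3-D cubic spline SPH smoothing kernel W(r, h)
    (Monaghan form) with compact support r < 2h, normalized so that
    it integrates to one over R^3."""
    q = np.asarray(r, float) / h
    sigma = 1.0 / (np.pi * h ** 3)           # 3-D normalization constant
    w = np.where(q < 1.0,
                 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w
```

Unit normalization can be checked numerically by integrating 4*pi*r^2*W(r, h) over the support.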
2002-07-01
The purpose of the work is to validate the safety assessment methodology previously developed for passenger rail vehicle dynamics, which requires the application of simulation tools as well as testing of vehicles under different track scenarios. This...
Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design
Ang, Chee Siang; Zaphiris, Panayiotis
We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc.) could potentially influence the characteristics of the social networks.
The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method
Directory of Open Access Journals (Sweden)
Dipakkumar Gohil
2012-06-01
Full Text Available The objective of this research work is to study the variation of various parameters such as stress, strain, temperature, force, etc. during the closed die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm which would be used to simulate and analyze the closed die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps appropriately, and the output values have been computed at each deformation step. The results of the simulation have been graphically represented and suitable corrective measures are also recommended if the simulation results do not agree with the theoretical values. This computer simulation approach would significantly improve productivity and reduce the energy consumption of the overall process for components manufactured by the closed die forging process, and contribute towards efforts to reduce global warming.
High viscosity fluid simulation using particle-based method
Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua
2011-01-01
the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn
Lackey, J.; Hadfield, C.
1992-01-01
Recent mishaps and incidents on Class IV aircraft have shown a need for establishing quantitative longitudinal high angle of attack (AOA) pitch control margin design guidelines for future aircraft. NASA Langley Research Center has conducted a series of simulation tests to define these design guidelines. Flight test results have confirmed the simulation studies in that pilot ratings of high-AOA nose-down recoveries were based on the short-term response interval in the forms of pitch acceleration and rate.
Vargas, Carlos; Sierra, Juan; Posada, Juan; Botero-Cadavid, Juan F.
2017-01-01
ABSTRACT The injection molding process is the most widely used processing technique for polymers. The analysis of residual stresses generated during this process is crucial for the part quality assessment. The present study evaluates the residual stresses in a tensile strength specimen using the simulation software Moldex3D for two polymers, polypropylene and polycarbonate. The residual stresses obtained under a simulated design of experiment were modeled using a robust multivariable regressi...
Energy Technology Data Exchange (ETDEWEB)
Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L., E-mail: prii.ramos@gmail.com, E-mail: camunita@ipen.br, E-mail: alapolli@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)
2017-07-01
The literature presents many methods for partitioning a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different patterns of grouping and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. Accordingly, the objective of this work is to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient; comparing these values, the average linkage method obtained the best results. A script of the statistical program R with some functions was created to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method to be used on the data base. (author)
International Nuclear Information System (INIS)
Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L.
2017-01-01
The literature presents many methods for partitioning a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different patterns of grouping and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. Accordingly, the objective of this work is to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient; comparing these values, the average linkage method obtained the best results. A script of the statistical program R with some functions was created to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method to be used on the data base. (author)
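The comparison described above can be reproduced generically with SciPy's hierarchical clustering tools (a sketch of the validation step, not the authors' R script):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

def best_linkage(data, methods=("single", "complete", "average",
                                "centroid", "ward")):
    """Compare agglomerative linkage methods by cophenetic correlation
    (the validation criterion used in the study) and return the method
    with the highest value, together with all scores."""
    d = pdist(data)                     # condensed Euclidean distances
    scores = {}
    for m in methods:
        z = linkage(data, method=m)     # raw observations, as centroid/
        c, _ = cophenet(z, d)           # ward require Euclidean input
        scores[m] = c
    best = max(scores, key=scores.get)
    return best, scores
```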
A Method for Correlation of Gravestone Weathering and Air Quality (SO2), West Midlands, UK
Carlson, Michael John
From the beginning of the Industrial Revolution through the environmental revolution of the 1970s, Britain suffered the effects of poor air quality, primarily from particulate matter and acid in the form of NOx and SOx compounds. Air quality stations across the region recorded SO2 beginning in the 1960s; however, direct measurement of air quality prior to 1960 is lacking and only anecdotal notations exist. Proxy records including lung tissue samples, particulates in sediment cores, lake acidification studies and gravestone weathering have all been used to reconstruct the history of air quality. A 120-year record of acid deposition reconstructed from lead-lettered marble gravestone weathering, combined with SO2 measurements from the air monitoring network across the West Midlands, UK region beginning in the 1960s, forms the framework for this study. The study seeks to establish a spatial and temporal correlation between gravestone weathering and measured SO2. Successful correlation of the dataset from the 1960s to the 2000s would allow a paleo-air quality record to be generated from the 120-year record of gravestone weathering. Decadal gravestone weathering rates can be estimated by non-linear regression analysis of stone loss at individual cemeteries. Gravestone weathering rates are interpolated across the region through Empirical Bayesian Kriging (EBK) methods performed in ArcGIS and through a land use based approach built on digitized maps of land use. Both methods of interpolation allow a direct correlation of gravestone weathering and measured SO2 to be made. Decadal scale correlations of gravestone weathering rates and measured SO2 are very weak or non-existent for both EBK and the land use based approach. Decadal results combined on a larger scale for each respective method display a better visual correlation. However, the relative clustering of data at lower SO2 concentrations and the lack of data at higher SO2 concentrations make the
Tibi, R.; Young, C. J.; Gonzales, A.; Ballard, S.; Encarnacao, A. V.
2016-12-01
The matched filtering technique involving the cross-correlation of a waveform of interest with archived signals from a template library has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive, and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this study, we introduce an Approximate Nearest Neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation without requiring a complex distributed computing system. Our method begins with a projection into a reduced dimensionality space, based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors is then accomplished using randomized K-dimensional trees. We used the approach to search for matches to each of 2700 analyst-reviewed signal detections reported for May 2010 for the IMS station MKAR. The template library in this case consists of a dataset of more than 200,000 analyst-reviewed signal detections for the same station from 2002-2014 (excluding May 2010). Of these signal detections, 60% are teleseismic first P, and 15% are regional phases (Pn, Pg, Sn, and Lg). The analyses, performed on a standard desktop computer, show that the proposed approach searches the large template library about 20 times faster than a standard full linear search, while achieving recall rates greater than 80%, with the recall rate increasing for higher correlation values. To decide whether to confirm a match, we use a hybrid method involving a cluster approach for queries with two or more matches and the correlation score for single matches. Of the signal detections that passed our confirmation process, 52% were teleseismic first P, and 30% were regional phases.
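The two-stage pipeline described above can be sketched compactly: embed each waveform as its vector of correlations with a small random pivot subset, then do the neighbor search in that low-dimensional space. Everything below is an illustration, with a random Gaussian "template library" standing in for real waveforms and a single k-d tree in place of the paper's randomized trees:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def ncc(a, b):
    # Zero-lag normalized cross-correlation (Pearson correlation of the traces).
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b))

# Hypothetical template library and a noisy query trace.
templates = rng.normal(size=(500, 128))
query = templates[42] + 0.1 * rng.normal(size=128)

# Stage 1: project every template into a reduced space of correlations
# with a random subset of the library.
pivots = templates[rng.choice(len(templates), size=16, replace=False)]
embed = lambda w: np.array([ncc(w, p) for p in pivots])
lib = np.array([embed(t) for t in templates])

# Stage 2: approximate nearest-neighbor search in the embedded space.
tree = cKDTree(lib)
_, idx = tree.query(embed(query), k=5)
print(sorted(idx))  # the true match (index 42) should appear among the neighbors
```

Only the k candidates returned by the tree need full correlation against the query, which is where the reported ~20x speedup over a linear scan comes from.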
The null-event method in computer simulation
International Nuclear Information System (INIS)
Lin, S.L.
1978-01-01
The simulation of collisions of ions moving under the influence of an external field through a neutral gas at non-zero temperatures is discussed as an example of computer models of processes in which a probe particle undergoes a series of interactions with an ensemble of other particles, such that the frequency and outcome of the events depend on internal properties of the second particles. The introduction of null events removes the need for much complicated algebra, leads to a more efficient simulation and reduces the likelihood of logical error. (Auth.)
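The null-event idea can be illustrated with a thinning sketch: candidate collisions are drawn at a constant majorant rate, and rejected candidates are simply "null events" in which nothing happens, so no state-dependent time sampling is needed. The sinusoidal collision frequency below is a made-up example, not a physical model from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_collision_times(rate_fn, rate_max, t_end):
    """Draw collision times on [0, t_end] for a state-dependent rate using
    the null-event (thinning) trick: sample candidates at the constant
    majorant rate, then accept each with probability rate/rate_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)   # candidate event time
        if t > t_end:
            return events
        if rng.random() < rate_fn(t) / rate_max:
            events.append(t)                   # real collision
        # otherwise: a null event -- nothing happens, no branching logic needed

rate = lambda t: 2.0 + np.sin(t)               # hypothetical collision frequency <= 3
times = sample_collision_times(rate, rate_max=3.0, t_end=1000.0)
print(round(len(times) / 1000.0, 2))           # should sit near the mean rate, 2.0
```

The accepted events are statistically identical to those of the exact state-dependent process, which is why the method trades a little wasted sampling for much simpler logic.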
International Nuclear Information System (INIS)
Watanabe, Yuichi; Oikawa, Tetsukuni; Muramatsu, Ken
2003-01-01
This paper presents a new calculation method for considering the effect of correlation of component failures in seismic probabilistic safety assessment (PSA) of nuclear power plants (NPPs) by direct quantification of the fault tree (FT) using Monte Carlo simulation (DQFM), and discusses the effect of correlation on core damage frequency (CDF). In the DQFM method, the occurrence probability of a top event is calculated as follows: (1) The response and capacity of each component are generated according to their probability distributions; in this step, the responses and capacities can be made correlated according to a set of arbitrarily given correlation data. (2) For each component, whether the component has failed or not is judged by comparing the response with the capacity. (3) The status of each component, failure or success, is assigned as either TRUE or FALSE in a truth table, which represents the logical structure of the FT, to judge the occurrence of the top event. After this trial is iterated a sufficient number of times, the occurrence probability of the top event is obtained as the ratio of the number of occurrences of the top event to the total number of iterations. The DQFM method has the following features compared with the minimal cut set (MCS) method used in the well-known Seismic Safety Margins Research Program (SSMRP). While the MCS method gives the upper bound approximation for the occurrence probability of a union of MCSs, the DQFM method gives more exact results than the upper bound approximation. Further, the DQFM method considers the effect of correlation on both the union and the intersection of component failures, while the MCS method considers only the effect on the latter. The importance of these features in seismic PSA of NPPs is demonstrated by an example calculation and a calculation of CDF in a seismic PSA. The effect of correlation on CDF was evaluated by the DQFM method and was compared with that evaluated in the application study of the SSMRP methodology. In the application
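The three-step loop above can be sketched in a few lines. This is a minimal illustration, not the paper's model: a hypothetical two-component AND gate, lognormal-style response/capacity, and invented parameter values, with the response correlation rho showing the effect the abstract discusses:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# Step 1: generate correlated responses and (here uncorrelated) capacities
# for two hypothetical components; rho couples the log-responses.
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
ln_resp = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=N)
ln_cap = rng.normal(loc=1.0, scale=0.3, size=(N, 2))

# Step 2: a component fails when its response exceeds its capacity.
failed = ln_resp > ln_cap

# Step 3: evaluate the fault-tree logic per trial (a simple AND gate here)
# and estimate the top-event probability over all iterations.
top = failed[:, 0] & failed[:, 1]
print(round(top.mean(), 4))
```

With positively correlated responses, the estimated top-event probability for the AND gate lies well above the product of the marginal failure probabilities (the independent-failure value), which is exactly the intersection effect the MCS comparison in the abstract is about.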
A hybrid measure-correlate-predict method for long-term wind condition assessment
International Nuclear Information System (INIS)
Zhang, Jie; Chowdhury, Souma; Messac, Achille; Hodge, Bri-Mathias
2014-01-01
Highlights: • A hybrid measure-correlate-predict (MCP) methodology with greater accuracy is developed. • Three sets of performance metrics are proposed to evaluate the hybrid MCP method. • Both wind speed and direction are considered in the hybrid MCP method. • The best combination of MCP algorithms is determined. • The developed hybrid MCP method is uniquely helpful for long-term wind resource assessment. - Abstract: This paper develops a hybrid measure-correlate-predict (MCP) strategy to assess long-term wind resource variations at a farm site. The hybrid MCP method uses recorded data from multiple reference stations to estimate long-term wind conditions at a target wind plant site with greater accuracy than is possible with data from a single reference station. The weight of each reference station in the hybrid strategy is determined by (i) the distance and (ii) the elevation differences between the target farm site and each reference station. The wind data are divided into sectors according to wind direction, and the MCP strategy is implemented for each wind direction sector separately. The applicability of the proposed hybrid strategy is investigated using five MCP methods: (i) linear regression; (ii) variance ratio; (iii) Weibull scale; (iv) artificial neural networks; and (v) support vector regression. To implement the hybrid MCP methodology, we use hourly averaged wind data recorded at five stations in the state of Minnesota between 07-01-1996 and 06-30-2004. Three sets of performance metrics are used to evaluate the hybrid MCP method. The first set of metrics analyzes the statistical performance, including the mean wind speed, wind speed variance, root mean square error, and mean absolute error. The second set of metrics evaluates the distribution of long-term wind speed; to this end, the Weibull distribution and the Multivariate and Multimodal Wind Distribution models are adopted. The third set of metrics analyzes
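A toy version of the hybrid weighting can be sketched as follows. All data, station distances, and the choice of ordinary linear regression as the per-station MCP model are assumptions for illustration (elevation terms and direction sectors are omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical concurrent wind-speed records: target site and 3 reference stations.
n = 2000
refs = rng.weibull(2.0, size=(n, 3)) * 8.0
target = 0.5 * refs[:, 0] + 0.3 * refs[:, 1] + 0.2 * refs[:, 2] + rng.normal(0, 0.3, n)

# Per-station MCP: ordinary linear regression target ~ ref_i.
coefs = [np.polyfit(refs[:, i], target, 1) for i in range(3)]

# Hybrid prediction: weight stations by inverse distance to the target site.
dist = np.array([10.0, 25.0, 40.0])            # km, assumed
w = (1.0 / dist) / (1.0 / dist).sum()
pred = sum(w[i] * np.polyval(coefs[i], refs[:, i]) for i in range(3))

rmse = np.sqrt(np.mean((pred - target) ** 2))
print(rmse < target.std())                     # hybrid beats the climatology baseline
```

In the long-term application, the regressions are fitted on the short concurrent period and then driven by the references' long historical records to reconstruct the target site's wind climate.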
Adaptive Multiscale Finite Element Method for Subsurface Flow Simulation
Van Esch, J.M.
2010-01-01
Natural geological formations generally show multiscale structural and functional heterogeneity evolving over many orders of magnitude in space and time. In subsurface hydrological simulations the geological model focuses on the structural hierarchy of physical sub units and the flow model addresses
Crop canopy BRDF simulation and analysis using Monte Carlo method
Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.
2006-01-01
The authors design the random interaction process between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of a crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued by the sign problem. This is due to the large number of anti-symmetric free-fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are built in directly and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
Directory of Open Access Journals (Sweden)
E Ghasemikhah
2012-03-01
Full Text Available This study investigated the electronic properties of the antiferromagnetic metal UBi2 by using ab initio calculations based on density functional theory (DFT), employing the augmented plane waves plus local orbitals method. We used the exact exchange for correlated electrons (EECE) method to calculate the exchange-correlation energy under a variety of hybrid functionals. Electric field gradients (EFGs) at the uranium site in the UBi2 compound were calculated and compared with experiment. The EFGs at the U site had been predicted experimentally to be very small in this compound, and the EFGs calculated with the EECE functional are in agreement with experiment. The densities of states (DOSs) show that the U 5f orbital is hybridized with the other orbitals. The plotted Fermi surfaces show that there are two kinds of charges on the Fermi surface of this compound.
Giger, Maryellen L.; Chen, Chin-Tu; Armato, Samuel; Doi, Kunio
1999-10-26
A method and system for the computerized registration of radionuclide images with radiographic images, including generating image data from radiographic and radionuclide images of the thorax. Techniques include contouring the lung regions in each type of chest image, scaling and registration of the contours based on location of lung apices, and superimposition after appropriate shifting of the images. Specific applications are given for the automated registration of radionuclide lung scans with chest radiographs. The method in the example given yields a system that spatially registers and correlates digitized chest radiographs with V/Q scans in order to correlate V/Q functional information with the greater structural detail of chest radiographs. Final output could be the computer-determined contours from each type of image superimposed on any of the original images, or superimposition of the radionuclide image data, which contains high activity, onto the radiographic chest image.
A Ten-Step Design Method for Simulation Games in Logistics Management
Fumarola, M.; Van Staalduinen, J.P.; Verbraeck, A.
2011-01-01
Simulation games have often been found useful as a method of inquiry to gain insight into complex system behavior and as aids for design, engineering simulation and visualization, and education. The design of a simulation game is the result of creative thinking and planning, but often not the result of a
Energy Technology Data Exchange (ETDEWEB)
Anjos, Roselaine M. dos; Maitelli, Carla Wilza S.P.; Maitelli, Andre L. [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil); Costa, Rutacio O. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)
2012-07-01
Electrical Submersible Pumping (ESP) is an artificial lift method which can be used both onshore and offshore for the production of high liquid flow rates. Using the computational simulator for ESP systems developed by AUTOPOC/LAUT - UFRN, this work aimed to evaluate empirical correlations for the calculation of multiphase flow in tubing typical of artificial lift systems operating by ESP. The parameters used for evaluating the correlations are some of the dynamic variables of the system, such as the head, which indicates the lifting capacity of the system, the flow rate of fluid in the pump, and the discharge pressure at the pump. Five (5) correlations were evaluated: one considers slip between the phases but does not take flow patterns into account, while the other four consider both slip between the phases and flow patterns. The simulation results obtained for all these correlations were compared to results from a commercial computational simulator extensively used in the oil industry. For both simulators, input values and simulation times were virtually the same. The results showed that the simulator used in this work had satisfactory performance, since there were no significant differences from the results obtained with the commercial simulator. (author)
Analytic methods for the Percus-Yevick hard sphere correlation functions
Directory of Open Access Journals (Sweden)
D. Henderson
2009-01-01
Full Text Available The Percus-Yevick theory for hard spheres provides simple accurate expressions for the correlation functions that have proven exceptionally useful. A summary of the author's lecture notes concerning three methods of obtaining these functions is presented. These notes are original only in part. However, they contain some helpful steps and simplifications. The purpose of this paper is to make these notes more widely available.
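For reference, the well-known PY closed form for the hard-sphere direct correlation function inside the core can be coded directly; the packing fractions used below are arbitrary test values:

```python
import numpy as np

def c_py(x, eta):
    """Percus-Yevick direct correlation function for hard spheres.
    x = r/sigma, eta = packing fraction; c(x) = 0 outside the core (x > 1)."""
    lam1 = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
    lam2 = -(1 + eta / 2) ** 2 / (1 - eta) ** 4
    inside = -lam1 - 6 * eta * lam2 * x - 0.5 * eta * lam1 * x ** 3
    return np.where(x <= 1.0, inside, 0.0)

# Low-density limit: c(r) reduces to the Mayer function, -1 inside the core.
print(float(c_py(0.5, 0.0)))   # -1.0
# Contact value at the origin grows rapidly with density.
print(round(float(c_py(0.0, 0.3)), 3))
```

The same lambda coefficients determine the PY compressibility and virial equations of state, which is why this polynomial form is the usual starting point of the lecture-note derivations.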
Sulaimon, Shodiya; Nasution, Henry; Aziz, Azhar Abdul; Abdul-Rahman, Abdul-Halim; Darus, Amer N
2014-01-01
The capillary tube is an important control device used in small vapor compression refrigeration systems such as window air-conditioners, household refrigerators and freezers. This paper develops a non-dimensional correlation based on the test results of the adiabatic capillary tube for the mass flow rate through the tube using a hydrocarbon refrigerant mixture of 89.3% propane and 10.7% butane (HCM). The Taguchi method, a statistical experimental design approach, was employed. This approach e...
Linear-scaling explicitly correlated treatment of solids: Periodic local MP2-F12 method
Energy Technology Data Exchange (ETDEWEB)
Usvyat, Denis, E-mail: denis.usvyat@chemie.uni-regensburg.de [Institute of Physical and Theoretical Chemistry, University of Regensburg, Universitätsstraße 31, D-93040 Regensburg (Germany)
2013-11-21
Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.
DEFF Research Database (Denmark)
Cheng, Hongyuan; Kontogeorgis, Georgios; Stenby, Erling Halfdan
2005-01-01
), the bioconcentration factor (BCF), and the toxicity. Kow values of alcohol ethoxylates are difficult to measure. Existing methods such as those in commercial software like ACD,ClogP and KowWin have not been applied to surfactants, and they fail for heavy alcohol ethoxylates (alkyl carbon numbers above 12). Thus...... and toxicity of alcohol ethoxylates are correlated with their Kow. The proposed approach can be extended to other families of nonionic surfactants....
The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.
Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin
2016-09-10
A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method can contribute to a low-cost, convenient and safe means of recharging implantable biosensors.
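The layer-by-layer photon bookkeeping behind such a simulation can be sketched as an absorption-only Monte Carlo. The coefficients and thicknesses below are placeholders, not the paper's skin parameters, and scattering is ignored in this pencil-beam sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical absorption coefficients (1/mm) and thicknesses (mm)
# for three skin layers.
mu_a = [0.9, 0.3, 0.1]
thick = [0.1, 1.5, 3.0]

n = 100_000
absorbed = [0, 0, 0]
transmitted = 0
for _ in range(n):
    for i, (mu, t) in enumerate(zip(mu_a, thick)):
        step = rng.exponential(1.0 / mu)      # free path inside this layer
        if step < t:
            absorbed[i] += 1                  # photon energy deposited in layer i
            break
    else:
        transmitted += 1                      # photon passes through all layers

fractions = [a / n for a in absorbed]
print([round(f, 3) for f in fractions], round(transmitted / n, 3))
```

Each per-layer fraction should converge to the Beer-Lambert value exp(-sum of optical depths above) times (1 - exp(-this layer's optical depth)), which makes the sketch easy to validate analytically.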
Ying, Yingzi; Bean, Christopher J.
2014-05-01
Ocean-generated microseisms are faint Earth tremors associated with the interaction between ocean water waves and the solid Earth. The microseism noise recorded as low-frequency ground vibrations by seismometers contains significant information about the Earth's interior and sea states. In this work, we first aim to investigate the forward propagation of microseisms in a deep-ocean environment. We employ a 3D North-East Atlantic geological model and simulate wave propagation in a coupled fluid-solid domain, using a spectral-element method. The aim is to investigate the effects of the continental shelf on microseism wave propagation. A second goal of this work is to perform noise simulations to calculate synthetic ensemble-averaged cross-correlations of microseism noise signals with a time-reversal method. The algorithm relieves computational cost by avoiding time stacking, and obtains cross-correlations between the designated master station and all the remaining slave stations at one time. The origins of microseisms are non-uniform, so we also test the effect of the simulated noise-source distribution on the resulting cross-correlations.
Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods
Frota, Oleci Pereira; Ferreira, Adriano Menis; Guerra, Odanir Garcia; Rigotti, Marcelo Alessandro; Andrade, Denise de; Borges, Najla Moreira Amaral; Almeida, Margarete Teresa Gottardo de
2017-01-01
ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity an...
The Diagnosis of Internal Leakage of Control Valve Based on the Grey Correlation Analysis Method
Directory of Open Access Journals (Sweden)
Zheng DING
2014-07-01
Full Text Available The valve plays an important part in industrial automation systems. Whether it operates normally or not directly affects product quality, and valve faults are relatively common because of bad working conditions; internal leakage is one of the most common faults. Consequently, this paper sets up an experimental platform to put the valve in different working conditions and collect the relevant data online. The internal leakage of the valve is then diagnosed using the grey correlation analysis method. The results show that this method can not only diagnose the internal leakage of the valve accurately, but also distinguish the fault degree quantitatively.
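The grey relational (correlation) analysis at the core of such a diagnosis has a standard closed form. A minimal sketch follows, with invented normalized sensor signatures standing in for the valve data:

```python
import numpy as np

def grey_relational_grade(ref, cand, rho=0.5):
    """Grey relational grade between a reference series and a candidate
    series (both 1-D, pre-normalized); rho is the distinguishing coefficient."""
    delta = np.abs(ref - cand)
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)   # relational coefficients
    return xi.mean()                                  # grade = mean coefficient

# Hypothetical normalized signatures: a no-leak reference and two measured states.
ref = np.array([0.2, 0.5, 0.9, 0.4, 0.7])
healthy = ref + 0.02
leaking = np.array([0.6, 0.1, 0.3, 0.9, 0.2])

print(grey_relational_grade(ref, healthy) > grey_relational_grade(ref, leaking))
```

The candidate whose grade is closest to 1 is most similar to the reference condition; ranking grades against references for several leakage severities is what lets the method grade the fault quantitatively.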
International Nuclear Information System (INIS)
Pott, R.A.; Koch, W.; Leitner, L.
1986-01-01
The orientation of the easy magnetization axis of magnetic particles is a key parameter of the recording performance of magnetic recording media. Usually the orientation is measured by magnetic methods, but the applicability of Moessbauer spectroscopy has also been shown in the past. The authors show and discuss the correlations between the results obtained by magnetic and Moessbauer measurements for the example of several magnetic tapes. They demonstrate that by a combination of both methods one is even able to estimate the width of the mean canting-angle distribution of the easy axis of magnetization. (Auth.)
Buehring, B; Siglinsky, E; Krueger, D; Evans, W; Hellerstein, M; Yamada, Y; Binkley, N
2018-03-01
DXA-measured lean mass is often used to assess muscle mass but has limitations. Thus, we compared DXA lean mass with two novel methods: bioelectric impedance spectroscopy and creatine (methyl-d3) dilution. The examined methodologies did not measure lean mass similarly, and their correlation with muscle biomarkers/function varied. Muscle function tests predict adverse health outcomes better than lean mass measurement. This may reflect limitations of current mass measurement methods. Newer approaches, e.g., bioelectric impedance spectroscopy (BIS) and creatine (methyl-d3) dilution (D3-C), may more accurately assess muscle mass. We hypothesized that BIS- and D3-C-measured muscle mass would correlate better with function and bone/muscle biomarkers than DXA-measured lean mass. Evaluations of muscle/lean mass, function, and serum biomarkers were obtained in older community-dwelling adults. Mass was assessed by DXA, BIS, and orally administered D3-C. Grip strength, timed up and go, and jump power were examined. Potential muscle/bone serum biomarkers were measured. Mass measurements were compared with functional and serum data using regression analyses; differences between techniques were determined by paired t tests. Mean (SD) age of the 112 (89F/23M) participants was 80.6 (6.0) years. The lean/muscle mass assessments were correlated (.57-.88) but differed significantly between techniques. Lean mass measures were unrelated to the serum biomarkers measured. These three methodologies do not measure muscle/lean mass similarly and should not be viewed as equivalent. Functional tests assessing maximal muscle strength/power (grip strength and jump power) correlated with all mass measures, whereas gait speed did not. None of the selected serum measures correlated with mass. Efforts to optimize muscle mass assessment and identify their relationships with health outcomes are needed.
Truncated Newton-Raphson Methods for Quasicontinuum Simulations
National Research Council Canada - National Science Library
Liang, Yu; Kanapady, Ramdev; Chung, Peter W
2006-01-01
.... In this research, we report the effectiveness of the truncated Newton-Raphson method and quasi-Newton method with low-rank Hessian update strategy that are evaluated against the full Newton-Raphson...
Entropy correlation distance method. The Euro introduction effect on the Consumer Price Index
Miśkiewicz, Janusz
2010-04-01
The idea of entropy was introduced in thermodynamics, but it can be used in time series analysis. There are various ways to define and measure the entropy of a system. Here the so-called Theil index, which is often used in economics and finance, is applied as if it were an entropy measure. In this study the time series are remapped through the Theil index. Then the linear correlation coefficient between the remapped time series is evaluated as a function of time and time window size, and the corresponding statistical distance is defined. The results are compared with the usual correlation distance measure for the time series themselves. As an example, this entropy correlation distance method (ECDM) is applied to several series, such as those of the Consumer Price Index (CPI), in order to test some so-called globalisation processes. Distance matrices are calculated in order to construct two network structures, which are then analysed. The role of the two different time scales introduced by the Theil index and the correlation coefficient is also discussed. The evolution of the mean distance between the most developed countries is presented and the globalisation periods of the prices discussed. It is finally shown that the evolution of the mean distance between the most developed countries on several networks follows the process of introducing the European currency, the Euro. This is contrasted with a GDP-based analysis. It is stressed that the entropy correlation distance measure is more suitable for detecting significant changes, like a globalisation process, than the usual statistical (correlation-based) measure.
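The remap-then-correlate pipeline can be sketched as follows; the synthetic series stand in for CPI data, and the window size is an arbitrary choice:

```python
import numpy as np

def theil(x):
    """Theil entropy index of a positive series."""
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def rolling_theil(x, win):
    """Remap a series into its windowed Theil-index series."""
    return np.array([theil(x[i:i + win]) for i in range(len(x) - win + 1)])

def corr_distance(a, b):
    # Statistical distance derived from the linear correlation coefficient.
    return np.sqrt(2.0 * (1.0 - np.corrcoef(a, b)[0, 1]))

rng = np.random.default_rng(5)
# Hypothetical CPI-like series: two co-moving countries and one independent.
base = np.cumsum(rng.normal(0.2, 1.0, 300)) + 100.0
cpi_a = base + rng.normal(0, 0.5, 300)
cpi_b = base + rng.normal(0, 0.5, 300)
cpi_c = np.cumsum(rng.normal(0.2, 1.0, 300)) + 100.0

ta, tb, tc = (rolling_theil(s, 20) for s in (cpi_a, cpi_b, cpi_c))
print(round(corr_distance(ta, tb), 2), round(corr_distance(ta, tc), 2))
```

Pairs of "globalised" (co-moving) series end up close in the entropy-correlation metric while independent series stay far apart, which is the structure the distance matrices and networks in the paper are built on.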
Numerical simulation of GEW equation using RBF collocation method
Directory of Open Access Journals (Sweden)
Hamid Panahipour
2012-08-01
Full Text Available The generalized equal width (GEW) equation is solved numerically by a meshless method based on a global collocation with standard types of radial basis functions (RBFs). Test problems including propagation of single solitons, interaction of two and three solitons, development of the Maxwellian initial condition pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.
Shang, Yu; Li, Ting; Chen, Lei; Lin, Yu; Toborek, Michal; Yu, Guoqiang
2014-05-01
Conventional semi-infinite solutions for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αDB) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αDB. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αDB; the values of errors in extracting αDB from noisy data were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αDB using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
External individual monitoring: experiments and simulations using Monte Carlo Method
International Nuclear Information System (INIS)
Guimaraes, Carla da Costa
2005-01-01
In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique to photon dosimetry in external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X ray spectra were generated by impinging electrons on a tungsten target. Then, the produced photon beam was filtered through a beryllium window and additional filters to obtain radiation with the desired qualities. This procedure, used to simulate radiation fields produced by an X ray tube, was validated by comparing characteristics such as the half value layer, which was also experimentally measured, the mean photon energy and the spectral resolution of simulated spectra with those of reference spectra established by international standards. In the modeling of the thermoluminescent dosimeter, two improvements have been introduced. The first one was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, due to the difference between measured and calculated values of its density. Also, comparison between simulated and experimental results showed that the self-attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account; in the second improvement, a light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm-1, was therefore introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl-methacrylate (PMMA) walls, for the reference narrow and wide X ray spectrum series [ISO 4037-1], and also for the wide spectra implemented and used routinely at the Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results. Therefore, the PMMA slab water phantom that can be easily constructed with low
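The half-value-layer check used above to validate simulated spectra can be sketched by attenuating a toy discrete spectrum and solving for the filter thickness that halves the energy-weighted fluence (a stand-in for air kerma). All numbers below are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical discrete spectrum: photon energies (keV), relative fluence,
# and aluminium linear attenuation coefficients (1/mm) -- illustrative only.
E = np.array([30.0, 50.0, 80.0])
phi = np.array([0.5, 0.3, 0.2])
mu = np.array([0.30, 0.10, 0.055])

def kerma(t_mm):
    # Energy-fluence weighting as a stand-in for air kerma after t_mm of filter.
    return np.sum(phi * E * np.exp(-mu * t_mm))

k0 = kerma(0.0)
# First HVL: thickness at which the beam quantity drops to half its initial value.
hvl = brentq(lambda t: kerma(t) - 0.5 * k0, 0.0, 100.0)
print(round(hvl, 2))
```

Because low-energy components are filtered out faster (beam hardening), a second HVL computed from the half-to-quarter interval would come out larger than the first, a standard consistency check between simulated and measured spectra.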
Irreducible Green's Functions method in the theory of highly correlated systems
International Nuclear Information System (INIS)
Kuzemsky, A.L.
1994-09-01
The self-consistent theory of correlation effects in Highly Correlated Systems (HCS) is presented. The novel Irreducible Green's Function (IGF) method is discussed in detail for the Hubbard model and the random Hubbard model. An interpolation solution for the quasiparticle spectrum, which is valid in both the atomic and band limits, is obtained. The IGF method permits the calculation of the quasiparticle spectra of many-particle systems with complicated spectra and strong interaction in a very natural and compact way. The essence of the method is deeply related to the notion of Generalized Mean Fields (GMF), which determine the elastic scattering corrections. The inelastic scattering corrections lead to the damping of the quasiparticles and are the main topic of the present consideration. The calculation of the damping has been done in a self-consistent way for both limits. For the random Hubbard model the weak coupling case has been considered, and the self-energy operator has been calculated using a combination of the IGF method and the Coherent Potential Approximation (CPA). Other applications of the method, to the s-f model, the Anderson model, the Heisenberg antiferromagnet, electron-phonon interaction models and quasiparticle tunneling, are discussed briefly. (author). 79 refs
Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods
Directory of Open Access Journals (Sweden)
Oleci Pereira Frota
Full Text Available ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity or stains were detected by visual inspection; when ≥2.5 colony-forming units per cm² were found in culture; or when ≥5 relative light units per cm² were found in the ATP-bioluminescence assay. Results: 720 analyses were performed, 240 per method. The overall rates of clean surfaces per visual inspection, culture and ATP-bioluminescence assay were 8.3%, 20.8% and 44.2% before C&D, and 92.5%, 50% and 84.2% after C&D, respectively (p<0.001). There were only occasional statistically significant relationships between methods. Conclusion: the methods did not present a good correlation, either quantitatively or qualitatively.
Fast Multilevel Panel Method for Wind Turbine Rotor Flow Simulations
van Garrel, Arne; Venner, Cornelis H.; Hoeijmakers, Hendrik Willem Marie
2017-01-01
A fast multilevel integral transform method has been developed that enables the rapid analysis of unsteady inviscid flows around wind turbine rotors. A low-order panel method is used, and the new multi-level multi-integration cluster (MLMIC) method reduces the computational complexity for
Simulation methods for multiperiodic and aperiodic nanostructured dielectric waveguides
DEFF Research Database (Denmark)
Paulsen, Moritz; Neustock, Lars Thorben; Jahns, Sabrina
2017-01-01
on Rudin–Shapiro, Fibonacci, and Thue–Morse binary sequences. The near-field and far-field properties are computed employing the finite-element method (FEM), the finite-difference time-domain (FDTD) method as well as a rigorous coupled wave algorithm (RCWA). The results show that all three methods...
Simulating elastic light scattering using high performance computing methods
Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.
1993-01-01
The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the
Geometry optimization of zirconium sulfophenylphosphonate layers by molecular simulation methods
Czech Academy of Sciences Publication Activity Database
Škoda, J.; Pospíšil, M.; Kovář, P.; Melánová, Klára; Svoboda, J.; Beneš, L.; Zima, Vítězslav
2018-01-01
Roč. 24, č. 1 (2018), s. 1-12, č. článku 10. ISSN 1610-2940 R&D Projects: GA ČR(CZ) GA14-13368S; GA ČR(CZ) GA17-10639S Institutional support: RVO:61389013 Keywords : zirconium sulfophenylphosphonate * intercalation * molecular simulation Subject RIV: CA - Inorganic Chemistry OBOR OECD: Inorganic and nuclear chemistry Impact factor: 1.425, year: 2016
Absolute efficiency calibration of HPGe detector by simulation method
International Nuclear Information System (INIS)
Narayani, K.; Pant, Amar D.; Verma, Amit K.; Bhosale, N.A.; Anilkumar, S.
2018-01-01
High-resolution gamma-ray spectrometry with HPGe detectors is a powerful radioanalytical technique for the estimation of the activity of various radionuclides. In the present work, absolute efficiency calibration of the HPGe detector was carried out using the Monte Carlo simulation technique, and the results are compared with those obtained by experiment using standard radionuclides of 152Eu and 133Ba. The coincidence summing correction factors for the measurement of these nuclides were also calculated.
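As an illustration only (not the authors' code), the geometric part of such an absolute-efficiency calculation can be sketched by Monte Carlo ray sampling for a disc-faced detector and an on-axis point source; a full-energy-peak efficiency would additionally require photon transport and the summing corrections mentioned above:

```python
import math
import random

def geometric_efficiency(det_radius, distance, n=200_000, seed=1):
    """Estimate the geometric (solid-angle) efficiency of a disc detector
    face for an isotropic point source on its axis, by Monte Carlo ray
    sampling. This covers only the solid-angle factor, not interaction
    probability inside the crystal."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # isotropic direction: cos(theta) uniform in [-1, 1]
        cos_t = rng.uniform(-1.0, 1.0)
        if cos_t <= 0.0:
            continue  # emitted away from the detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        # radial position where the ray crosses the detector plane
        r = distance * sin_t / cos_t
        if r <= det_radius:
            hits += 1
    return hits / n
```

For a source at one detector-radius distance the analytic solid-angle fraction is (1 - d/√(d² + R²))/2, which the sampled estimate should reproduce to within statistical error.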
Quantum mechanical simulation methods for studying biological systems
International Nuclear Information System (INIS)
Bicout, D.; Field, M.
1996-01-01
Most known biological mechanisms can be explained using fundamental laws of physics and chemistry, and a full understanding of biological processes requires a multidisciplinary approach in which all the tools of biology, chemistry and physics are employed. An area of research becoming increasingly important is the theoretical study of biological macromolecules, where numerical experimentation plays a double role: establishing a link between theoretical models and predictions, and allowing a quantitative comparison between experiments and models. This workshop brought together researchers working on different aspects of the development and application of quantum mechanical simulation methods, assessed the state of the art in the field and highlighted directions for future research. Fourteen lectures (theoretical courses and specialized seminars) dealt with the following themes: 1) quantum mechanical calculations of large systems, 2) ab initio molecular dynamics, where the calculation of the wavefunction, and hence the energy and forces on the atoms for a system at a single nuclear configuration, is combined with classical molecular dynamics algorithms in order to perform simulations which use a quantum mechanical potential energy surface, 3) quantum dynamical simulations, electron and proton transfer processes in proteins and in solutions and finally, 4) free seminars that helped to enlarge the scope of the workshop. (N.T.)
International Nuclear Information System (INIS)
Keil, Fabian
2014-01-01
Investigating the structure of human cerebral white matter is gaining interest in the neurological as well as in the neuroscientific community. It has been demonstrated in many studies that white matter is a very dynamic structure, rather than a static construct which does not change over a lifetime. That is, structural changes within white matter can be observed even on short timescales, e.g. in the course of normal ageing, neurodegenerative diseases or even during learning processes. To investigate these changes, one method of choice is the texture analysis of images obtained from white matter. In this regard, MRI plays a distinguished role as it provides a completely non-invasive way of acquiring in vivo images of human white matter. This thesis adapted a statistical texture analysis method, known as variography, to quantify the spatial correlation of human cerebral white matter based on MR images. This method, originally introduced in geoscience, relies on the idea of spatial correlation in geological phenomena: in naturally grown structures, near things are correlated more strongly with each other than distant things. This work reveals that the geological principle of spatial correlation can be applied to MR images of human cerebral white matter and proves that variography is an adequate method to quantify alterations therein. Since the process of MRI data acquisition is completely different from the measuring process used to quantify geological phenomena, the variographic analysis had to be adapted carefully to MR methods in order to provide a correctly working methodology. Therefore, theoretical considerations were evaluated with numerical samples in a first step, and validated with real measurements in a second. It was shown that MR variography makes it possible to reduce the information stored in the texture of a white matter image to a few highly significant parameters, thereby quantifying heterogeneity and spatial correlation distance with an accuracy better than 5
Residual stresses measurement by using ring-core method and 3D digital image correlation technique
International Nuclear Information System (INIS)
Hu, Zhenxing; Xie, Huimin; Zhu, Jianguo; Wang, Huaixi; Lu, Jian
2013-01-01
A residual stress measurement technique combining the ring-core method with three-dimensional digital image correlation (3D DIC) is proposed. Ring-core cutting is a mechanical stress-relief method, and by combining it with a 3D DIC system the deformation of the specimen surface can be measured. An optimization iteration method is proposed to obtain the residual stress and the rigid-body motion. The method has the ability to cut an annular trench at a different location out of the field of view. A compression test is carried out to demonstrate how residual stress is determined using the 3D DIC system and outfield measurement. The results determined by the approach are in good agreement with the theoretical value. Ring-core/3D DIC has shown its robustness in determining residual stress and can be extended to applications in the engineering field. (paper)
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
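For illustration only (this is our sketch, not the authors' implementation), the exchange step shared by the RE-type algorithms described above reduces to a Metropolis test on the product of the inverse-temperature and energy differences:

```python
import math
import random

def swap_accept(beta_i, beta_j, e_i, e_j, rng):
    """Metropolis criterion for exchanging configurations between two
    inverse temperatures in replica exchange: accept with probability
    min(1, exp((beta_i - beta_j) * (e_i - e_j)))."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)
```

Simulated tempering replaces this pairwise test with a single-replica move in temperature space, which is why it needs the pre-computed weight factors discussed in the abstract.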
Shinogle-Decker, Heather; Martinez-Rivera, Noraida; O'Brien, John; Powell, Richard D.; Joshi, Vishwas N.; Connell, Samuel; Rosa-Molinar, Eduardo
2018-02-01
A new correlative Förster Resonance Energy Transfer (FRET) microscopy method using FluoroNanogold™, a fluorescent immunoprobe with a covalently attached Nanogold® particle (1.4 nm Au), overcomes resolution limitations in determining distances within synaptic nanoscale architecture. FRET by acceptor photobleaching has long been used as a method to increase fluorescence resolution. The transfer of energy from a donor to an acceptor generally occurs between 10 and 100 Å, which is the relative distance between the donor molecule and the acceptor molecule. For the correlative FRET microscopy method using FluoroNanogold™, we immuno-labeled GFP-tagged-HeLa-expressing Connexin 35 (Cx35) with anti-GFP and with anti-Cx35/36 antibodies, and then photo-bleached the Cx before processing the sample for electron microscopic imaging. Preliminary studies reveal that the use of Alexa Fluor® 594 FluoroNanogold™ slightly increases the FRET distance to 70 Å, in contrast to the 62.5 Å using Alexa Fluor® 594. Preliminary studies also show that using a FluoroNanogold™ probe inhibits photobleaching. After one photobleaching session, Alexa Fluor® 594 fluorescence dropped to 19% of its original fluorescence; in contrast, after one photobleaching session, Alexa Fluor® 594 FluoroNanogold™ fluorescence dropped to 53% of its original intensity. This result confirms that Alexa Fluor® 594 FluoroNanogold™ is a much better donor probe than Alexa Fluor® 594. The new method (a) creates a double confirmation method in determining structure and orientation of synaptic architecture, (b) allows development of a two-dimensional in vitro model to be used for precise testing of multiple parameters, and (c) increases throughput. Future work will include development of FluoroNanogold™ probes with different sizes of gold for additional correlative microscopy studies.
Directory of Open Access Journals (Sweden)
Taghi Baghdadi
2017-05-01
Full Text Available Background: The aim of this study was to evaluate idiopathic congenital clubfoot deformity treated by the Ponseti method, to determine the different factors, such as radiological investigations, that may be related to the risk of failure and recurrence in mid-term follow-up of the patients. Methods: From 2006 to 2011, 226 feet of 149 patients with idiopathic congenital clubfoot were treated with weekly castings by the Ponseti method. Anteroposterior and lateral foot radiographs were obtained at the final follow-up visit, and the data from clinical and radiological outcomes were analysed. Results: In our patients, 191 (84.9%) feet required percutaneous tenotomy. The successful correction rate was 92%, indicating no need for further surgical correction. No significant correlation was found between the remaining deformity rate and the severity of the deformity or compliance with brace use (P=0.108 and 0.207, respectively). The remaining deformity rate had an inverse association with the age at the beginning of treatment (P=0.049). No significant correlation was found between percutaneous tenotomy and passive dorsiflexion range (P=0.356). Conclusion: According to our results, treatment with the Ponseti method showed poor or no correlation with radiological findings. The diagnosis of clubfoot is a clinical judgment; therefore, the outcome of the treatment must only be clinically evaluated. Although the Ponseti method can retrieve the normal shape of the foot, it fails to treat the bone deformities and eventually leads to residual radiologic deformity. Further studies are suggested to define a different modification that can address the abnormal angles between the foot and ankle bones to minimize the risk of recurrence.
Flat-histogram methods in quantum Monte Carlo simulations: Application to the t-J model
International Nuclear Information System (INIS)
Diamantis, Nikolaos G.; Manousakis, Efstratios
2016-01-01
We discuss how flat-histogram techniques can be applied in the sampling of quantum Monte Carlo simulations in order to improve the statistical quality of the results at long imaginary time or low excitation energy. Typical imaginary-time correlation functions calculated in quantum Monte Carlo are subject to exponentially growing errors as the range of imaginary time grows, and this smears the information on the low-energy excitations. We show that we can extract the low-energy physics by modifying the Monte Carlo sampling technique to one in which configurations that contribute to making the histogram of certain quantities flat are promoted. We apply the diagrammatic Monte Carlo (diag-MC) method to the motion of a single hole in the t-J model and show that the implementation of flat-histogram techniques allows us to calculate the Green's function over a wide range of imaginary time. In addition, we show that applying the flat-histogram technique alleviates the "sign" problem associated with the simulation of the single-hole Green's function at long imaginary time. (paper)
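The abstract does not give the sampling rules, but the flavour of flat-histogram promotion can be conveyed with a standard Wang-Landau sketch (our illustration, not the authors' diag-MC code) that estimates the density of states of N independent two-state spins, for which the exact answer is the binomial coefficients:

```python
import math
import random

def wang_landau(N=10, f_final=1e-4, flat=0.8, seed=2):
    """Wang-Landau flat-histogram estimate of the density of states g(E)
    for N independent two-state spins, where E = number of "up" spins
    (exact answer: binomial coefficients C(N, E))."""
    rng = random.Random(seed)
    lng = [0.0] * (N + 1)          # running estimate of ln g(E)
    spins = [rng.randint(0, 1) for _ in range(N)]
    E = sum(spins)
    lnf = 1.0                      # modification factor, halved when flat
    hist = [0] * (N + 1)
    while lnf > f_final:
        for _ in range(10000):
            i = rng.randrange(N)
            E_new = E + (1 - 2 * spins[i])
            # accept with prob min(1, g(E)/g(E_new)): promotes rare energies
            if math.log(rng.random() + 1e-300) < lng[E] - lng[E_new]:
                spins[i] = 1 - spins[i]
                E = E_new
            lng[E] += lnf
            hist[E] += 1
        if min(hist) > flat * (sum(hist) / len(hist)):
            hist = [0] * (N + 1)   # histogram is flat: refine the walk
            lnf /= 2.0
    # normalize so that sum over E of g(E) equals 2**N
    m = max(lng)
    g = [math.exp(x - m) for x in lng]
    s = sum(g)
    return [x * (2 ** N) / s for x in g]
```

The same idea, biasing the walk so that the histogram of a chosen quantity becomes flat, is what the paper applies to the imaginary-time distribution of diagrams.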
Directory of Open Access Journals (Sweden)
Shodiya Sulaimon
2014-07-01
Full Text Available The capillary tube is an important control device used in small vapor compression refrigeration systems such as window air-conditioners, household refrigerators and freezers. This paper develops a non-dimensional correlation, based on test results for the adiabatic capillary tube, for the mass flow rate through the tube using a hydrocarbon refrigerant mixture of 89.3% propane and 10.7% butane (HCM). The Taguchi method, a statistical experimental design approach, was employed. This approach explores the economic benefit that lies in studies of this nature, where only a small number of experiments are required and yet valid results are obtained. Considering the effects of the capillary tube geometry and the inlet condition of the tube, dimensionless parameters were chosen. The new correlation was also based on the Buckingham Pi theorem. This correlation predicts 86.67% of the present experimental data within a relative deviation of -10% to +10%. The predictions by this correlation were also compared with results in the published literature.
Energy Technology Data Exchange (ETDEWEB)
Onishi, Yasuo; Baer, Ellen BK; Chun, Jaehun; Yokuda, Satoru T.; Schmidt, Andrew J.; Sande, Susan; Buchmiller, William C.
2011-02-20
potential for erosion, it is important to compare the measured shear strength to penetrometer measurements and to develop a correlation (or correlations) between UCS measured by a pocket penetrometer and direct shear strength measurements for various homogeneous and heterogeneous simulants. This study developed 11 homogeneous simulants, whose shear strengths vary from 4 to 170 kPa. With these simulants, we developed correlations between UCS measured by a Geotest E-280 pocket penetrometer and shear strength values measured by a Geonor H-60 hand-held vane tester and a more sophisticated bench-top unit, the Haake M5 rheometer. This was achieved with side-by-side measurements of the shear strength and UCS of the homogeneous simulants. The homogeneous simulants developed under this study consist of kaolin clay, plaster of Paris, and amorphous alumina CP-5 with water. The simulants also include modeling clay. The shear strength of most of these simulants is sensitive to various factors, including the simulant size, the intensity of mixing, and the curing time, even with given concentrations of simulant components. Table S.1 summarizes these 11 simulants and their shear strengths.
Directory of Open Access Journals (Sweden)
Knight Chris
2017-01-01
Full Text Available Polydisperse granular materials are ubiquitous in nature and industry. Despite this, knowledge of the momentum coupling between the fluid and solid phases in dense saturated grain packings comes almost exclusively from empirical correlations [2–4, 8] with monosized media. The Immersed Boundary Method (IBM) is a Computational Fluid Dynamics (CFD) modelling technique capable of resolving pore-scale fluid flow and fluid-particle interaction forces in polydisperse media at the grain scale. Validation of the IBM in the low Reynolds number, high concentration limit was performed by comparing simulations of flow through ordered arrays of spheres with the boundary integral results of Zick and Homsy [10]. Random grain packings were studied with linearly graded particle size distributions with a range of coefficient of uniformity values (Cu = 1.01, 1.50, and 2.00) at a range of concentrations (ϕ ∈ [0.396, 0.681]) in order to investigate the influence of polydispersity on drag and permeability. The sensitivity of the IBM results to the choice of radius retraction parameter [1] was investigated and a comparison was made between the predicted forces and the widely used Ergun correlation [3].
Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel
International Nuclear Information System (INIS)
Xiang, Hao; Chen, Bin
2015-01-01
The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free-surface flows. The MPS method has advantages in incompressible flow simulation and simple programming. However, when the MPS method is extended to non-Newtonian flows, its crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor, owing to particle inconsistency. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivatives, the SPH cubic spline kernel and a Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described by τ = μ(|γ̇|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ̇| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container-filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and the different filling processes obtained agree well with previous results, which verifies the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (where We is the Weber number and Fr is the Froude number). (paper)
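A hedged sketch of the ingredient named in the abstract: the standard SPH cubic spline kernel, shown here in its common 2D form with support radius 2h (the paper's exact normalization and spatial dimension may differ):

```python
import math

def cubic_spline_kernel(r, h):
    """Standard SPH cubic spline kernel W(r, h) in 2D, with compact
    support of radius 2h and normalization 10 / (7 * pi * h**2) so that
    W integrates to 1 over the plane."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```

Unlike the original MPS weight, this kernel is twice continuously differentiable, which is what makes it suitable for discretizing the divergence of the shear stress tensor.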
Lupi, Alessandro; Bovino, Stefano; Capelo, Pedro R.; Volonteri, Marta; Silk, Joseph
2018-03-01
In this study, we present a suite of high-resolution numerical simulations of an isolated galaxy to test a sub-grid framework to consistently follow the formation and dissociation of H2 with non-equilibrium chemistry. The latter is solved via the package KROME, coupled to the mesh-less hydrodynamic code GIZMO. We include the effect of star formation (SF), modelled with a physically motivated prescription independent of H2, supernova feedback and mass-losses from low-mass stars, extragalactic and local stellar radiation, and dust and H2 shielding, to investigate the emergence of the observed correlation between H2 and SF rate surface densities. We present two different sub-grid models and compare them with on-the-fly radiative transfer (RT) calculations, to assess the main differences and limits of the different approaches. We also discuss a sub-grid clumping factor model to enhance the H2 formation, consistent with our SF prescription, which is crucial, at the achieved resolution, to reproduce the correlation with H2. We find that both sub-grid models perform very well relative to the RT simulation, giving comparable results, with moderate differences, but at much lower computational cost. We also find that, while the Kennicutt-Schmidt relation for the total gas is not strongly affected by the different ingredients included in the simulations, the H2-based counterpart is much more sensitive, because of the crucial role played by the dissociating radiative flux and the gas shielding.
Prediction of shear wave velocity using empirical correlations and artificial intelligence methods
Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad
2014-06-01
Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main inputs to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving purposes. In this case, shear wave velocity is estimated using the available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as the input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for the estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.
A New Method to Measure Crack Extension in Nuclear Graphite Based on Digital Image Correlation
Directory of Open Access Journals (Sweden)
Shigang Lai
2017-01-01
Full Text Available Graphite components, used as moderators, reflectors, and core-support structures in a High-Temperature Gas-Cooled Reactor, play an important role in the safety of the reactor. Specifically, they provide channels for the fuel elements, control rods, and coolant flow. Fracture is the main failure mode for graphite, and breaching of the above channels by crack extension will seriously threaten the safety of a reactor. In this paper, a new method based on digital image correlation (DIC) is introduced for measuring crack extension in brittle materials. Cross-correlation of the displacements measured by DIC with a step function was employed to identify the advancing crack tip in a graphite beam specimen under three-point bending. The load-crack extension curve, which is required for analyzing the R-curve and tension-softening behaviors, was obtained for this material. Furthermore, a sensitivity analysis of the threshold value employed for the cross-correlation parameter in the crack identification process was conducted. Finally, the results were verified using the finite element method.
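The core idea, correlating a measured displacement profile with a step function to locate the discontinuity, can be sketched as follows (an illustrative reimplementation, not the authors' code; the crack face shows up as a jump in the displacement component normal to it):

```python
def step_correlation(signal):
    """Locate the jump in a 1-D displacement profile by cross-correlating
    it with a step function: the split index that maximizes the difference
    of segment means marks the discontinuity (crack face)."""
    n = len(signal)
    best_k, best_val = 0, float("-inf")
    total = sum(signal)
    left = 0.0
    for k in range(1, n):
        left += signal[k - 1]
        right = total - left
        # correlation with a unit step at k is proportional to the
        # difference between the two segment means
        val = abs(right / (n - k) - left / k)
        if val > best_val:
            best_k, best_val = k, val
    return best_k
```

Scanning this detector along successive rows of a DIC displacement field, and thresholding the peak correlation value, yields the advancing crack-tip position described in the abstract.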
Yao, X. F.; Xiong, T. C.; Xu, H. M.; Wan, J. P.; Long, G. R.
2008-11-01
The residual stresses of PMMA (polymethyl methacrylate) specimens after being drilled, reamed and polished, respectively, are investigated using the digital speckle correlation experimental method. From the displacement fields in the correlated calculation region, a polynomial curve-fitting method is used to obtain continuous displacement fields, and the strain fields can be obtained from the derivatives of the displacement fields. Using the constitutive equation of the material, an expression for the residual stress can be derived. During the data processing, the calculation region of the correlated speckles and the degree of the polynomial fitting curve are chosen according to the fitting quality of the data. The results show that the maximum stress occurs at the hole wall of the drilled specimen and that the residual stress resulting from hole drilling increases as the diameter of the drilled hole increases, whereas reaming and polishing the hole can reduce the residual stress. The relatively large scatter of the residual stress is due to the chip removal ability of the drill bit, the cutting feed of the drill and various other factors.
International Nuclear Information System (INIS)
Batta, A.; Class, A.; Litfin, K.; Wetzel, T.
2011-01-01
The Rehme correlation is the most common formula to estimate the pressure drop of spacers in the design phase of new bundle geometries. It is based on considerations of momentum losses and takes into account the obstruction of the flow cross section but it ignores the geometric details of the spacer design. Within the framework of accelerator driven sub-critical reactor systems (ADS), heavy-liquid-metal (HLM) cooled fuel assemblies are considered. At the KArlsruhe Liquid metal LAboratory (KALLA) of the Karlsruhe Institute of Technology a series of experiments to quantify both pressure losses and heat transfer in HLM-cooled rod bundles are performed. The present study compares simulation results obtained with the commercial CFD code Star-CCM to experiments and the Rehme correlation. It can be shown that the Rehme correlation, simulations and experiments all yield similar trends, but quantitative predictions can only be delivered by the CFD which takes into account the full geometric details of the spacer geometry. (orig.)
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith
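As a hedged sketch of the common ingredient behind the intervals discussed above (not the paper's code, and omitting Smith's standard-error formula itself), the one-way ANOVA point estimator of the intraclass correlation coefficient for clustered, possibly binary, outcomes is:

```python
def anova_icc(clusters):
    """One-way ANOVA estimator of the intraclass correlation coefficient
    for outcomes grouped into clusters of possibly unequal size:
    rho_hat = (MSB - MSW) / (MSB + (n0 - 1) * MSW)."""
    k = len(clusters)
    sizes = [len(c) for c in clusters]
    N = sum(sizes)
    grand = sum(sum(c) for c in clusters) / N
    # between- and within-cluster sums of squares
    ssb = sum(n * (sum(c) / n - grand) ** 2 for n, c in zip(sizes, clusters))
    ssw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters)
    msb = ssb / (k - 1)
    msw = ssw / (N - k)
    # n0: the "average" cluster size adjusted for size imbalance
    n0 = (N - sum(n * n for n in sizes) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

The sub-cluster approach described above would randomly partition each cluster into smaller units and then apply this same estimator (and an interval method) to the finer grouping.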
Cinar, A. F.; Barhli, S. M.; Hollis, D.; Flansbjer, M.; Tomlinson, R. A.; Marrow, T. J.; Mostafavi, M.
2017-09-01
Digital image correlation has been routinely used to measure full-field displacements in many areas of solid mechanics, including fracture mechanics. Accurate segmentation of the crack path is needed to study its interaction with the microstructure and stress fields, and studies of crack behaviour, such as the effect of closure or residual stress in fatigue, require data on its opening displacement. Such information can be obtained from any digital image correlation analysis of cracked components, but its collection by manual methods is onerous, particularly for large amounts of data. We introduce the novel application of Phase Congruency to detect and quantify cracks and their opening. Unlike other crack detection techniques, Phase Congruency does not rely on adjustable threshold values that require user interaction, and so allows large datasets to be treated autonomously. The accuracy of the Phase Congruency-based algorithm in detecting cracks is evaluated and compared with conventional methods such as Heaviside function fitting. As Phase Congruency is a displacement-based method, it does not suffer from the noise intensification to which gradient-based methods (e.g. strain thresholding) are susceptible. Its application is demonstrated on experimental data for cracks in quasi-brittle (granitic rock) and ductile (aluminium alloy) materials.
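The conventional baseline the abstract mentions, Heaviside function fitting, amounts to fitting a step function u(x) ≈ a + b·H(x − c) to a displacement profile sampled across the crack, where the jump height b estimates the crack opening displacement. A minimal sketch (the brute-force search over the step location and the function name are illustrative assumptions, not the paper's implementation):

```python
def heaviside_fit(xs, us):
    """Least-squares fit of u(x) = a + b*H(x - c) to a displacement
    profile (xs, us) sampled across a suspected crack. Returns
    (a, b, c): baseline, jump height (= opening estimate), and
    step location. Brute-forces c over the gaps between samples."""
    best = None
    n = len(xs)
    for i in range(1, n):                    # step between samples i-1 and i
        left, right = us[:i], us[i:]
        a = sum(left) / len(left)            # mean displacement below the step
        b = sum(right) / len(right) - a      # jump height = crack opening
        sse = (sum((u - a) ** 2 for u in left)
               + sum((u - a - b) ** 2 for u in right))
        if best is None or sse < best[0]:
            best = (sse, a, b, 0.5 * (xs[i - 1] + xs[i]))
    _, a, b, c = best
    return a, b, c
```

Unlike this per-profile fit, the paper's Phase Congruency approach works on the displacement field directly and needs no user-tuned threshold, which is what makes it attractive for large datasets.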
DEFF Research Database (Denmark)
Völcker, Carsten; Jørgensen, John Bagterp; Thomsen, Per Grove
2010-01-01
The implicit Euler method, normally referred to as the fully implicit (FIM) method, and the implicit pressure explicit saturation (IMPES) method are the traditional choices for temporal discretization in reservoir simulation. The FIM method offers unconditional stability in the sense of discrete......-Kutta methods, ESDIRK, Newton-Raphson, convergence control, error control, stepsize selection....
Mock ECHO: A Simulation-Based Medical Education Method.
Fowler, Rebecca C; Katzman, Joanna G; Comerci, George D; Shelley, Brian M; Duhigg, Daniel; Olivas, Cynthia; Arnold, Thomas; Kalishman, Summers; Monnette, Rebecca; Arora, Sanjeev
2018-04-16
This study was designed to develop a deeper understanding of the learning and social processes that take place during the simulation-based medical education for practicing providers as part of the Project ECHO® model, known as Mock ECHO training. The ECHO model is utilized to expand access to care of common and complex diseases by supporting the education of primary care providers with an interprofessional team of specialists via videoconferencing networks. Mock ECHO trainings are conducted through a train the trainer model targeted at leaders replicating the ECHO model at their organizations. Trainers conduct simulated teleECHO clinics while participants gain skills to improve communication and self-efficacy. Three focus groups, conducted between May 2015 and January 2016 with a total of 26 participants, were deductively analyzed to identify common themes related to simulation-based medical education and interdisciplinary education. Principal themes generated from the analysis included (a) the role of empathy in community development, (b) the value of training tools as guides for learning, (c) Mock ECHO design components to optimize learning, (d) the role of interdisciplinary education to build community and improve care delivery, (e) improving care integration through collaboration, and (f) development of soft skills to facilitate learning. Mock ECHO trainings offer clinicians the freedom to learn in a noncritical environment while emphasizing real-time multidirectional feedback and encouraging knowledge and skill transfer. The success of the ECHO model depends on training interprofessional healthcare providers in behaviors needed to lead a teleECHO clinic and to collaborate in the educational process. While building a community of practice, Mock ECHO provides a safe opportunity for a diverse group of clinician experts to practice learned skills and receive feedback from coparticipants and facilitators.
Computerized method for X-ray angular distribution simulation in radiological systems
International Nuclear Information System (INIS)
Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.
1996-01-01
A method to simulate the changes in X-ray angular distribution (the Heel effect) for radiologic imaging systems is presented. The simulation method is designed to predict images for any exposure technique, considering that this angular distribution causes the intensity variation along the radiation field.
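The physical origin of the Heel effect is that photons emitted toward the anode side traverse a longer path through the target material and are more strongly attenuated. A toy model of the resulting intensity profile across the field, not the authors' method; all parameter values (anode angle, effective emission depth, attenuation coefficient) are hypothetical placeholders:

```python
import math

def heel_profile(field_positions_cm, sid_cm=100.0, anode_angle_deg=12.0,
                 depth_cm=0.0005, mu_per_cm=500.0):
    """Illustrative anode heel-effect model: relative intensity along the
    cathode-anode axis (positive x toward the anode). Each ray's path
    length in the target is depth / tan(alpha - theta), where alpha is
    the anode angle and theta = atan(x / SID) is the off-axis angle;
    intensity falls off as exp(-mu * path)."""
    alpha = math.radians(anode_angle_deg)
    out = []
    for x in field_positions_cm:
        theta = math.atan2(x, sid_cm)          # ray angle toward the anode
        takeoff = alpha - theta                # effective take-off angle
        path = depth_cm / math.tan(takeoff)    # path length inside the target
        out.append(math.exp(-mu_per_cm * path))
    return out
```

The profile decreases monotonically toward the anode side, which is the qualitative behaviour a Heel-effect simulation must reproduce; a realistic simulator would replace this single-ray attenuation model with measured or spectrum-resolved angular distributions.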