WorldWideScience

Sample records for methods simulate correlated

  1. Method for numerical simulation of two-term exponentially correlated colored noise

    International Nuclear Information System (INIS)

    Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.

    2006-01-01

    A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of the traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications.
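
    The "traditional method" for one term is the exact Ornstein-Uhlenbeck update; a natural two-term sketch (an assumption, not necessarily the authors' algorithm) sums two independent such processes, giving the autocorrelation a1*exp(-|t|/tau1) + a2*exp(-|t|/tau2). A minimal Python illustration with made-up parameter names:

      import numpy as np

      def colored_noise_two_term(n_steps, dt, tau1, tau2, var1, var2, seed=None):
          """Noise with C(t) = var1*exp(-|t|/tau1) + var2*exp(-|t|/tau2), built as
          the sum of two independent Ornstein-Uhlenbeck processes (exact update)."""
          rng = np.random.default_rng(seed)
          rho = np.exp(-dt / np.array([tau1, tau2]))   # one-step autocorrelation of each term
          sig = np.sqrt(np.array([var1, var2]) * (1.0 - rho**2))
          x = np.zeros(2)                              # start at zero; the transient decays
          eta = np.empty(n_steps)
          for i in range(n_steps):
              x = rho * x + sig * rng.standard_normal(2)
              eta[i] = x.sum()
          return eta

      noise = colored_noise_two_term(100_000, dt=0.01, tau1=0.1, tau2=1.0, var1=1.0, var2=0.5)

    The sample autocorrelation of `noise` can then be checked against the two-term target, mirroring the validity test described in the abstract.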

  2. Population models and simulation methods: The case of the Spearman rank correlation.

    Science.gov (United States)

    Astivia, Oscar L Olvera; Zumbo, Bruno D

    2017-11-01

    The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions purported in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof for the consistency of the sample estimate of the rank correlation is shown as well as the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
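
    For a bivariate Gaussian copula, the population Spearman correlation rho_S relates to the normal correlation r by rho_S = (6/pi)*arcsin(r/2), so a population model with a prescribed rank correlation can be sampled by inverting that relation. A sketch of this copula construction (the Iman-Conover algorithm itself reorders an existing sample and is not reproduced here):

      import numpy as np
      from scipy.stats import norm, expon, spearmanr

      def sample_with_spearman(rho_s, n, ppf_x, ppf_y, seed=None):
          """Draw (x, y) with arbitrary marginals (given by quantile functions)
          and population Spearman correlation rho_s, via a Gaussian copula."""
          rng = np.random.default_rng(seed)
          r = 2.0 * np.sin(np.pi * rho_s / 6.0)          # invert rho_S = (6/pi)*arcsin(r/2)
          z = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n)
          u = norm.cdf(z)                                # copula scale: Uniform(0,1) marginals
          return ppf_x(u[:, 0]), ppf_y(u[:, 1])

      x, y = sample_with_spearman(0.5, 100_000, expon.ppf, norm.ppf)
      print(spearmanr(x, y)[0])                          # close to 0.5 by construction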

  3. Simulation of speckle patterns with pre-defined correlation distributions

    Science.gov (United States)

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S.

    2016-01-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques. PMID:27231589

  4. Two-dimensional Simulations of Correlation Reflectometry in Fusion Plasmas

    International Nuclear Information System (INIS)

    Valeo, E.J.; Kramer, G.J.; Nazikian, R.

    2001-01-01

    A two-dimensional wave propagation code, developed specifically to simulate correlation reflectometry in large-scale fusion plasmas, is described. The code makes use of separate computational methods in the vacuum, underdense and reflection regions of the plasma in order to obtain the high computational efficiency necessary for correlation analysis. Simulations of Tokamak Fusion Test Reactor (TFTR) plasma with internal transport barriers are presented and compared with one-dimensional full-wave simulations. It is shown that the two-dimensional simulations are remarkably similar to the results of the one-dimensional full-wave analysis for a wide range of turbulent correlation lengths. Implications for the interpretation of correlation reflectometer measurements in fusion plasmas are discussed.

  5. Total and Direct Correlation Function Integrals from Molecular Simulation of Binary Systems

    DEFF Research Database (Denmark)

    Wedberg, Nils Hejle Rasmus Ingemar; O’Connell, John P.; Peters, Günther H.J.

    2011-01-01

    The possibility for obtaining derivative properties for mixtures from integrals of spatial total and direct correlation functions obtained from molecular dynamics simulations is explored. Theoretically well-supported methods are examined to extend simulation radial distribution functions to long ... are consistent with an excess Helmholtz energy model fitted to available simulations. In addition, simulations of water/methanol and water/t-butanol mixtures have been carried out. The method yields results for partial molar volumes, activity coefficient derivatives, and individual correlation function integrals ... in reasonable agreement with smoothed experimental data. The proposed method for obtaining correlation function integrals is shown to perform at least as well as or better than two previously published approaches.

  6. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.

  7. Methods for Monte Carlo simulations of biomacromolecules.

    Science.gov (United States)

    Vitalis, Andreas; Pappu, Rohit V

    2009-01-01

    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.

  8. Hydrogen Epoch of Reionization Array (HERA) Calibrated FFT Correlator Simulation

    Science.gov (United States)

    Salazar, Jeffrey David; Parsons, Aaron

    2018-01-01

    The Hydrogen Epoch of Reionization Array (HERA) project is an astronomical radio interferometer array with a redundant baseline configuration. Interferometer arrays are widely used in radio astronomy because they have a variety of advantages over single-antenna systems. For example, they produce images (visibilities) closely matching those of a large antenna (such as the Arecibo observatory), while both the hardware and maintenance costs are significantly lower. However, this method has some complications, one being the computational cost of correlating data from all of the antennas. A correlator is an electronic device that cross-correlates the data between the individual antennas; these cross-correlations are what radio astronomers call visibilities. HERA, being in its early stages, utilizes a traditional correlator system, whose cost scales as N^2, where N is the number of antennas in the array. The purpose of a redundant baseline configuration is to enable a more efficient Fast Fourier Transform (FFT) correlator: FFT correlators scale as N log2 N. The data acquired from this sort of setup, however, carry geometric delays and uncalibrated antenna gains. This particular project simulates the process of calibrating signals from astronomical sources. Each signal "received" by an antenna in the simulation is given a random antenna gain and geometric delay. The "linsolve" Python module was used to solve for the unknown variables in the simulation (complex gains and delays), which then gave values for the true visibilities. This first version of the simulation only mimics a one-dimensional redundant telescope array detecting a small number of sources located in the volume above the antenna plane. Future versions, using GPUs, will handle a two-dimensional redundant array of telescopes detecting a large number of sources in the volume above the array.
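
    The N^2 versus N log N distinction can be made concrete for a toy one-dimensional, uniformly spaced redundant array: summing antenna-pair products at each redundant spacing is a spatial autocorrelation, which a zero-padded FFT evaluates in one pass. A sketch for a single snapshot with perfect calibration (the gains and delays that the project solves for with linsolve are deliberately left out):

      import numpy as np

      def pairwise_visibilities(v):
          """O(N^2) correlator: average v_j * conj(v_i) over all pairs at each spacing b = j - i."""
          n = len(v)
          return np.array([np.mean(v[b:] * np.conj(v[:n - b])) for b in range(n)])

      def fft_visibilities(v):
          """O(N log N) correlator: the zero-padded spatial autocorrelation of the
          antenna voltages yields the same redundant-baseline averages."""
          n = len(v)
          f = np.fft.fft(v, 2 * n)                        # zero-pad to avoid circular wrap-around
          acf = np.fft.ifft(f * np.conj(f))[:n]           # sum over the pairs at each spacing
          return acf / (n - np.arange(n))                 # divide by the number of redundant pairs

      rng = np.random.default_rng(0)
      volts = rng.standard_normal(64) + 1j * rng.standard_normal(64)
      assert np.allclose(pairwise_visibilities(volts), fft_visibilities(volts))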

  9. Angular correlation methods

    International Nuclear Information System (INIS)

    Ferguson, A.J.

    1974-01-01

    An outline of the theory of angular correlations is presented, and the difference between the modern density matrix method and the traditional wave function method is stressed. Comments are offered on particular angular correlation theoretical techniques. A brief discussion is given of recent studies of gamma ray angular correlations of reaction products recoiling with high velocity into vacuum. Two methods for optimization to obtain the most accurate expansion coefficients of the correlation are discussed. (1 figure, 53 references) (U.S.)

  10. Image correlation method for DNA sequence alignment.

    Science.gov (United States)

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel of fixed gray intensity. Query and known database sequences are coded to their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, a one-million-base-pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator; by doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
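
    The underlying idea, bases mapped to gray levels and the alignment read off the peak of a correlation, fits in a few lines. The sketch below is one-dimensional with invented gray values, whereas the paper tiles sequences into 100 x 100 images, but the correlation step is the same in spirit:

      import numpy as np
      from numpy.fft import fft, ifft

      GRAY = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}    # illustrative gray levels

      def encode(seq):
          """Map a nucleotide string to a zero-mean gray-level signal."""
          x = np.array([GRAY[b] for b in seq])
          return x - x.mean()

      def best_alignment(query, database):
          """Return the database offset where the FFT cross-correlation of the
          gray-level encodings peaks, i.e. the best alignment of the query."""
          q, d = encode(query), encode(database)
          m = len(d) + len(q) - 1                            # linear (non-circular) correlation size
          cc = ifft(fft(d, m) * np.conj(fft(q, m))).real
          return int(np.argmax(cc[:len(d) - len(q) + 1]))

      db = "ACGT" * 250
      print(best_alignment("GTACGTACG", db))                 # 2: first of the periodic matches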

  11. Generalized canonical correlation analysis of matrices with missing rows : A simulation study

    NARCIS (Netherlands)

    van de Velden, Michel; Bijmolt, Tammo H. A.

    A method is presented for generalized canonical correlation analysis of two or more matrices with missing rows. The method is a combination of Carroll's (1968) method and the missing data approach of the OVERALS technique (Van der Burg, 1988). In a simulation study we assess the performance of the ...

  12. High correlation between performance on a virtual-reality simulator and real-life cataract surgery

    DEFF Research Database (Denmark)

    Thomsen, Ann Sofia Skou; Smith, Phillip; Subhi, Yousif

    2017-01-01

    PURPOSE: To investigate the correlation in performance of cataract surgery between a virtual-reality simulator and real-life surgery using two objective assessment tools with evidence of validity. METHODS: Cataract surgeons with varying levels of experience were included in the study. All ... antitremor training, forceps training, bimanual training, capsulorhexis and phaco divide and conquer. RESULTS: Eleven surgeons were enrolled. After a designated warm-up period, the proficiency-based test on the EyeSi simulator was strongly correlated to real-life performance measured by motion-tracking software of cataract surgical videos, with a Pearson correlation coefficient of -0.70 (p = 0.017). CONCLUSION: Performance on the EyeSi simulator is significantly and highly correlated to real-life surgical performance. However, it is recommended that performance assessments are made using multiple data ...

  13. A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising

    Directory of Open Access Journals (Sweden)

    Can He

    2015-01-01

    Due to its simple calculation and good denoising effect, the wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve on the denoising effect of the existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method can achieve a good denoising effect under various signal types, noise intensities, and thresholding functions.
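
    As a point of reference for what an interscale-correlation threshold competes with, the standard universal-threshold denoiser (Donoho-Johnstone style) looks as follows; this is the baseline, not the paper's new threshold:

      import numpy as np
      import pywt  # PyWavelets

      def universal_soft_denoise(x, wavelet="db4", level=4):
          """Estimate the noise level from the finest detail band (median absolute
          deviation) and soft-threshold all detail bands at sigma*sqrt(2*ln(n))."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thresh = sigma * np.sqrt(2.0 * np.log(len(x)))
          coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)

      t = np.linspace(0.0, 1.0, 2048)
      clean = np.sin(2 * np.pi * 5 * t)
      noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(t.size)
      print(np.mean((universal_soft_denoise(noisy)[:t.size] - clean) ** 2))  # denoised MSE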

  14. Simulating Optical Correlation on a Digital Image Processor

    Science.gov (United States)

    Denning, Bryan

    1998-04-01

    Optical correlation is a useful tool for recognizing objects in video scenes. In this paper, we explore the characteristics of a composite filter known as the equal correlation peak synthetic discriminant function (ECP SDF). Although the ECP SDF is commonly used in coherent optical correlation systems, the authors simulated the operation of a correlator using an EPIX frame grabber/image processor board to complete this work. Issues pertaining to simulating correlation using an EPIX board will be discussed. Additionally, the ability of the ECP SDF to detect objects that have been subjected to in-plane rotation and small scale changes will be addressed by correlating filters against true-class objects placed randomly within a scene. To test the robustness of the filters, the results of correlating the filter against false-class objects that closely resemble the true class will also be presented.

  15. Cross-Correlation-Function-Based Multipath Mitigation Method for Sine-BOC Signals

    Directory of Open Access Journals (Sweden)

    H. H. Chen

    2012-06-01

    Global Navigation Satellite System (GNSS) positioning accuracy in indoor and urban-canyon environments is greatly affected by multipath, owing to the distortions it causes in the autocorrelation function. In this paper, the cross-correlation function between the received sine-phased Binary Offset Carrier (sine-BOC) modulated signal and the local signal is first studied, and a new multipath mitigation method based on this cross-correlation function for sine-BOC signals is proposed. The method creates the cross-correlation function by designing the modulated symbols of the local signal. Theoretical analysis and simulation results indicate that the proposed method exhibits better multipath mitigation performance than the traditional Double Delta Correlator (DDC) techniques, especially for medium/long-delay multipath signals, and it is also convenient and flexible to implement, using only one correlator, which suits low-cost mass-market receivers.

  16. Estimation of velocity vector angles using the directional cross-correlation method

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt

    2006-01-01

    A method for determining both velocity magnitude and angle in any direction is suggested. The method uses focusing along the velocity direction and cross-correlation for finding the correct velocity magnitude. The angle is found from beamforming directional signals in a number of directions and then selecting the angle with the highest normalized correlation between directional signals. The approach is investigated using Field II simulations and data from the experimental ultrasound scanner RASMUS and a circulating flow rig with a parabolic flow having a peak velocity of 0.3 m/s. A 7 MHz linear array transducer is used with a normal transmission of a focused ultrasound field. In the simulations the relative standard deviation of the velocity magnitude is between 0.7% and 7.7% for flow angles between 45 deg and 90 deg. The study showed that angle estimation by directional beamforming can be estimated ...

  17. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, which simulate the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows one to track fine 3-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to bring an important speed-up to the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of the depletion of a PWR-like fuel cell. Then this technique is used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After having validated the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr
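
    The reweighting at the heart of correlated sampling can be shown on a toy attenuation problem: histories are sampled once from a reference cross section, and the perturbed answer is obtained from the same histories through a likelihood ratio (this is only the generic principle, not the TRIPOLI-4 implementation):

      import numpy as np

      rng = np.random.default_rng(2)
      sigma0, sigma1, slab = 1.0, 1.05, 2.0               # reference / perturbed total cross sections
      x = rng.exponential(1.0 / sigma0, 1_000_000)        # flight lengths sampled once, from sigma0

      w = (sigma1 / sigma0) * np.exp(-(sigma1 - sigma0) * x)   # likelihood ratio p1(x)/p0(x)
      t0 = np.mean(x > slab)                              # reference transmission estimate
      t1 = np.mean((x > slab) * w)                        # perturbed estimate, same histories

      print(t0, np.exp(-sigma0 * slab))                   # Monte Carlo vs exact, reference
      print(t1, np.exp(-sigma1 * slab))                   # Monte Carlo vs exact, perturbed

    Because both estimates share the same random histories, their difference fluctuates far less than between two independent runs, which is what makes perturbative depletion steps cheap.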

  18. Mott Transition In Strongly Correlated Materials: Many-Body Methods And Realistic Materials Simulations

    Science.gov (United States)

    Lee, Tsung-Han

    Strongly correlated materials are a class of materials that cannot be properly described by Density Functional Theory (DFT), which is a single-particle approximation to the original many-body electronic Hamiltonian. These systems contain d- or f-orbital electrons, i.e., transition metal, actinide, and lanthanide compounds, for which the electron-electron interaction (correlation) effects are too strong to be described by the single-particle approximation of DFT. Therefore, complementary many-body methods have been developed, at the level of model Hamiltonians, to describe these strong correlation effects. Dynamical Mean Field Theory (DMFT) and the Rotationally Invariant Slave-Boson (RISB) approach are two successful methods that can capture the correlation effects over a broad range of interaction strengths. However, these many-body methods, as applied to model Hamiltonians, treat the electronic structure of realistic materials in a phenomenological fashion, which only allows their properties to be described qualitatively. Consequently, combinations of DFT and many-body methods, e.g., the Local Density Approximation augmented by RISB and DMFT (LDA+RISB and LDA+DMFT), have recently been proposed to merge the advantages of both methods into a quantitative tool for analyzing strongly correlated systems. In this dissertation, we studied possible improvements of these approaches and tested their accuracy on realistic materials. This dissertation is separated into two parts. In the first part, we studied the extension of DMFT and RISB in three directions. First, we extended the DMFT framework to investigate the behavior of the domain-wall structure in the metal-Mott-insulator coexistence regime by studying the unstable solution describing the domain wall. We found that this solution, differing qualitatively from both the metallic and the insulating solutions, displays insulating-like behavior in resistivity while carrying a weak metallic character in its electronic structure. Second, we ...

  19. Methods of channeling simulation

    International Nuclear Information System (INIS)

    Barrett, J.H.

    1989-06-01

    Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs

  20. A Task-Oriented Disaster Information Correlation Method

    Science.gov (United States)

    Linyao, Q.; Zhiqiang, D.; Qing, Z.

    2015-07-01

    With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulated data, and disaster products. However, current data management and service systems have become increasingly inefficient in the face of task variety and heterogeneous data. For emergency task-oriented applications, data searches primarily rely on human experience applied to simple metadata indices; their high time consumption and low accuracy cannot satisfy the speed and veracity requirements for disaster products. In this paper, a task-oriented correlation method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward a disaster task ontology and a data ontology to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple data sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the correlation between each data set and a certain task. The method goes beyond traditional static management of disaster data and establishes a basis for intelligent retrieval and active dissemination of disaster information. The case study presented in this paper illustrates the use of the method on an example flood emergency relief task.

  1. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.

  2. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient ... optimize the scaling parameter of the covariance. The second estimator decomposes the probability of interest in two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling ...

  3. On the boundary conditions and optimization methods in integrated digital image correlation

    NARCIS (Netherlands)

    Kleinendorst, S.M.; Verhaegh, B.J.; Hoefnagels, J.P.M.; Ruybalid, A.; van der Sluis, O.; Geers, M.G.D.; Lamberti, L.; Lin, M.-T.; Furlong, C.; Sciammarella, C.

    2018-01-01

    In integrated digital image correlation (IDIC) methods, attention must be paid not only to the influence of using a correct geometric and material model, but also to making the boundary conditions in the FE simulation match the real experiment. Another issue is the robustness and convergence of the IDIC ...

  4. Isothermal-isobaric Nose-Hoover method application: correlation length and disclinations per particle

    International Nuclear Information System (INIS)

    Morales, J.J.; Nuevo, J.M.; Rull, L.F.

    1987-01-01

    The new isothermal-isobaric MD(T,p,N) method of Nosé and Hoover is applied in Molecular Dynamics simulations to both the liquid and the solid near the phase transition. We tested for an appropriate value of the isobaric friction coefficient before calculating the correlation length in the liquid and the disclinations per particle in the solid on a large system of 2304 particles. The results are compared with those obtained by traditional MD(E,V,N) simulation. (author)

  5. Research on neutron noise analysis stochastic simulation method for α calculation

    International Nuclear Information System (INIS)

    Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang

    2014-01-01

    The prompt decay constant α has significant application in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of α value calculation with the Monte Carlo method, and to improve the precision, a new method based on neutron noise analysis technology was presented. This method employs stochastic simulation and the theory of neutron noise analysis. Firstly, the evolution of stochastic neutrons was simulated by a discrete-event Monte Carlo method based on the theory of generalized semi-Markov processes; then the neutron noise in detectors was extracted from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method, and the cross-correlation method were used to calculate the α value. All of the parameters used in the neutron noise analysis methods were calculated with auto-adaptive arithmetic. The α values from these methods accord with each other; the largest relative deviation is 7.9%, which proves the feasibility of the α calculation method based on neutron noise analysis stochastic simulation. (authors)

  6. Determination of velocity vector angles using the directional cross-correlation method

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt

    2005-01-01

    A method for determining both velocity magnitude and angle in any direction is suggested. The method uses focusing along the velocity direction and cross-correlation for finding the correct velocity magnitude. The angle is found from beamforming directional signals in a number of directions and then selecting the angle with the highest normalized correlation between directional signals. The approach is investigated using Field II simulations and data from the experimental ultrasound scanner RASMUS with a parabolic flow having a peak velocity of 0.3 m/s. A 7 MHz linear array transducer is used ... between signals to correlate, and a proper choice varies with flow angle and flow velocity. One performance example is given with a fixed value of k_tprf for all flow angles. The angle estimation on measured data for flow at 60° to 90° yields a probability of valid estimates between 68% and 98 ...

  7. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
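
    The conventional (full-dimension) spectral representation that these schemes reduce can be sketched for a scalar process: sample a one-sided PSD on a frequency grid, attach independent random phases, and evaluate the cosine series with a single FFT. The toy PSD below is an assumption for illustration only:

      import numpy as np

      def srm_simulate(psd, dw, n_freq, n_time, seed=None):
          """Spectral representation method for a scalar stationary process:
          X(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k), evaluated via FFT.
          The time step is fixed by the FFT grid: dt = 2*pi/(n_time*dw)."""
          rng = np.random.default_rng(seed)
          w = np.arange(1, n_freq + 1) * dw
          amp = np.sqrt(2.0 * psd(w) * dw)
          spec = np.zeros(n_time, dtype=complex)
          spec[1:n_freq + 1] = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_freq))
          x = (np.fft.ifft(spec) * n_time).real          # evaluates the cosine series
          return x, 2.0 * np.pi / (n_time * dw)

      psd = lambda w: 1.0 / (1.0 + w**2)                 # assumed toy wind-like spectrum
      x, dt = srm_simulate(psd, dw=0.01, n_freq=1024, n_time=4096)
      print(x.var())                                     # approx the integral of the PSD

    The dimension-reduction variants replace the n_freq independent random phases above with random functions of only 2 or 3 elementary random variables.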

  8. [Lack of correlation between performances in a simulator and in reality].

    Science.gov (United States)

    Konge, Lars; Bitsch, Mikael

    2010-12-13

    Simulation-based training provides obvious benefits for patients and for doctors in education. However, virtual-reality simulators are frequently expensive, and the evidence for their efficacy is poor, particularly as a result of studies with poor methodology and few test participants. In medical simulation-based training and evaluation programmes, transfer to the real clinical world is always the question. To illustrate this problem, a study was conducted comparing persons' test performance on a bowling simulator with their performance in a real bowling alley. Twenty-five test subjects played two rounds of bowling on a Nintendo Wii and, 25 days later, in a real bowling alley. Correlations of the scores in the first and second rounds (test-retest reliability) and of the scores on the simulator and in reality (criterion validity) were studied, and we tested for any difference between female and male performance. The intraclass correlation coefficient equalled 0.76, i.e. the simulator measured participant performance fairly accurately. In contrast, there was no correlation at all between participants' real bowling abilities and their scores on the simulator (Pearson's r = 0.06). There was no significant difference between female and male abilities. Simulation-based testing and training must be based on evidence, and future studies need to include an adequate number of subjects. Bowling competence should not be assessed from Nintendo Wii measurements. Simulated training and evaluation programmes should be validated before introduction, to ensure consistency with the real world.

  9. Quantum simulation of strongly correlated condensed matter systems

    Science.gov (United States)

    Hofstetter, W.; Qin, T.

    2018-04-01

    We review recent experimental and theoretical progress in realizing and simulating many-body phases of ultracold atoms in optical lattices, which gives access to analog quantum simulations of fundamental model Hamiltonians for strongly correlated condensed matter systems, such as the Hubbard model. After a general introduction to quantum gases in optical lattices, their preparation and cooling, and measurement techniques for relevant observables, we focus on several examples, where quantum simulations of this type have been performed successfully during the past years: Mott-insulator states, itinerant quantum magnetism, disorder-induced localization and its interplay with interactions, and topological quantum states in synthetic gauge fields.

  10. The correlation functions of hard-sphere chain fluids: Comparison of the Wertheim integral equation theory with the Monte Carlo simulation

    International Nuclear Information System (INIS)

    Chang, J.; Sandler, S.I.

    1995-01-01

    The correlation functions of homonuclear hard-sphere chain fluids are studied using the Wertheim integral equation theory for associating fluids and the Monte Carlo simulation method. The molecular model used in the simulations is the freely jointed hard-sphere chain with spheres that are tangentially connected. In the Wertheim theory, such a chain molecule is described by sticky hard spheres with two independent attraction sites on the surface of each sphere. The OZ-like equation for this associating fluid is analytically solved using the polymer-PY closure and by imposing a single bonding condition. By equating the mean chain length of this associating hard sphere fluid to the fixed length of the hard-sphere chains used in simulation, we find that the correlation functions for the chain fluids are accurately predicted. From the Wertheim theory we also obtain predictions for the overall correlation functions that include intramolecular correlations. In addition, the results for the average intermolecular correlation functions from the Wertheim theory and from the Chiew theory are compared with simulation results, and the differences between these theories are discussed

  11. Correlations between technical skills and behavioral skills in simulated neonatal resuscitations.

    Science.gov (United States)

    Sawyer, T; Leonard, D; Sierocka-Castaneda, A; Chan, D; Thompson, M

    2014-10-01

    Neonatal resuscitation requires both technical and behavioral skills. Key behavioral skills in neonatal resuscitation have been identified by the Neonatal Resuscitation Program. Correlations and interactions between technical skills and behavioral skills in neonatal resuscitation were investigated. Behavioral skills were evaluated via blinded video review of 45 simulated neonatal resuscitations using a validated assessment tool. These were statistically correlated with previously obtained technical skill performance data. Technical skills and behavioral skills were strongly correlated (ρ=0.48; P=0.001). The strongest correlations were seen in distribution of workload (ρ=0.60; P=0.01), utilization of information (ρ=0.55; P=0.03) and utilization of resources (ρ=0.61; P=0.01). Teams with superior behavioral skills also demonstrated superior technical skills, and vice versa. Technical and behavioral skills were highly correlated during simulated neonatal resuscitations. Individual behavioral skill correlations are likely dependent on both intrinsic and extrinsic factors.

  12. Degeneracy and long-range correlation: A simulation study

    Directory of Open Access Journals (Sweden)

    Marmelat Vivien

    2011-12-01

    We present in this paper a simulation study that aimed at evidencing a causal relationship between degeneracy and long-range correlations. Long-range correlations represent a very specific form of fluctuations that has been evidenced in the outcome time series produced by a number of natural systems. Long-range correlations are thought to signal the complexity, adaptability and flexibility of the system. Degeneracy is defined as the ability of elements that are structurally different to perform the same function, and is presented as a key feature for explaining the robustness of complex systems. We propose a model able to generate long-range correlated series that includes a parameter accounting for degeneracy. Results show that a decrease in degeneracy tends to reduce the strength of long-range correlation in the series produced by the model.

  13. Pair Correlation Function Integrals

    DEFF Research Database (Denmark)

    Wedberg, Nils Hejle Rasmus Ingemar; O'Connell, John P.; Peters, Günther H.J.

    2011-01-01

    We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010); Fluid Phase Equilib. 302, 32 (2011)], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report ...

  14. Numerical method for IR background and clutter simulation

    Science.gov (United States)

    Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio

    1997-06-01

    The paper describes a fast and accurate algorithm for IR background noise and clutter generation for application in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude obeys a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and also excellent fidelity to reality, as appears from a comparison with images from IR sensors. The proposed method shows advantages with respect to methods based on the filtering of white noise in the time or frequency domain, as it requires a limited number of computations; furthermore, it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and by means of growing rules the process is extended to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
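
    The model itself (Gaussian amplitudes, exponential spatial correlation) is easy to state in code. The sketch below uses the frequency-domain filtering route that the paper compares against, since it is the most compact way to generate a reference field; the paper's growing-reticule algorithm is not reproduced:

      import numpy as np

      def gaussian_field_exp_corr(n, corr_len, seed=None):
          """n x n zero-mean, unit-variance Gaussian field whose autocorrelation is
          approximately exp(-r/corr_len): shape white noise by the square root of
          the FFT of the (wrap-around) exponential kernel."""
          rng = np.random.default_rng(seed)
          ix = np.minimum(np.arange(n), n - np.arange(n))      # wrapped lattice distances
          r = np.hypot(*np.meshgrid(ix, ix, indexing="ij"))
          spectrum = np.maximum(np.fft.fft2(np.exp(-r / corr_len)).real, 0.0)  # clip tiny negatives
          white = rng.standard_normal((n, n))
          return np.fft.ifft2(np.sqrt(spectrum) * np.fft.fft2(white)).real

      background = gaussian_field_exp_corr(256, corr_len=10.0, seed=3)
      print(background.std())                                  # close to 1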

  15. A new method of spatio-temporal topographic mapping by correlation coefficient of K-means cluster.

    Science.gov (United States)

    Li, Ling; Yao, Dezhong

    2007-01-01

    It would be of the utmost interest to map correlated sources in the working human brain by Event-Related Potentials (ERPs). This work develops a new method to map correlated neural sources based on the time courses of the scalp ERP waveforms. The ERP data are classified first by k-means cluster analysis, and then the Correlation Coefficients (CC) between the original data of each electrode channel and the time course of each cluster centroid are calculated and utilized as the mapping variable on the scalp surface. With a normalized 4-concentric-sphere head model with radius 1, the performance of the method is evaluated on simulated data. The CC between the four simulated sources (s1-s4) and the estimated cluster centroids (c1-c4), and the distances (Ds) between the scalp projection points of s1-s4 and those of c1-c4, are utilized as the evaluation indexes. Applied to four sources with two of them partially correlated (with maximum mutual CC = 0.4892), the CC (Ds) between s1-s4 and c1-c4 are larger (smaller) than 0.893 (0.108) for noise levels NSR ... clusters located at the left and right occipital and frontal areas. The estimated vectors of the contra-occipital area demonstrate that attention to the stimulus location produces increased amplitude of the P1 and N1 components over the contra-occipital scalp. The estimated vector in the frontal area displays two large processing negativity waves around 100 ms and 250 ms when subjects are attentive, and there is a small negative wave around 140 ms and a P300 when subjects are unattentive. The results on simulations and real Visual Evoked Potentials (VEPs) data demonstrate the validity of the method in mapping correlated sources. This method may be an objective, heuristic and important tool for studying the properties of cerebral neural networks in cognitive and clinical neurosciences.
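
    A schematic reading of the pipeline, with invented array shapes, is: cluster the channel waveforms with k-means, then correlate every channel with every centroid time course and draw each column of the result as a scalp map:

      import numpy as np
      from sklearn.cluster import KMeans

      def cc_topography(erp, n_clusters=4, seed=0):
          """erp: (n_channels, n_samples) scalp waveforms. Returns an
          (n_channels, n_clusters) array of Pearson correlation coefficients
          between each channel and each k-means cluster centroid."""
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(erp)
          cc = np.empty((erp.shape[0], n_clusters))
          for j, centroid in enumerate(km.cluster_centers_):
              for i, channel in enumerate(erp):
                  cc[i, j] = np.corrcoef(channel, centroid)[0, 1]
          return cc

      rng = np.random.default_rng(4)
      fake_erp = rng.standard_normal((32, 500))   # stand-in for 32-channel ERP data
      print(cc_topography(fake_erp).shape)        # (32, 4): one scalp map per cluster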

  16. Simulating quantum correlations as a distributed sampling problem

    International Nuclear Information System (INIS)

    Degorre, Julien; Laplante, Sophie; Roland, Jeremie

    2005-01-01

    It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and to extend them to positive-operator-valued measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states.

  17. Quantum control with NMR methods: Application to quantum simulations

    International Nuclear Information System (INIS)

    Negrevergne, Camille

    2002-01-01

    Manipulating information according to quantum laws allows improvements in the efficiency with which we treat certain problems. Liquid-state Nuclear Magnetic Resonance methods allow us to initialize, manipulate and read the quantum state of a system of coupled spins. These methods have been used to realize a small experimental Quantum Information Processor (QIP) able to process information through around a hundred elementary operations. One of the main themes of this work was to design, optimize and validate reliable RF-pulse sequences used to 'program' the QIP. Such techniques have been used to run a quantum simulation algorithm for fermionic systems. Experimental results have been obtained on the determination of eigenenergies and correlation functions for a toy problem consisting of fermions on a lattice, providing an experimental proof of principle for such quantum simulations. (author) [fr

  18. Quantifying the nonlocality of Greenberger-Horne-Zeilinger quantum correlations by a bounded communication simulation protocol.

    Science.gov (United States)

    Branciard, Cyril; Gisin, Nicolas

    2011-07-08

    The simulation of quantum correlations with finite nonlocal resources, such as classical communication, gives a natural way to quantify their nonlocality. While multipartite nonlocal correlations appear to be useful resources, very little is known about how to simulate multipartite quantum correlations. We present a protocol that reproduces tripartite Greenberger-Horne-Zeilinger correlations with bounded communication: 3 bits in total turn out to be sufficient to simulate all equatorial von Neumann measurements on the tripartite Greenberger-Horne-Zeilinger state.

  19. Numerical simulations of topological and correlated quantum matter

    Energy Technology Data Exchange (ETDEWEB)

    Assaad, Fakher F. [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik

    2016-11-01

    The complexity of the solid state does not allow us to carry out simulations of correlated materials without adopting approximation schemes. In this project we are tackling this daunting task with complementary techniques. On the one hand, one can start with density functional theory in the local density approximation and then add dynamical local interactions using the so-called dynamical mean-field approximation. This approach has the merit of being material dependent, in the sense that it is possible to include the specific chemical constituents of the material under investigation. Progress in this domain is described below. Another avenue is to concentrate on phenomena occurring in a class of materials. Here, the strategy is to define models which one can simulate in polynomial time on supercomputing architectures, and which reproduce the phenomena under investigation. This route has been remarkably successful, and we are now in a position to provide controlled model calculations which can cope with antiferromagnetic fluctuations in metals, or nematic instabilities of Fermi liquids. Both phenomena are crucial for our understanding of high-temperature superconductivity in the cuprates and the pnictides. Access to the LRZ supercomputing center was imperative during the current grant period for carrying out the relevant simulations on a wide range of topics in correlated electrons. In all cases, access to supercomputing facilities allows simulations on larger and larger system sizes, so as to be able to extrapolate to the thermodynamic limit relevant for the understanding of experiments and collective phenomena.

  20. Feasibility of the correlation curves method in calorimeters of different types

    OpenAIRE

    Grushevskaya, E. A.; Lebedev, I. A.; Fedosimova, A. I.

    2014-01-01

    The simulation of the development of cascade processes in calorimeters of different types, for the implementation of energy measurement by the correlation curves method, is carried out. A heterogeneous calorimeter has significant transient effects, associated with the difference of the critical energy between the absorber and the detector. The best option is a mixed calorimeter, which has a target block, leading to rapid development of the cascade, and a homogeneous measuring unit. Uncertainties of e...

  1. An efficient sensitivity analysis method for modified geometry of Macpherson suspension based on Pearson correlation coefficient

    Science.gov (United States)

    Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh

    2017-06-01

    The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices in terms of the Pearson correlation, which determine the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used for validating both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as the sprung mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed in this method. It is shown that the Pearson correlation coefficient provides an efficient method for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
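
    The sensitivity screening itself reduces to ranking inputs by the magnitude of their Pearson correlation with an output of interest. A sketch with an invented response function standing in for the vehicle model:

      import numpy as np

      rng = np.random.default_rng(5)
      n = 5_000
      # illustrative design variables: spring rate, damper rate, arm length (made-up ranges)
      X = rng.uniform([20.0, 1.0, 0.30], [40.0, 3.0, 0.40], size=(n, 3))

      def sprung_mass_accel(x):
          """Toy stand-in for the multibody simulation output (assumption)."""
          k, c, L = x.T
          return 0.8 * k - 5.0 * c + 12.0 * L + rng.normal(0.0, 0.5, len(x))

      y = sprung_mass_accel(X)
      r = [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
      for name, rj in zip(["spring rate", "damper rate", "arm length"], r):
          print(f"{name}: r = {rj:+.3f}")          # rank members by |r|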

  2. Universal Generating Function Based Probabilistic Production Simulation Approach Considering Wind Speed Correlation

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-11-01

    Due to the volatile and correlated nature of wind speed, a high share of wind power penetration poses challenges to power system production simulation. Existing power system probabilistic production simulation approaches fall short of considering the time-varying characteristics of wind power and load, as well as the correlation between wind speeds at the same time, which brings about problems in planning and analysis for power systems with high wind power penetration. Based on the universal generating function (UGF), this paper proposes a novel probabilistic production simulation approach considering wind speed correlation. UGF is utilized to develop chronological models of wind power that characterize wind speed correlation, as well as chronological models of conventional generation sources and load. The supply and demand are matched chronologically to obtain not only generation schedules but also reliability indices, both at each simulation interval and over the whole period. The proposed approach has been tested on the improved IEEE-RTS 79 test system and is compared with the Monte Carlo approach and the sequence operation theory approach. The results verify the proposed approach, which has the merits of computational simplicity and accuracy.

  3. A three-dimensional correlation method for registration of medical images in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Georgiou, Michalakis; Sfakianakis, George N [Department of Radiology, University of Miami, Jackson Memorial Hospital, Miami, FL 33136 (United States); Nagel, Joachim H [Institute of Biomedical Engineering, University of Stuttgart, Stuttgart 70174 (Germany)

    1999-12-31

    The availability of methods to register multi-modality images in order to 'fuse' them and correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed, but they remain limited to specific clinical applications. Assuming rigid body transformation, two images can be registered if their differences are calculated in terms of translation, rotation and scaling. This paper describes the development and testing of a new correlation-based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images are made translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, spherical coordinate transformation is performed and the three-dimensional rotation is computed using a novel approach referred to as "Polar Shells". The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors) 6 refs., 3 figs.

  4. A three-dimensional correlation method for registration of medical images in radiology

    International Nuclear Information System (INIS)

    Georgiou, Michalakis; Sfakianakis, George N.; Nagel, Joachim H.

    1998-01-01

    The availability of methods to register multi-modality images in order to 'fuse' them and correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed, but they remain limited to specific clinical applications. Assuming rigid body transformation, two images can be registered if their differences are calculated in terms of translation, rotation and scaling. This paper describes the development and testing of a new correlation-based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images are made translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, spherical coordinate transformation is performed and the three-dimensional rotation is computed using a novel approach referred to as "Polar Shells". The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors)

  5. Partial correlation analysis method in ultrarelativistic heavy-ion collisions

    Science.gov (United States)

    Olszewski, Adam; Broniowski, Wojciech

    2017-11-01

    We argue that statistical data analysis of two-particle longitudinal correlations in ultrarelativistic heavy-ion collisions may be efficiently carried out with the technique of partial covariance. In this method, the spurious event-by-event fluctuations due to imprecise centrality determination are eliminated via projecting out the component of the covariance influenced by the centrality fluctuations. We bring up the relationship of the partial covariance to the conditional covariance. Importantly, in the superposition approach, where hadrons are produced independently from a collection of sources, the framework allows us to impose centrality constraints on the number of sources rather than hadrons, thereby unfolding the trivial fluctuations from statistical hadronization and focusing better on the initial-state physics. We show, using simulated data from hydrodynamics followed with statistical hadronization, that the technique is practical and very simple to use, giving insight into the correlations generated in the initial stage. We also discuss the issues related to separation of the short- and long-range components of the correlation functions and show that in our example the short-range component from the resonance decays is largely reduced by considering pions of the same sign. We demonstrate the method explicitly on the cases where centrality is determined with a single central control bin or with two peripheral control bins.
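
    The partial covariance used here has a standard closed form: for observables A and B and a control variable C, cov(A,B|C) = cov(A,B) - cov(A,C) cov(C,C)^{-1} cov(C,B). A small NumPy sketch under illustrative assumptions (Gaussian toy events, one scalar control standing in for the centrality bin):

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy events: a fluctuating "centrality" c leaks into both observables.
        n_events = 100_000
        c = rng.normal(size=n_events)              # control variable (centrality)
        a = 0.8 * c + rng.normal(size=n_events)    # multiplicity in bin A
        b = 0.8 * c + rng.normal(size=n_events)    # multiplicity in bin B

        cov = np.cov(np.vstack([a, b, c]))         # 3x3 sample covariance matrix
        # Partial covariance: project out the component driven by c.
        partial_ab = cov[0, 1] - cov[0, 2] * cov[1, 2] / cov[2, 2]

        print(cov[0, 1], partial_ab)   # raw covariance ~0.64, partial ~0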

  6. Methods for simulating turbulent phase screen

    International Nuclear Information System (INIS)

    Zhang Jianzhu; Zhang Feizhou; Wu Yi

    2012-01-01

    Several methods for simulating turbulent phase screens are summarized, and their characteristics are analyzed by calculating the phase structure function, decomposing phase screens into Zernike polynomials, and simulating laser propagation in the atmosphere. The analysis shows that phase screens simulated by the FFT method represent the turbulent high-frequency components well but contain little of the low-frequency components. Screens simulated by the Zernike method represent the low-frequency components well but not enough of the high-frequency components; the high-frequency content improves as the order of the Zernike polynomial increases, but it lies mainly in the edge area. Compared with these two methods, the fractal method is a better way to simulate turbulent phase screens. Judged by the radius of the focal spot and the variance of the focal-spot jitter, all the methods except the fractal method have limitations. Combining the FFT and Zernike methods, or combining the FFT method with self-similar theory, is an effective and appropriate way to simulate turbulent phase screens; in general, the fractal method is probably the best. (authors)
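
    For reference, the FFT (spectral) method mentioned above amounts to filtering complex white noise with the square root of a turbulence power spectrum and inverse transforming. A schematic NumPy sketch with a Kolmogorov-type spectrum; the normalization and Fried parameter are illustrative assumptions, not the authors' exact implementation:

        import numpy as np

        def fft_phase_screen(n=256, dx=0.01, r0=0.1, seed=0):
            """One Kolmogorov-type phase screen via spectral (FFT) filtering.

            n: grid points per side, dx: grid spacing [m], r0: Fried parameter [m].
            Scaling is schematic; low frequencies are under-represented, which is
            exactly the FFT-method weakness the abstract describes.
            """
            rng = np.random.default_rng(seed)
            fx = np.fft.fftfreq(n, d=dx)
            fxx, fyy = np.meshgrid(fx, fx)
            f = np.hypot(fxx, fyy)
            f[0, 0] = np.inf                      # suppress the undefined DC term
            psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
            noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            screen = np.fft.ifft2(noise * np.sqrt(psd)) * n / dx  # schematic scale
            return np.real(screen)

        phi = fft_phase_screen()
        print(phi.shape, phi.std())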

  7. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, there have been several new methods proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods considered were the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal level, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
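
    The Fisher Z-transformation underlying the JZ method is the variance-stabilizing map z = atanh(r): the interval is built on the z scale and mapped back with tanh. A minimal sketch for a plain correlation coefficient (the paper applies the transform to jackknifed CCC estimates, which is not reproduced here):

        import numpy as np

        def fisher_z_ci(r, n, alpha=0.05):
            """Approximate CI for a correlation coefficient via Fisher's Z."""
            z = np.arctanh(r)                 # variance-stabilizing transform
            se = 1.0 / np.sqrt(n - 3)         # asymptotic standard error of z
            crit = 1.959963984540054          # ~97.5% standard normal quantile
            lo, hi = z - crit * se, z + crit * se
            return np.tanh(lo), np.tanh(hi)   # back-transform into [-1, 1]

        print(fisher_z_ci(r=0.85, n=40))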

  8. Fast methods for spatially correlated multilevel functional data

    KAUST Repository

    Staicu, A.-M.

    2010-01-19

    We propose a new methodological framework for the analysis of hierarchical functional data when the functions at the lowest level of the hierarchy are correlated. For small data sets, our methodology leads to a computational algorithm that is orders of magnitude more efficient than its closest competitor (seconds versus hours). For large data sets, our algorithm remains fast and has no current competitors. Thus, in contrast to published methods, we can now conduct routine simulations, leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where the objects of inference are functions or images that remain dependent even after conditioning on the subject on which they are measured. Supplementary materials are available at Biostatistics online.

  9. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    Science.gov (United States)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.

  10. Correlations between Clinical Judgement and Learning Style Preferences of Nursing Students in the Simulation Room

    Science.gov (United States)

    Hallin, Karin; Häggström, Marie; Bäckström, Britt; Kristiansen, Lisbeth Porskrog

    2016-01-01

    Background: Health care educators account for variables affecting patient safety and are responsible for developing the highly complex process of education planning. Clinical judgement is a multidimensional process, which may be affected by learning styles. The aim was to explore three specific hypotheses to test correlations between nursing students’ team achievements in clinical judgement and emotional, sociological and physiological learning style preferences. Methods: A descriptive cross-sectional study was conducted with Swedish university nursing students in 2012-2013. Convenience sampling was used with 60 teams with 173 nursing students in the final semester of a three-year Bachelor of Science in nursing programme. Data collection included questionnaires of personal characteristics, learning style preferences, determined by the Dunn and Dunn Productivity Environmental Preference Survey, and videotaped complex nursing simulation scenarios. The scenarios were assessed with the Lasater Clinical Judgement Rubric, and non-parametric analyses were performed. Results: Three significant correlations were found between the team achievements and the students’ learning style preferences: significant negative correlation with ‘Structure’ and ‘Kinesthetic’ at the individual level, and positive correlation with the ‘Tactile’ variable. No significant correlations with students’ ‘Motivation’, ‘Persistence’, ‘Wish to learn alone’ and ‘Wish for an authoritative person present’ were seen. Discussion and Conclusion: There were multiple complex interactions between the tested learning style preferences and the team achievements of clinical judgement in the simulation room, which provides important information for nurses in training. Several factors may have influenced the results that should be acknowledged when designing further research. We suggest conducting mixed methods to determine further relationships between team achievements, learning style preferences, cognitive learning outcomes and group processes.

  11. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    Science.gov (United States)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

    The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member in each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
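
    As an illustration of the weighted-averaging idea (not the exact WEA_RAC or WEA_Tay weight definitions used in the paper), a sketch in which member weights are taken inversely proportional to each member's RMSE over a training period:

        import numpy as np

        def weighted_ensemble(members, truth_train, members_train):
            """Weight members by inverse training-period RMSE (illustrative).

            members:        (m, t) member simulations for the projection period
            members_train:  (m, t_train) member simulations, training period
            truth_train:    (t_train,) observed/true series for training
            """
            rmse = np.sqrt(((members_train - truth_train) ** 2).mean(axis=1))
            w = 1.0 / rmse
            w /= w.sum()                    # normalize weights to sum to 1
            return w @ members              # weighted ensemble average

        rng = np.random.default_rng(2)
        truth = np.sin(np.linspace(0, 6, 120))
        noise_levels = [[0.1], [0.5], [1.0]]        # per-member error scales
        members_train = truth[:80] + rng.normal(0, noise_levels, (3, 80))
        members_proj = truth[80:] + rng.normal(0, noise_levels, (3, 40))
        print(weighted_ensemble(members_proj, truth[:80], members_train)[:5])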

  12. Magnetic Flyer Facility Correlation and UGT Simulation

    Science.gov (United States)

    1978-05-01

    assistance in this program from the following: Southern Research Institute - Material properties and C. Pears and G. Fornaro damage data Air Force ...techniques - flyer plate loading. The program was divided into two major parts, the Facility Correlation Study and the UGT Simulation Study. For the...current produces a magnetic field which then produces an accelerating force on the flyer plate, itself a current carrying part of the circuit. The flyer

  13. Measuring decision weights in recognition experiments with multiple response alternatives: comparing the correlation and multinomial-logistic-regression methods.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2012-11-01

    Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
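
    In the two-category case that both approaches generalize, the correlation method estimates each decision weight as the trial-by-trial correlation between a stimulus component and the observer's binary response; for a linear observer with Gaussian stimuli, these correlations are proportional to the true weights. A Monte Carlo sketch under those assumptions:

        import numpy as np

        rng = np.random.default_rng(3)

        n_trials, n_components = 20_000, 6
        true_weights = np.array([0.9, 0.7, 0.5, 0.3, 0.1, 0.0])

        # Simulated observer: weighted sum of stimulus components + internal noise.
        stimuli = rng.normal(size=(n_trials, n_components))
        decision_var = stimuli @ true_weights + rng.normal(scale=0.5, size=n_trials)
        responses = (decision_var > 0).astype(float)   # binary (2-category) response

        # Correlation method: correlate each component with the response.
        est = np.array([np.corrcoef(stimuli[:, i], responses)[0, 1]
                        for i in range(n_components)])
        est /= est.max()                               # relative decision weights
        print(np.round(est, 2))                        # ~ proportional to true_weights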

  14. Correlated prompt fission data in transport simulations

    Science.gov (United States)

    Talou, P.; Vogt, R.; Randrup, J.; Rising, M. E.; Pozzi, S. A.; Verbeke, J.; Andrews, M. T.; Clarke, S. D.; Jaffke, P.; Jandel, M.; Kawano, T.; Marcath, M. J.; Meierbachtol, K.; Nakae, L.; Rusev, G.; Sood, A.; Stetcu, I.; Walker, C.

    2018-01-01

    Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation.

  15. Correlated prompt fission data in transport simulations

    Energy Technology Data Exchange (ETDEWEB)

    Talou, P.; Jaffke, P.; Kawano, T.; Stetcu, I. [Los Alamos National Laboratory, Nuclear Physics Group, Theoretical Division, Los Alamos, NM (United States); Vogt, R. [Lawrence Livermore National Laboratory, Nuclear and Chemical Sciences Division, Livermore, CA (United States); University of California, Physics Department, Davis, CA (United States); Randrup, J. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Rising, M.E.; Andrews, M.T.; Sood, A. [Los Alamos National Laboratory, Monte Carlo Methods, Codes, and Applications Group, Los Alamos, NM (United States); Pozzi, S.A.; Clarke, S.D.; Marcath, M.J. [University of Michigan, Department of Nuclear Engineering and Radiological Sciences, Ann Arbor, MI (United States); Verbeke, J.; Nakae, L. [Lawrence Livermore National Laboratory, Nuclear and Chemical Sciences Division, Livermore, CA (United States); Jandel, M. [Los Alamos National Laboratory, Nuclear and Radiochemistry Group, Los Alamos, NM (United States); University of Massachusetts, Department of Physics and Applied Physics, Lowell, MA (United States); Meierbachtol, K. [Los Alamos National Laboratory, Nuclear Engineering and Nonproliferation, Los Alamos, NM (United States); Rusev, G.; Walker, C. [Los Alamos National Laboratory, Nuclear and Radiochemistry Group, Los Alamos, NM (United States)

    2018-01-15

    Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n-n, n-γ, and γ-γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX-PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation

  16. Joint image reconstruction method with correlative multi-channel prior for x-ray spectral computed tomography

    Science.gov (United States)

    Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.

    2018-06-01

    Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angularly undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and, since energy channels are mutually correlated, it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.

  17. Numerical simulation of jet breakup behavior by the lattice Boltzmann method

    International Nuclear Information System (INIS)

    Matsuo, Eiji; Koyama, Kazuya; Abe, Yutaka; Iwasawa, Yuzuru; Ebihara, Ken-ichi

    2015-01-01

    In order to understand the jet breakup behavior of the molten core material into coolant during a core disruptive accident (CDA) in a sodium-cooled fast reactor (SFR), we simulated the jet breakup due to the hydrodynamic interaction using the lattice Boltzmann method (LBM). The applicability of the LBM to the jet breakup simulation was validated by comparison with our experimental data. In addition, the influence of several dimensionless numbers such as the Weber number and the Froude number was examined using the LBM. As a result, we validated the applicability of the LBM to the jet breakup simulation, and found that the jet breakup length is independent of the Froude number and in good agreement with Epstein's correlation when the jet interface becomes unstable. (author)

  18. Turbulent flow and temperature noise simulation by a multiparticle Monte Carlo method

    International Nuclear Information System (INIS)

    Hughes, G.; Overton, R.S.

    1980-10-01

    A statistical method of simulating real-time temperature fluctuations in liquid sodium pipe flow, for potential application to the estimation of temperature signals generated by subassembly blockages in LMFBRs, is described. The method is based on the empirical characterisation of the flow by turbulence intensity and macroscale, radial velocity correlations and spectral form. These are used to produce realisations of the correlated motion of successive batches of representative 'marker particles' released at discrete time intervals into the flow. Temperature noise is generated by the radial mixing of the particles as they move downstream from an assumed mean temperature profile, where they acquire defined temperatures. By employing multi-particle batches, it is possible to perform radial heat transfer calculations, resulting in axial dissipation of the temperature noise levels. A simulated temperature-time signal is built up by recording the temperature at a given point in the flow as each batch of particles reaches the radial measurement plane. This is an advantage over conventional techniques which can usually only predict time-averaged parameters. (U.K.)

  19. Gradient Correlation Method for the Stabilization of Inversion Results of Aerosol Microphysical Properties Retrieved from Profiles of Optical Data

    Directory of Open Access Journals (Sweden)

    Kolgotin Alexei

    2016-01-01

    Correlation relationships between aerosol microphysical parameters and optical data are investigated. The results show that surface-area concentrations and extinction coefficients are linearly correlated with a correlation coefficient above 0.99 for arbitrary particle size distribution. The correlation relationships that we obtained can be used as constraints in our inversion of optical lidar data. Simulation studies demonstrate a significant stabilization of aerosol microphysical data products if we apply the gradient correlation method in our traditional regularization technique.

  20. Prediction of periodically correlated processes by wavelet transform and multivariate methods with applications to climatological data

    Science.gov (United States)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2015-05-01

    This article studies the prediction of periodically correlated process using wavelet transform and multivariate methods with applications to climatological data. Periodically correlated processes can be reformulated as multivariate stationary processes. Considering this fact, two new prediction methods are proposed. In the first method, we use stepwise regression between the principal components of the multivariate stationary process and past wavelet coefficients of the process to get a prediction. In the second method, we propose its multivariate version without principal component analysis a priori. Also, we study a generalization of the prediction methods dealing with a deterministic trend using exponential smoothing. Finally, we illustrate the performance of the proposed methods on simulated and real climatological data (ozone amounts, flows of a river, solar radiation, and sea levels) compared with the multivariate autoregressive model. The proposed methods give good results as we expected.

  1. Numerical Simulations of Microseisms in a NE Atlantic 3D Geological Model, using a Spectral-Element Method

    Science.gov (United States)

    Ying, Yingzi; Bean, Christopher J.

    2014-05-01

    Ocean-generated microseisms are faint Earth tremors associated with the interaction between ocean water waves and the solid Earth. The microseism noise recorded as low frequency ground vibrations by seismometers contains significant information about the Earth's interior and the sea states. In this work, we first aim to investigate the forward propagation of microseisms in a deep-ocean environment. We employ a 3D North-East Atlantic geological model and simulate wave propagation in a coupled fluid-solid domain, using a spectral-element method. The aim is to investigate the effects of the continental shelf on microseism wave propagation. A second goal of this work is to perform noise simulations to calculate synthetic ensemble-averaged cross-correlations of microseism noise signals with a time-reversal method. The algorithm reduces computational cost by avoiding time stacking, and obtains the cross-correlations between the designated master station and all the remaining slave stations at one time. The origins of microseisms are non-uniform, so we also test the effect of the simulated noise-source distribution on the determined cross-correlations.

  2. GEM simulation methods development

    International Nuclear Information System (INIS)

    Tikhonov, V.; Veenhof, R.

    2002-01-01

    A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Such detector characteristics as effective gas gain, transparency, charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for calculations of detector macro-characteristics such as signal response in a real detector readout structure, and spatial and time resolution of detectors have been developed and used for detector optimization. A detailed development of signal induction on readout electrodes and electronics characteristics are included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment

  3. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-05-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility of fluids to remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not perturb the accuracy of results. Additionally, I included spatial correlation to generate the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values for irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm from Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
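
    The matrix decomposition mentioned above can be illustrated with a Cholesky factor of a target spatial correlation matrix: multiplying independent normal deviates by the factor yields a correlated Gaussian field, which is then mapped to pore radii. A sketch with an assumed exponential correlation function and lognormal radii (all numbers illustrative):

        import numpy as np

        rng = np.random.default_rng(4)

        # Pore centers along a line; exponential spatial correlation (assumed form).
        x = np.linspace(0.0, 10.0, 200)
        corr_length = 2.0
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)

        # Matrix decomposition: L @ z has covariance L @ L.T = C for z ~ N(0, I).
        L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for stability
        z = L @ rng.normal(size=len(x))

        # Map the correlated Gaussian field to lognormal pore radii (illustrative).
        radii = np.exp(np.log(20e-6) + 0.3 * z)   # ~20 micron median, assumed spread
        print(radii.min(), radii.max())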

  4. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-01-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility of fluids to remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not perturb the accuracy of results. Additionally, I included spatial correlation to generate the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values for irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm from Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.

  5. Detector Simulation: Data Treatment and Analysis Methods

    CERN Document Server

    Apostolakis, J

    2011-01-01

    Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...

  6. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach
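
    For background, the entropic prior in the commonly used lattice-MEM formulation (notation varies by author; the Shannon-Jaynes form below is standard context rather than taken from the record above) is

        S[A] = \int \mathrm{d}\omega \left[ A(\omega) - m(\omega) - A(\omega) \ln \frac{A(\omega)}{m(\omega)} \right],

    where A(ω) is the spectral (mass density) function and m(ω) a default model. MEM maximizes the posterior P[A|D] ∝ exp(αS − L), with L the usual χ²/2 likelihood term; the simulated annealing proposed above is one way to search this functional for its global maximum.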

  7. Increasing the computational efficiency of digital cross correlation by a vectorization method

    Science.gov (United States)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in speedups of 6.387 and 36.044 times compared with performance values obtained from looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
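
    The quoted speedups come from replacing explicit loops with array operations; the same idea carries over to NumPy, where cross-correlation can also be vectorized through the FFT correlation theorem. A sketch comparing a lag-looped reference against an FFT-vectorized version (illustrative sizes):

        import numpy as np

        def xcorr_loop(a, b):
            """Cross-correlation at non-negative lags, looping over lags."""
            n_lags = len(a) - len(b) + 1
            return np.array([np.dot(a[k:k + len(b)], b) for k in range(n_lags)])

        def xcorr_fft(a, b):
            """Same result, vectorized via the FFT correlation theorem."""
            n = len(a) + len(b) - 1                    # zero-padded length
            fa = np.fft.rfft(a, n)
            fb = np.fft.rfft(b, n)
            full = np.fft.irfft(fa * np.conj(fb), n)   # circular correlation
            return full[: len(a) - len(b) + 1]         # keep wrap-free lags only

        rng = np.random.default_rng(5)
        a, b = rng.normal(size=4096), rng.normal(size=256)
        print(np.allclose(xcorr_loop(a, b), xcorr_fft(a, b)))  # True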

  8. Patient positioning method based on binary image correlation between two edge images for proton-beam radiation therapy

    International Nuclear Information System (INIS)

    Sawada, Akira; Yoda, Kiyoshi; Numano, Masumi; Futami, Yasuyuki; Yamashita, Haruo; Murayama, Shigeyuki; Tsugami, Hironobu

    2005-01-01

    A new technique based on normalized binary image correlation between two edge images has been proposed for positioning proton-beam radiotherapy patients. A Canny edge detector was used to extract two edge images from a reference x-ray image and a test x-ray image of a patient before positioning. While translating and rotating the edged test image, the absolute value of the normalized binary image correlation between the two edge images is iteratively maximized. Each time before rotation, dilation is applied to the edged test image to avoid a steep reduction of the image correlation. To evaluate robustness of the proposed method, a simulation has been carried out using 240 simulated edged head front-view images extracted from a reference image by varying parameters of the Canny algorithm with a given range of rotation angles and translation amounts in x and y directions. It was shown that resulting registration errors have an accuracy of one pixel in x and y directions and zero degrees in rotation, even when the number of edge pixels significantly differs between the edged reference image and the edged simulation image. Subsequently, positioning experiments using several sets of head, lung, and hip data have been performed. We have observed that the differences of translation and rotation between manual positioning and the proposed method were within one pixel in translation and one degree in rotation. From the results of the validation study, it can be concluded that a significant reduction in workload for the physicians and technicians can be achieved with this method

  9. Correlations Between Clinical Judgement and Learning Style Preferences of Nursing Students in the Simulation Room.

    Science.gov (United States)

    Hallin, Karin; Haggstrom, Marie; Backstrom, Britt; Kristiansen, Lisbeth Porskrog

    2015-09-28

    Health care educators account for variables affecting patient safety and are responsible for developing the highly complex process of education planning. Clinical judgement is a multidimensional process, which may be affected by learning styles. The aim was to explore three specific hypotheses to test correlations between nursing students' team achievements in clinical judgement and emotional, sociological and physiological learning style preferences. A descriptive cross-sectional study was conducted with Swedish university nursing students in 2012-2013. Convenience sampling was used with 60 teams with 173 nursing students in the final semester of a three-year Bachelor of Science in nursing programme. Data collection included questionnaires of personal characteristics, learning style preferences, determined by the Dunn and Dunn Productivity Environmental Preference Survey, and videotaped complex nursing simulation scenarios. The scenarios were assessed with the Lasater Clinical Judgement Rubric, and non-parametric analyses were performed. Three significant correlations were found between the team achievements and the students' learning style preferences: significant negative correlation with 'Structure' and 'Kinesthetic' at the individual level, and positive correlation with the 'Tactile' variable. No significant correlations with students' 'Motivation', 'Persistence', 'Wish to learn alone' and 'Wish for an authoritative person present' were seen. There were multiple complex interactions between the tested learning style preferences and the team achievements of clinical judgement in the simulation room, which provides important information for nurses in training. Several factors may have influenced the results that should be acknowledged when designing further research. We suggest conducting mixed methods to determine further relationships between team achievements, learning style preferences, cognitive learning outcomes and group processes.

  10. Multi-scaled normal mode analysis method for dynamics simulation of protein-membrane complexes: A case study of potassium channel gating motion correlations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xiaokun; Han, Min; Ming, Dengming, E-mail: dming@fudan.edu.cn [Department of Physiology and Biophysics, School of Life Sciences, Fudan University, Shanghai (China)

    2015-10-07

    Membrane proteins play critically important roles in many cellular activities such as ions and small molecule transportation, signal recognition, and transduction. In order to fulfill their functions, these proteins must be placed in different membrane environments and a variety of protein-lipid interactions may affect the behavior of these proteins. One of the key effects of protein-lipid interactions is their ability to change the dynamic state of membrane proteins, thus adjusting their functions. Here, we present a multi-scaled normal mode analysis (mNMA) method to study the dynamics perturbation to the membrane proteins imposed by lipid bi-layer membrane fluctuations. In mNMA, channel proteins are simulated at all-atom level while the membrane is described with a coarse-grained model. mNMA calculations clearly show that channel gating motion can tightly couple with a variety of membrane deformations, including bending and twisting. We then examined bi-channel systems where two channels were separated with different distances. From mNMA calculations, we observed both positive and negative gating correlations between two neighboring channels, and the correlation has a maximum as the channel center-to-center distance is close to 2.5 times their diameter. This distance is larger than the recently found maximum attraction distance between two proteins embedded in a membrane, which is 1.5 times the protein size, indicating that membrane fluctuations might impose collective motions among proteins within a larger area. The hybrid resolution feature in mNMA provides atomic dynamics information for key components in the system without costing much computer resources. We expect it to be a conventional simulation tool for ordinary laboratories to study the dynamics of very complicated biological assemblies. The source code is available upon request to the authors.

  11. Robust canonical correlations: A comparative study

    OpenAIRE

    Branco, JA; Croux, Christophe; Filzmoser, P; Oliveira, MR

    2005-01-01

    Several approaches for robust canonical correlation analysis will be presented and discussed. A first method is based on the definition of canonical correlation analysis as looking for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study ...

  12. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    Science.gov (United States)

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the

  13. Total Correlation Function Integrals and Isothermal Compressibilities from Molecular Simulations

    DEFF Research Database (Denmark)

    Wedberg, Rasmus; Peters, Günther H.j.; Abildskov, Jens

    2008-01-01

    Generation of thermodynamic data, here compressed liquid density and isothermal compressibility data, using molecular dynamics simulations is investigated. Five normal alkane systems are simulated at three different state points. We compare two main approaches to isothermal compressibilities: (1...... in approximately the same amount of time. This suggests that computation of total correlation function integrals is a route to isothermal compressibility, as accurate and fast as well-established benchmark techniques. A crucial step is the integration of the radial distribution function. To obtain sensible results...
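
    The total-correlation-function-integral route rests on the compressibility equation of statistical mechanics, rho*k_B*T*kappa_T = 1 + rho * Integral[(g(r) - 1) * 4*pi*r^2 dr]. A numerical sketch with a synthetic radial distribution function standing in for simulation output (the state point and g(r) are illustrative):

        import numpy as np

        k_B = 1.380649e-23      # Boltzmann constant [J/K]

        def kappa_T_from_rdf(r, g, rho, T):
            """Isothermal compressibility from the compressibility equation.

            rho: number density [1/m^3], T: temperature [K],
            r [m] and g(r): radial distribution function from simulation.
            """
            h = g - 1.0                                 # total correlation function
            integral = np.trapz(4.0 * np.pi * r**2 * h, r)
            return (1.0 + rho * integral) / (rho * k_B * T)

        # Synthetic g(r) with damped oscillations (stands in for simulation data).
        r = np.linspace(1e-12, 3e-9, 4000)
        g = 1.0 + np.exp(-r / 4e-10) * np.cos(2 * np.pi * r / 5e-10)
        print(kappa_T_from_rdf(r, g, rho=1.0e28, T=300.0))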

  14. New methods in plasma simulation

    International Nuclear Information System (INIS)

    Mason, R.J.

    1990-01-01

    The development of implicit methods of particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods have created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long time scale, high density regime associated with MHD modeling, and the short time scale, low density regime appropriate to PIC particle-in-cell techniques. This transitional regime arises in ICF coronal plasmas, in pulsed power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs

  15. Nuclear material enrichment identification method based on cross-correlation and high order spectra

    International Nuclear Information System (INIS)

    Yang Fan; Wei Biao; Feng Peng; Mi Deling; Ren Yong

    2013-01-01

    In order to enhance the sensitivity of the nuclear material identification system (NMIS) to changes in nuclear material enrichment, the principle of high-order statistical features is introduced and applied to the traditional NMIS. We present a new enrichment identification method based on cross-correlation and a high-order spectrum algorithm. By applying the identification method to the NMIS, 3D graphs bearing the character of the nuclear material are obtained and can be used as new signatures to identify the enrichment of nuclear materials. The simulation results show that the identification method can suppress background noise and electronic system noise, and improve the sensitivity to enrichment changes by an exponential order, with no modification of the system structure. (authors)

  16. Posterior Tibial Slope Angle Correlates With Peak Sagittal and Frontal Plane Knee Joint Loading During Robotic Simulations of Athletic Tasks

    Science.gov (United States)

    Bates, Nathaniel A.; Nesbitt, Rebecca J.; Shearn, Jason T.; Myer, Gregory D.; Hewett, Timothy E.

    2017-01-01

    Background Tibial slope angle is a nonmodifiable risk factor for anterior cruciate ligament (ACL) injury. However, the mechanical role of varying tibial slopes during athletic tasks has yet to be clinically quantified. Purpose To examine the influence of posterior tibial slope on knee joint loading during controlled, in vitro simulation of the knee joint articulations during athletic tasks. Study Design Descriptive laboratory study. Methods A 6-degree-of-freedom robotic manipulator positionally maneuvered cadaveric knee joints from 12 unique specimens with varying tibial slopes (range, −7.7° to 7.7°) through drop vertical jump and sidestep cutting tasks that were derived from 3-dimensional in vivo motion recordings. Internal knee joint torques and forces were recorded throughout simulation and were linearly correlated with tibial slope. Results The mean (±SD) posterior tibial slope angle was 2.2° ± 4.3° in the lateral compartment and 2.3° ± 3.3° in the medial compartment. For simulated drop vertical jumps, lateral compartment tibial slope angle expressed moderate, direct correlations with peak internally generated knee adduction (r = 0.60–0.65), flexion (r = 0.64–0.66), lateral (r = 0.57–0.69), and external rotation torques (r = 0.47–0.72) as well as inverse correlations with peak abduction (r = −0.42 to −0.61) and internal rotation torques (r = −0.39 to −0.79). Only frontal plane torques were correlated during sidestep cutting simulations. For simulated drop vertical jumps, medial compartment tibial slope angle expressed moderate, direct correlations with peak internally generated knee flexion torque (r = 0.64–0.69) and lateral knee force (r = 0.55–0.74) as well as inverse correlations with peak external torque (r = −0.34 to −0.67) and medial knee force (r = −0.58 to −0.59). These moderate correlations were also present during simulated sidestep cutting. Conclusion The investigation supported the theory that increased posterior

  17. Long range correlations, event simulation and parton percolation

    International Nuclear Information System (INIS)

    Pajares, C.

    2011-01-01

    We study the RHIC data on long range rapidity correlations, comparing their main trends with different string model simulations. Particular attention is paid to the color percolation model and its similarities with the color glass condensate. As both approaches correspond, at high density, to a similar physical picture, both give rise to a similar dependence of the main observables on energy and centrality. Color percolation explains the transition from low density to high density.

  18. Finite element formulation for a digital image correlation method

    International Nuclear Information System (INIS)

    Sun Yaofeng; Pang, John H. L.; Wong, Chee Khuen; Su Fei

    2005-01-01

    A finite element formulation for a digital image correlation method is presented that will determine directly the complete, two-dimensional displacement field during the image correlation process on digital images. The entire interested image area is discretized into finite elements that are involved in the common image correlation process by use of our algorithms. This image correlation method with finite element formulation has an advantage over subset-based image correlation methods because it satisfies the requirements of displacement continuity and derivative continuity among elements on images. Numerical studies and a real experiment are used to verify the proposed formulation. Results have shown that the image correlation with the finite element formulation is computationally efficient, accurate, and robust

  19. Collaborative simulation method with spatiotemporal synchronization process control

    Science.gov (United States)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronics system, such as high speed trains, it is relatively difficult to effectively simulate the entire system's dynamic behaviors because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling simulating a given complex mechatronics system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can certainly be used to simulate the subsystems' interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high speed train design and development processes, demonstrating that it can be applied in a wide range of engineering systems design and simulation with improved efficiency and effectiveness.

  20. Simulation of Rossi-α method with analog Monte-Carlo method

    International Nuclear Information System (INIS)

    Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang

    2012-01-01

    An analog Monte Carlo code for simulating the Rossi-α method was developed based on Geant4. The prompt neutron decay constant α of six metal uranium configurations at Oak Ridge National Laboratory was calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There is a difference between the results of the analog Monte Carlo simulation and the experiment; the reason for the difference is the gaps between uranium layers. The influence of the gaps decreases as the sub-criticality deepens. The relative difference between the results of the analog Monte Carlo simulation and the experiment changes from 19% to 0.19%. (authors)
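
    In the Rossi-α method, the histogram of time delays between correlated detection pairs follows p(t) ≈ A·exp(−αt) + B, an exponential decay from chain-correlated counts sitting on a flat accidental floor, and α is extracted by fitting that form. A curve-fit sketch on synthetic counts (all numbers illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        def rossi_alpha(t, A, alpha, B):
            """Correlated-pair time distribution: exponential decay + flat floor."""
            return A * np.exp(-alpha * t) + B

        rng = np.random.default_rng(6)
        t = np.linspace(0.0, 200e-6, 100)                  # time-bin centers [s]
        true = rossi_alpha(t, A=500.0, alpha=5.0e4, B=50.0)
        counts = rng.poisson(true).astype(float)           # synthetic histogram

        popt, _ = curve_fit(rossi_alpha, t, counts, p0=(400.0, 1.0e4, 40.0))
        print("alpha =", popt[1], "1/s")                   # ~5e4 1/s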

  1. Isotope correlations for safeguards surveillance and accountancy methods

    International Nuclear Information System (INIS)

    Persiani, P.J.; Kalimullah.

    1982-01-01

    Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential to serve as a safeguards surveillance and accountancy system. The ICT allows the verification of: fabricator's uranium and plutonium content specifications, shipper/receiver differences between fabricator output and reactor input, reactor plant inventory changes, reprocessing batch specifications and shipper/receiver differences between reactor output and reprocessing plant input. The investigation indicates that there exist predictable functional relationships (i.e. correlations) between isotopic concentrations over a range of burnup. Several cross-correlations serve to establish the initial fuel assembly-averaged compositions. The selection of the more effective correlations will depend not only on the level of reliability of the ICT for verification, but also on the capability, accuracy and difficulty of developing measurement methods. The propagation of measurement errors through the correlations has been examined to identify the sensitivity of the isotope correlations to measurement errors, and to establish criteria for measurement accuracy in the development and selection of measurement methods. 6 figures, 3 tables

  2. Fungible Correlation Matrices: A Method for Generating Nonsingular, Singular, and Improper Correlation Matrices for Monte Carlo Research.

    Science.gov (United States)

    Waller, Niels G

    2016-01-01

    For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
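
    A simplified illustration of the eigenvalue-based construction (not Waller's exact algorithm): draw a random orthogonal basis, impose a chosen eigenvalue spectrum, and rescale to unit diagonal to obtain a random positive definite correlation matrix; the rescaling perturbs the eigenvalues, which the published method controls more carefully.

        import numpy as np

        def random_pd_corr(p, min_eig=0.05, seed=7):
            """Random positive definite correlation matrix (illustrative sketch)."""
            rng = np.random.default_rng(seed)
            # Random orthogonal matrix from the QR of a Gaussian matrix.
            Q, _ = np.linalg.qr(rng.normal(size=(p, p)))
            # Eigenvalue spectrum with a floor at min_eig (approximate control only).
            lam = rng.uniform(min_eig, 1.0, size=p)
            lam *= p / lam.sum()                   # trace p, like a correlation matrix
            A = Q @ np.diag(lam) @ Q.T
            d = np.sqrt(np.diag(A))
            return A / np.outer(d, d)              # rescale to unit diagonal

        R = random_pd_corr(6)
        print(np.round(np.linalg.eigvalsh(R), 3))  # all positive -> positive definite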

  3. A simulation training evaluation method for distribution network fault based on radar chart

    Directory of Open Access Journals (Sweden)

    Yuhang Xu

    2018-01-01

    To solve the problem of automatically evaluating dispatcher fault simulation training in distribution networks, a simulation training evaluation method based on radar charts is proposed. A fault handling information matrix is established to record the dispatcher's fault handling operation sequence and operation information. The four situations of dispatcher fault isolation operations are analyzed. A fault handling anti-misoperation rule set is established to describe the operations prohibited to dispatchers. Based on the idea of artificial intelligence reasoning, the feasibility of the dispatcher's fault handling is described by a feasibility index. The relevant factors and evaluation methods are discussed from three aspects: the feasibility of the fault handling result, the correctness with respect to the anti-misoperation rules, and the conciseness of the operation process, with detailed calculation formulas given. Combining the independence of and correlation between the three evaluation angles, a comprehensive radar-chart-based evaluation method for distribution network fault simulation training is proposed. The method comprehensively reflects the dispatcher's fault handling process, evaluates it from multiple angles, and has good practical value.

  4. An improved method based on wavelet coefficient correlation to filter noise in Doppler ultrasound blood flow signals

    Science.gov (United States)

    Wan, Renzhi; Zu, Yunxiao; Shao, Lin

    2018-04-01

    The blood echo signal acquired by medical Doppler ultrasound devices always includes a vessel-wall pulsation signal. The traditional way to remove the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Nominally, this method uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, but in practice it amounts to wavelet-threshold de-noising, whose effect is not ideal. To obtain a better result, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood and wall signals, and verifies the algorithm's validity by computer simulation.
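
    For orientation, a sketch of the baseline wavelet-threshold de-noising that the abstract argues is insufficient, using the PyWavelets package on a synthetic mixture; the 40 Hz "blood" and 2 Hz "wall" components are crude stand-ins, not physiological models:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0.0, 1.0, 1024)
blood = np.sin(2 * np.pi * 40 * t)        # stand-in blood-flow component
wall = 0.5 * np.sin(2 * np.pi * 2 * t)    # stand-in low-frequency wall motion
mixed = blood + wall + 0.1 * np.random.randn(t.size)
cleaned = wavelet_denoise(mixed)
```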

  5. Simulation of granular and gas-solid flows using discrete element method

    Science.gov (United States)

    Boyalakuntla, Dhanunjay S.

    2003-10-01

    In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows by combining computational fluid dynamics (CFD) techniques with discrete element simulation (DES) methods. Many previous studies of coupled gas-solid flows have treated the solid phase as a continuum with averaged properties and the gas-solid flow as consisting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions, as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors, such as heaping, round and cusped surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. Benchmark 2D

  6. Analysis method of high-order collective-flow correlations based on the concept of correlative degree

    International Nuclear Information System (INIS)

    Zhang Weigang

    2000-01-01

    Based on the concept of correlative degree, a new method of high-order collective-flow measurement is constructed, with which azimuthal correlations, correlations of the final-state transverse-momentum magnitude, and transverse correlations can be inspected separately. Using the new method, the contributions to high-order collective-flow correlations of the azimuthal correlations of the particle distribution and of the correlations of the transverse-momentum magnitude of final-state particles are analyzed with 4π experimental events for 1.2 A GeV Ar + BaI2 collisions in the Bevalac streamer chamber. Compared with the correlations of the transverse-momentum magnitude, the azimuthal correlations of the final-state particle distribution dominate the high-order collective-flow correlations in the experimental samples. The contributions of the correlations of the transverse-momentum magnitude of final-state particles not only enhance the strength of the high-order correlations of the particle group, but also provide important information for measuring the collectivity of the collective flow within a more constrained region

  7. Correlation of FMISO simulations with pimonidazole-stained tumor xenografts: A question of O{sub 2} consumption?

    Energy Technology Data Exchange (ETDEWEB)

    Wack, L. J., E-mail: linda-jacqueline.wack@med.uni-tuebingen.de; Thorwarth, D. [Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen 72076 (Germany); Mönnich, D. [Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen 72076 (Germany); German Cancer Consortium (DKTK), Tübingen 72076 (Germany); German Cancer Research Center (DKFZ), Heidelberg 69121 (Germany); Yaromina, A. [OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden 01309, Germany and Department of Radiation Oncology (MAASTRO), GROW—School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht 6229 ET (Netherlands); Zips, D. [German Cancer Consortium (DKTK), Tübingen 72076 (Germany); German Cancer Research Center (DKFZ), Heidelberg 69121 (Germany); Department of Radiation Oncology, University Hospital Tübingen, Tübingen 72076 (Germany); and others

    2016-07-15

    Purpose: To compare a dedicated simulation model for hypoxia PET against tumor microsections stained for different parameters of the tumor microenvironment. The model can readily be adapted to a variety of conditions, such as different human head and neck squamous cell carcinoma (HNSCC) xenograft tumors. Methods: Nine different HNSCC tumor models were transplanted subcutaneously into nude mice. Tumors were excised and immunofluorescently labeled with pimonidazole, Hoechst 33342, and CD31, providing information on hypoxia, perfusion, and vessel distribution, respectively. Hoechst and CD31 images were used to generate maps of perfused blood vessels on which tissue oxygenation and the accumulation of the hypoxia tracer FMISO were mathematically simulated. The model includes a Michaelis–Menten relation to describe the oxygen consumption inside tissue. The maximum oxygen consumption rate M{sub 0} was chosen as the parameter for a tumor-specific optimization as it strongly influences tracer distribution. M{sub 0} was optimized on each tumor slice to reach optimum correlations between FMISO concentration 4 h postinjection and pimonidazole staining intensity. Results: After optimization, high pixel-based correlations up to R{sup 2} = 0.85 were found for individual tissue sections. Experimental pimonidazole images and FMISO simulations showed good visual agreement, confirming the validity of the approach. Median correlations per tumor model varied significantly (p < 0.05), with R{sup 2} ranging from 0.20 to 0.54. The optimum maximum oxygen consumption rate M{sub 0} differed significantly (p < 0.05) between tumor models, ranging from 2.4 to 5.2 mm Hg/s. Conclusions: It is feasible to simulate FMISO distributions that match the pimonidazole retention patterns observed in vivo. Good agreement was obtained for multiple tumor models by optimizing the oxygen consumption rate, M{sub 0}, whose optimum value differed significantly between tumor models.

  8. Multifractal temporally weighted detrended cross-correlation analysis to quantify power-law cross-correlation and its application to stock markets

    Science.gov (United States)

    Wei, Yun-Lan; Yu, Zu-Guo; Zou, Hai-Long; Anh, Vo

    2017-06-01

    A new method, multifractal temporally weighted detrended cross-correlation analysis (MF-TWXDFA), is proposed in this paper to investigate multifractal cross-correlations. The new method builds on multifractal temporally weighted detrended fluctuation analysis and multifractal cross-correlation analysis (MFCCA). An innovation of the method is the use of geographically weighted regression to estimate local trends in nonstationary time series. We also take into consideration the sign of the fluctuations in computing the corresponding detrended cross-covariance function. To test the performance of the MF-TWXDFA algorithm, we apply it and the MFCCA method to simulated and actual series. Numerical tests on artificially simulated series demonstrate that our method can accurately detect long-range cross-correlations for two simultaneously recorded series. To further show the utility of MF-TWXDFA, we apply it to time series from stock markets and find that the power-law cross-correlation between stock returns is significantly multifractal. A new coefficient, the MF-TWXDFA cross-correlation coefficient, is also defined to quantify the level of cross-correlation between two time series.
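
    For context, a sketch of plain (unweighted, single-moment) detrended cross-correlation analysis, the baseline that MF-TWXDFA extends with temporally weighted local regression and a multifractal spectrum of moments; this is generic textbook DCCA, not the authors' code:

```python
import numpy as np

def dcca_fluctuation(x, y, scales):
    """DCCA fluctuation function F(s) from per-window linear detrending."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # profiles
    F = []
    for s in scales:
        covs = []
        for w in range(len(X) // s):
            seg = slice(w * s, (w + 1) * s)
            t = np.arange(s)
            px = np.polyval(np.polyfit(t, X[seg], 1), t)  # local trends
            py = np.polyval(np.polyfit(t, Y[seg], 1), t)
            covs.append(np.mean((X[seg] - px) * (Y[seg] - py)))
        F.append(np.sqrt(np.abs(np.mean(covs))))
    return np.array(F)

x = np.random.randn(4096)
y = 0.6 * x + 0.8 * np.random.randn(4096)       # cross-correlated pair
scales = np.array([16, 32, 64, 128, 256])
F = dcca_fluctuation(x, y, scales)
slope = np.polyfit(np.log(scales), np.log(F), 1)[0]  # scaling exponent
```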

  9. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
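
    A hedged sketch of the core operation: matching a Gaussian-spot reference against a subaperture image with the square difference function (SDF) over a small search window. The search here is exhaustive; the fast schemes compared in the paper (TSS, TDL, CS, OS) evaluate only a subset of these candidate shifts. Spot size and noise level are arbitrary:

```python
import numpy as np

def sdf_match(image, ref, search=3):
    """Return the integer shift minimizing the square difference function."""
    h, w = ref.shape
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sub = image[search + dy:search + dy + h,
                        search + dx:search + dx + w]
            sdf = np.sum((sub - ref) ** 2)
            if sdf < best:
                best, best_shift = sdf, (dy, dx)
    return best_shift

yy, xx = np.mgrid[0:16, 0:16]
ref = np.exp(-((xx - 8.0) ** 2 + (yy - 8.0) ** 2) / 8.0)  # Gaussian spot
img = np.zeros((22, 22))
img[3:19, 4:20] = ref                    # true shift (0, 1) from center
img += 0.02 * np.random.randn(22, 22)
print(sdf_match(img, ref))               # expected: (0, 1)
```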

  10. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method.
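
    The estimate itself is a one-line calculation; a sketch with invented numbers (in the study the correction factor came from the comparison with per-patient CT-based dosimetry):

```python
def mean_heart_dose(contour_fraction_in_field, prescribed_dose_gy,
                    correction_factor):
    """Mean heart dose per the simulation X-ray method described above."""
    return contour_fraction_in_field * prescribed_dose_gy / correction_factor

# hypothetical case: 90% of the cardiac contour in-field, 35 Gy prescribed
print(mean_heart_dose(0.90, 35.0, 1.04))   # ~30.3 Gy
```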

  11. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
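
    A sketch of the central Monte Carlo ingredient: generating surrogate light curves with a power-law power spectral density (the Timmer & König recipe) and building the null distribution of peak cross-correlations between unrelated red-noise series. The paper's refinements (uneven sampling, interpolation, the Hanning window, the Neyman construction) are omitted here:

```python
import numpy as np

def power_law_lightcurve(n, dt, beta, rng):
    """Gaussian series with P(f) ~ f^-beta (Timmer & Koenig 1995)."""
    freqs = np.fft.rfftfreq(n, dt)[1:]
    amp = freqs ** (-beta / 2.0)
    spec = np.concatenate(([0.0], amp * (rng.standard_normal(freqs.size)
                                         + 1j * rng.standard_normal(freqs.size))))
    return np.fft.irfft(spec, n)

rng = np.random.default_rng(0)
peaks = []
for _ in range(1000):
    a = power_law_lightcurve(512, 1.0, 2.0, rng)
    b = power_law_lightcurve(512, 1.0, 2.0, rng)   # unrelated to a
    cc = np.correlate(a - a.mean(), b - b.mean(), "full")
    cc /= a.std() * b.std() * a.size               # rough normalization
    peaks.append(cc.max())
print("95% significance level for peak CC:", np.quantile(peaks, 0.95))
```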

  12. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
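
    The essence of the CI method is that the numerical substructure's response is a convolution of a precomputed impulse response with the measured force history, so the per-step cost in the real-time loop is fixed regardless of model size. A toy single-degree-of-freedom version (all parameters invented; the real method convolves with forces measured from the physical specimen):

```python
import numpy as np

dt, n = 0.001, 4000
t = np.arange(n) * dt
wn, zeta, m = 2.0 * np.pi * 2.0, 0.05, 1.0      # 2 Hz SDOF oscillator
wd = wn * np.sqrt(1.0 - zeta ** 2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)  # impulse response

f = np.sin(2.0 * np.pi * 1.0 * t)               # force history (stand-in)
y = np.convolve(f, h)[:n] * dt                  # Duhamel/convolution response
```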

  13. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    Directory of Open Access Journals (Sweden)

    Jingjing He

    2017-09-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model from finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0-mode wave package. The parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties in numerical modeling, geometry, material, and manufacturing between the baseline model and the target structure, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions.

  14. Simple method to estimate mean heart dose from Hodgkin lymphoma radiation therapy according to simulation X-rays.

    Science.gov (United States)

    van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P

    2015-05-01

    To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a

  15. Matrix method for acoustic levitation simulation.

    Science.gov (United States)

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.

  16. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), a recently developed statistical method that is more efficient and demonstrates better performance than existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool, allowing users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting-state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connections to other nodes. Based on partial correlation, we find that the most significant
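
    Once a precision (inverse covariance) matrix Ω is in hand, the partial correlation between nodes i and j is −Ω_ij/√(Ω_ii·Ω_jj). A sketch using scikit-learn's graphical lasso as a stand-in sparse precision estimator (CLIME itself is not in scikit-learn, and the random data is a placeholder for node time series):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

def partial_correlation(precision):
    """rho_ij = -omega_ij / sqrt(omega_ii * omega_jj), unit diagonal."""
    d = np.sqrt(np.diag(precision))
    P = -precision / np.outer(d, d)
    np.fill_diagonal(P, 1.0)
    return P

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))     # placeholder node time series
X[:, 1] += 0.6 * X[:, 0]               # plant one direct connection
model = GraphicalLassoCV().fit(X)      # sparse precision (CLIME stand-in)
P = partial_correlation(model.precision_)
print(P[0, 1])
```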

  17. Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation.

    Science.gov (United States)

    Quiroga-Lombard, Claudio S; Hass, Joachim; Durstewitz, Daniel

    2013-07-01

    Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then "slicing" spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Rather than for removing nonstationarities, the present method may also be used for detecting significant events in spike trains.

  18. 2-d Simulations of Test Methods

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm

    2004-01-01

    One of the main obstacles to the further development of self-compacting concrete is relating the fresh concrete properties to the form filling ability; simulation of the form filling ability will therefore provide a powerful tool for reaching this goal. In this paper, a continuum mechanical approach is presented by showing initial results from 2-d simulations of the empirical test methods slump flow and L-box. This method assumes a homogeneous material, which is expected to correspond to particle suspensions, e.g. concrete, when it remains stable. The simulations have been carried out using both a Newtonian and a Bingham model for characterisation of the rheological properties of the concrete. From the results, it is expected that both the slump flow and the L-box can be simulated quite accurately when the model is extended to 3-d and the concrete is characterised according to the Bingham model.

  19. Nuclear spin measurement using the angular correlation method

    International Nuclear Information System (INIS)

    Schapira, J.-P.

    The double angular correlation method is defined by a semi-classical approach (Biedenharn). The equivalent formulas in quantum mechanics are discussed for coherent and incoherent angular momentum mixing; the correlations are described by means of the density and efficiency matrices (Fano). The ambiguities in double angular correlations can sometimes be resolved (emission of particles with a high orbital momentum l) by using triple correlations between levels with well-defined spin and parity. Triple correlations are applied to the case where the direction of linear polarization of γ-rays is detected [fr

  20. Hardware in the loop simulation of arbitrary magnitude shaped correlated radar clutter

    CSIR Research Space (South Africa)

    Strydom, JJ

    2014-10-01

    This paper describes a simple process for the generation of arbitrary probability distributions of complex data with correlation from sample to sample, optimized for hardware-in-the-loop radar environment simulation. Measured radar clutter is used...

  1. Dark Energy Survey Year 1 results: cross-correlation redshifts - methods and systematics characterization

    Science.gov (United States)

    Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.

    2018-06-01

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  2. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with the serial implementation on a CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.

  3. Evaluation of full-scope simulator testing methods

    International Nuclear Information System (INIS)

    Feher, M.P.; Moray, N.; Senders, J.W.; Biron, K.

    1995-03-01

    This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs

  4. Evaluation of full-scope simulator testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Feher, M P; Moray, N; Senders, J W; Biron, K [Human Factors North Inc., Toronto, ON (Canada)

    1995-03-01

    This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs.

  5. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim

    2018-04-04

    The sum of log-normal variates is encountered in many challenging applications, such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions. These regions are of prime interest, as small probability values have to be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most of the existing approaches have focused on estimating the right tail of the sum of log-normal random variables (RVs). Here, we instead consider the left tail of the sum of correlated log-normal variates with a Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performance of the proposed estimator in comparison with existing ones.
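
    A sketch of the mean-shifting importance sampling half of the idea, without the paper's control variate or optimized shift: to estimate P(Σᵢ exp(Xᵢ) ≤ γ) with X ~ N(μ, Σ), sample from a downward-shifted Gaussian and reweight by the exact likelihood ratio. The fixed shift θ below is a crude illustrative choice:

```python
import numpy as np

def left_tail_is(mu, Sigma, gamma, theta_scale=2.0, n=100_000, seed=0):
    """P(sum_i exp(X_i) <= gamma), X ~ N(mu, Sigma), by mean-shifting IS."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    theta = theta_scale * np.ones(d)          # crude fixed downward shift
    L = np.linalg.cholesky(Sigma)
    X = rng.standard_normal((n, d)) @ L.T + mu - theta  # proposal draws
    S = np.exp(X).sum(axis=1)
    Si_theta = np.linalg.solve(Sigma, theta)
    log_w = (X - mu) @ Si_theta + 0.5 * theta @ Si_theta  # f/g ratio
    return np.mean((S <= gamma) * np.exp(log_w))

mu, Sigma = np.zeros(4), 0.3 * np.eye(4) + 0.1   # correlated components
print(left_tail_is(mu, Sigma, gamma=0.5))
```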

  6. Methods employed to speed up Cathare for simulation uses

    International Nuclear Information System (INIS)

    Agator, J.M.

    1992-01-01

    This paper describes the main methods used to speed up the French advanced thermal-hydraulic computer code CATHARE and to build a fast version, called CATHARE-SIMU, adapted to real-time calculations and a simulation environment. Since CATHARE-SIMU, like CATHARE, uses a numerical scheme based on a fully implicit Newton iterative method, and therefore a variable time step, two ways have been explored to reduce the computing time: avoiding short time steps, and so minimizing the number of iterations per time step; and reducing the computing time needed for an iteration. CATHARE-SIMU uses the same physical laws and correlations as CATHARE, with only some minor simplifications; this was considered the only way to be sure of maintaining the level of physical relevance of CATHARE. Finally, it is indicated that the validation programme of CATHARE-SIMU includes a set of 33 transient calculations, referring either to CATHARE for two-phase transients or to measurements on real plants for operational transients

  7. Comparison Of Simulation Results When Using Two Different Methods For Mold Creation In Moldflow Simulation

    Directory of Open Access Journals (Sweden)

    Kaushikbhai C. Parmar

    2017-04-01

    Simulation gives different results when different methods are used for the same problem. The Autodesk Moldflow Simulation software provides two different facilities for creating the mold for an injection molding simulation: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in the simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time, and coolant temperature, between these two methods.

  8. Combination of inquiry learning model and computer simulation to improve mastery concept and the correlation with critical thinking skills (CTS)

    Science.gov (United States)

    Nugraha, Muhamad Gina; Kaniawati, Ida; Rusdiana, Dadi; Kirana, Kartika Hajar

    2016-02-01

    The purposes of physics learning in high school include mastering physics concepts, cultivating a scientific (including critical) attitude, and developing inductive and deductive reasoning skills. According to Ennis et al., inductive and deductive reasoning skills are part of critical thinking. Based on preliminary studies, both competences are under-achieved: student learning outcomes are low, and the learning process is not conducive to cultivating critical thinking (teacher-centered learning). One learning model predicted to increase mastery of concepts and train CTS is the inquiry learning model aided by computer simulations. In this model, students are given the opportunity to be actively involved in the experiment and also get a good explanation through the computer simulations. From research with a randomized control group pretest-posttest design, we found that the inquiry learning model aided by computer simulations can significantly improve students' mastery of concepts compared with the conventional (teacher-centered) method. With the inquiry learning model aided by computer simulations, 20% of students had high CTS, 63.3% medium, and 16.7% low. CTS contributes greatly to students' mastery of concepts, with a correlation coefficient of 0.697, and contributes fairly to the enhancement of concept mastery, with a correlation coefficient of 0.603.

  9. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
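
    A compact sketch of the kind of simulation described: score one observation period of a synthetic event stream with momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR), then compare against the true proportion. Independent one-second event states are a simplification; real behavior streams have durations:

```python
import numpy as np

def score_interval_methods(event, interval):
    """Return (MTS, PIR, WIR) estimates of the event proportion."""
    n = len(event) // interval
    win = event[:n * interval].reshape(n, interval)
    mts = win[:, -1].mean()                     # sample at interval end
    pir = (win.sum(axis=1) > 0).mean()          # any occurrence scores
    wir = (win.sum(axis=1) == interval).mean()  # full occupancy scores
    return mts, pir, wir

rng = np.random.default_rng(1)
event = (rng.random(3600) < 0.25).astype(int)   # 1 = behavior occurring
print("true:", event.mean())
print("MTS, PIR, WIR:", score_interval_methods(event, interval=30))
# typical outcome: MTS ~unbiased, PIR overestimates, WIR underestimates
```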

  10. Atmosphere Re-Entry Simulation Using Direct Simulation Monte Carlo (DSMC Method

    Directory of Open Access Journals (Sweden)

    Francesco Pellicani

    2016-05-01

    Aerothermodynamic investigations of hypersonic re-entry vehicles provide fundamental information to other important disciplines, such as materials and structures, assisting the development of thermal protection systems (TPS) that are efficient and lightweight. In the transitional flow regime, where thermal and chemical equilibrium is almost absent, a dedicated numerical method for such studies has been introduced: the direct simulation Monte Carlo (DSMC) technique. The acceptance and applicability of the DSMC method have increased significantly in the 50 years since its invention, thanks to the increase in computer speed and to parallel computing. Nevertheless, further verification and validation efforts are needed to lead to its wider acceptance. In this study, the Monte Carlo simulators OpenFOAM and Sparta have been studied and benchmarked against numerical and theoretical data for inert and chemically reactive flows, and the same will be done against experimental data in the near future. The results show the validity of the data obtained with the DSMC. The best settings of the fundamental parameters used by a DSMC simulator are presented for each software package and compared with the guidelines deriving from the theory behind the Monte Carlo method. In particular, the number of particles per cell was found to be the most relevant parameter for achieving valid and optimized results. It is shown that a simulation with a mean value of one particle per cell gives sufficiently good results with very low computational resources. This achievement motivates reconsidering the correct investigation method in the transitional regime, where both direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) can work, but with different computational effort.

  11. STUDY ON SIMULATION METHOD OF AVALANCHE : FLOW ANALYSIS OF AVALANCHE USING PARTICLE METHOD

    OpenAIRE

    塩澤, 孝哉

    2015-01-01

    In this paper, modeling for the simulation of avalanches by a particle method is discussed. There are two kinds of snow avalanches: the surface avalanche, which shows a smoke-like flow, and the total-layer avalanche, which flows like a Bingham fluid. In the simulation of the surface avalanche, a particle method incorporating a rotation resistance model is used. A particle method for Bingham fluid is used in the simulation of the total-layer avalanche. At t...

  12. Quantum Monte Carlo methods and strongly correlated electrons on honeycomb structures

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Thomas C.

    2010-12-16

    In this thesis we apply recently developed, as well as sophisticated quantum Monte Carlo methods to numerically investigate models of strongly correlated electron systems on honeycomb structures. The latter are of particular interest owing to their unique properties when simulating electrons on them, like the relativistic dispersion, strong quantum fluctuations and their resistance against instabilities. This work covers several projects including the advancement of the weak-coupling continuous time quantum Monte Carlo and its application to zero temperature and phonons, quantum phase transitions of valence bond solids in spin-1/2 Heisenberg systems using projector quantum Monte Carlo in the valence bond basis, and the magnetic field induced transition to a canted antiferromagnet of the Hubbard model on the honeycomb lattice. The emphasis lies on two projects investigating the phase diagram of the SU(2) and the SU(N)-symmetric Hubbard model on the hexagonal lattice. At sufficiently low temperatures, condensed-matter systems tend to develop order. An exception are quantum spin-liquids, where fluctuations prevent a transition to an ordered state down to the lowest temperatures. Previously elusive in experimentally relevant microscopic two-dimensional models, we show by means of large-scale quantum Monte Carlo simulations of the SU(2) Hubbard model on the honeycomb lattice, that a quantum spin-liquid emerges between the state described by massless Dirac fermions and an antiferromagnetically ordered Mott insulator. This unexpected quantum-disordered state is found to be a short-range resonating valence bond liquid, akin to the one proposed for high temperature superconductors. Inspired by the rich phase diagrams of SU(N) models we study the SU(N)-symmetric Hubbard Heisenberg quantum antiferromagnet on the honeycomb lattice to investigate the reliability of 1/N corrections to large-N results by means of numerically exact QMC simulations. We study the melting of phases

  13. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua

    2012-01-01

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformations can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  14. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang

    2012-03-16

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformations can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  15. Creation and Delphi-method refinement of pediatric disaster triage simulations.

    Science.gov (United States)

    Cicero, Mark X; Brown, Linda; Overly, Frank; Yarzebski, Jorge; Meckler, Garth; Fuchs, Susan; Tomassoni, Anthony; Aghababian, Richard; Chung, Sarita; Garrett, Andrew; Fagbuyi, Daniel; Adelgais, Kathleen; Goldman, Ran; Parker, James; Auerbach, Marc; Riera, Antonio; Cone, David; Baum, Carl R

    2014-01-01

    There is a need for rigorously designed pediatric disaster triage (PDT) training simulations for paramedics. First, we sought to design three multiple-patient incidents for EMS provider training simulations. Our second objective was to determine the appropriate interventions and triage level for each victim in each of the simulations and to develop evaluation instruments for each simulation. The final objective was to ensure that each simulation and evaluation tool was free of bias toward any specific PDT strategy. We created mixed-methods disaster simulation scenarios with pediatric victims: a school shooting, a school bus crash, and a multiple-victim house fire. Standardized patients, high-fidelity manikins, and low-fidelity manikins were used to portray the victims. Each simulation had 10 victims with similar acuity of injuries. Examples include children with special health-care needs, gunshot wounds, and smoke inhalation. Checklist-based evaluation tools and behaviorally anchored global assessments of function were created for each simulation. Eight physicians and paramedics from areas with differing PDT strategies were recruited as subject matter experts (SMEs) for a modified Delphi iterative critique of the simulations and evaluation tools. The modified Delphi was managed with an online survey tool. The SMEs provided an expected triage category for each patient. The target for modified Delphi consensus was ≥85%. Using Likert scales and free text, the SMEs assessed the validity of the simulations, including instances of bias toward a specific PDT strategy, the clarity of the learning objectives, and the correlation of the evaluation tools to the learning objectives and scenarios. After two rounds of the modified Delphi, consensus for expected triage level was >85% for 28 of 30 victims, with the remaining two achieving >85% consensus after three Delphi iterations. To achieve consensus, we amended 11 instances of bias toward a specific PDT strategy and corrected 10

  16. Correlations between the simulated military tasks performance and physical fitness tests at high altitude

    Directory of Open Access Journals (Sweden)

    Eduardo Borba Neves

    2017-11-01

    The aim of this study was to investigate the correlations between simulated military task performance and physical fitness tests at high altitude. This research is part of a project to modernize the physical fitness test of the Colombian Army. Data collection was performed at the 13th Battalion of Instruction and Training, located 30 km south of Bogota D.C. at 3100 m above sea level, with temperatures ranging from 1ºC to 23ºC during the study period. The sample was composed of 60 volunteers from three different platoons. The volunteers started the data collection protocol after 2 weeks of acclimatization at this altitude. The main results were the identification of a high positive correlation between the 3 assault walls in succession and simulated military task performance (r = 0.764, p<0.001), and a moderate negative correlation between pull-ups and simulated military task performance (r = -0.535, p<0.001). The 20 consecutive crossings of the 3 assault walls in succession can be recommended as a good way to estimate performance in operational tasks involving assault walls, wire networks, military climbing nets, and Tarzan jumps, among others, at high altitude.

  17. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology, involving the detection of projection views of biological macromolecules in electron micrographs, are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in the detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.

  18. Simulation of a directed random-walk model: the effect of pseudo-random-number correlations

    OpenAIRE

    Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.

    1996-01-01

    We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.

  19. Solving Langevin equation with the stochastic algebraically correlated noise

    International Nuclear Information System (INIS)

    Ploszajczak, M.; Srokowski, T.

    1996-01-01

    Long-time tails in the velocity and force autocorrelation functions have recently been found in molecular dynamics simulations of peripheral collisions of ions. Simulating those slowly decaying correlations in stochastic transport theory requires the development of new methods for generating a stochastic force with arbitrarily long correlation times. A Markovian process and the multidimensional Kangaroo process, which permit the description of various algebraically correlated stochastic processes, are proposed. (author)

  20. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts in the design of structures, but the problems of structural engineering are better understood through them. Some of the main methods for the estimation of the probability of failure are exact analytical integration, numerical integration, approximate analytical methods, and simulation methods. Monte Carlo simulation is used in this paper because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer-aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier, and the comparison with the partial factor method of the Eurocodes, are demonstrated in this paper.
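
    A minimal sketch of the Monte Carlo estimate: sample capacity R and load effect E, count failures of the limit state g = R − E < 0, and convert the failure probability to the reliability index via β = −Φ⁻¹(p_f). Distributions and moments below are invented for illustration, not taken from the paper's bridge pier:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 1_000_000
R = rng.normal(350.0, 35.0, n)     # capacity (hypothetical moments)
E = rng.normal(200.0, 40.0, n)     # load effect (hypothetical moments)
pf = np.mean(R - E < 0.0)          # estimated probability of failure
beta = -norm.ppf(pf)               # index of reliability
print(f"pf = {pf:.2e}, beta = {beta:.2f}")
```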

  1. Multisite stochastic simulation of daily precipitation from copula modeling with a gamma marginal distribution

    Science.gov (United States)

    Lee, Taesam

    2018-05-01

    Multisite stochastic simulations of daily precipitation have been widely employed in hydrologic analyses for climate change assessment and agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of the copula modeling. The results indicate that there is a significant underestimation of the correlation in the simulated data compared to the observed data. Therefore, we proposed an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations, using the full relationship between the correlation of the observed data and that of the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, it was not reliable in application. Therefore, we further improved a simulation-based method (SBM) that was developed to model multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well, yielding cross-correlations around 0.2 higher than the direct method and around 0.1 higher than the indirect method. The three models were applied to stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlation. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
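
    The underestimation is easy to reproduce: sampling a Gaussian copula with correlation rho and mapping through skewed gamma marginals yields a Pearson correlation in the gamma domain noticeably below rho. A minimal two-site sketch (shape, scale and rho are illustrative, not the paper's estimates):

        import numpy as np
        from scipy.stats import norm, gamma

        rng = np.random.default_rng(2)
        rho = 0.8
        L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
        z = rng.standard_normal((100_000, 2)) @ L.T    # correlated normals
        u = norm.cdf(z)                                # copula (uniform) scale
        x = gamma.ppf(u, a=0.7, scale=8.0)             # gamma daily amounts
        print(np.corrcoef(z.T)[0, 1])                  # ~ rho
        print(np.corrcoef(x.T)[0, 1])                  # noticeably below rho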

  2. Molecular dynamics simulations of Gay-Berne nematic liquid crystal: Elastic properties from direct correlation functions

    International Nuclear Information System (INIS)

    Stelzer, J.; Trebin, H.R.; Longa, L.

    1994-08-01

    We report NVT and NPT molecular dynamics simulations of a Gay-Berne nematic liquid crystal using a generalization of the algorithm recently proposed by Toxvaerd [Phys. Rev. E 47, 343 (1993)]. On the basis of these simulations the Oseen-Zocher-Frank elastic constants K11, K22 and K33, as well as the surface constants K13 and K24, have been calculated within the framework of the direct correlation function approach of Lipkin et al. [J. Chem. Phys. 82, 472 (1985)]. The angular coefficients of the direct pair correlation function, which enter the final formulas, have been determined from the computer simulation data for the pair correlation function of the nematic by combining the Ornstein-Zernike relation and the Wiener-Hopf factorization scheme. The unoriented nematic approximation has been assumed when constructing the reference, isotropic state of Lipkin et al. An extensive study of the model over a wide range of temperatures, densities and pressures has provided very detailed information about the elastic behaviour of the Gay-Berne nematic. Interestingly, it is found that the results for the surface elastic constants are qualitatively different from those obtained with the help of analytical approximations for the isotropic, direct pair correlation function. For example, the values of the surface elastic constants are negative and an order of magnitude smaller than the bulk elasticity. (author). 30 refs, 9 figs

  3. Utilization of computational simulator for comparison of correlations in multiphase flow in ESP (Electrical Submersible Pumping) systems; Utilizacao de simulador computacional para a comparacao das correlacoes de escoamento multifasico em sistemas BCS

    Energy Technology Data Exchange (ETDEWEB)

    Anjos, Roselaine M. dos; Maitelli, Carla Wilza S.P.; Maitelli, Andre L. [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil); Costa, Rutacio O. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    Electrical Submersible Pumping (ESP) is an artificial lift method which can be used both onshore and offshore for the production of high liquid flow rates. Using the computational simulator for ESP systems developed by AUTOPOC/LAUT - UFRN, this work evaluated empirical correlations for calculating multiphase flow in tubing typical of artificial lift systems operating by ESP. The parameters used for evaluating the correlations are some of the dynamic variables of the system, such as the head, which indicates the lifting capacity of the system, the flow rate of fluid in the pump and the discharge pressure at the pump. Five correlations were evaluated: one considers slip between the phases but does not take flow patterns into account, while the other four consider both slip between the phases and the flow patterns. The simulation results obtained for all these correlations were compared to results from a commercial computational simulator extensively used in the oil industry. For both simulators, input values and simulation time were virtually the same. The simulator used in this work showed satisfactory performance, with no significant differences from the results obtained with the commercial simulator. (author)

  4. Isotope correlations for safeguards surveillance and accountancy methods

    International Nuclear Information System (INIS)

    Persiani, P.J.; Kalimullah.

    1983-01-01

    Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential to serve as a safeguards surveillance and accountancy system. The US/DOE/OSS Isotope Correlations for Surveillance and Accountancy Methods (ICSAM) program has been structured into three phases: (1) the analytical development of the Isotope Correlation Technique (ICT) for actual power reactor fuel cycles; (2) the development of a dedicated portable ICT computer system for in-field implementation; and (3) the experimental program for measurement of U and Pu isotopics in representative spent fuel rods of the initial 3 or 4 burnup cycles of the Commonwealth Edison Zion-1 and -2 PWR power plants. Since any particular correlation could generate different curves depending upon the type and positioning of the fuel assembly, a 3-D reactor model and 2-group cross-section depletion calculation for the first cycle of Zion-2 was performed with each fuel assembly as a depletion block. It is found that for a given PWR all assemblies with a unique combination of enrichment zone and number of burnable poison rods (BPRs) generate one coincident curve. Some correlations are found to generate a single curve for assemblies of all enrichments and numbers of BPRs. The 8 axial segments of the 3-D calculation generate one coincident curve for each correlation. For some correlations the curve for the full assembly homogenized over core height deviates from the curve for the 8 axial segments, and for other correlations it coincides with the curve for the segments. The former behavior stems primarily from the transmutation lag between the end segments and the middle segments. The experimental implication is that the isotope correlations exhibiting this behavior can be determined by dissolving a full assembly but not by dissolving only an axial segment or pellets.

  5. Some new results on correlation-preserving factor scores prediction methods

    NARCIS (Netherlands)

    Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.

    1999-01-01

    Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.

  6. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
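
    Common to all five algorithms is a Metropolis test for moving between neighbouring temperatures. As a point of reference, here is a minimal sketch of the pairwise exchange criterion used in temperature replica exchange; the temperatures and energies are made up for illustration.

        import numpy as np

        def swap_accepted(beta_i, beta_j, E_i, E_j, rng):
            # Metropolis test for exchanging two replicas' temperatures:
            # accept with probability min(1, exp((beta_i-beta_j)*(E_i-E_j))).
            delta = (beta_i - beta_j) * (E_i - E_j)
            return rng.random() < np.exp(min(0.0, delta))

        # Illustrative call for neighbouring replicas at 300 K and 310 K
        # (energies in kcal/mol are invented).
        kB = 0.0019872                   # kcal/(mol K)
        rng = np.random.default_rng(3)
        print(swap_accepted(1.0 / (kB * 300.0), 1.0 / (kB * 310.0),
                            E_i=-120.0, E_j=-118.5, rng=rng))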

  7. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  8. Fast electronic structure methods for strongly correlated molecular systems

    International Nuclear Information System (INIS)

    Head-Gordon, Martin; Beran, Gregory J O; Sodt, Alex; Jung, Yousung

    2005-01-01

    A short review is given of newly developed fast electronic structure methods that are designed to treat molecular systems with strong electron correlations, such as diradicaloid molecules, for which standard electronic structure methods such as density functional theory are inadequate. These new local correlation methods are based on coupled cluster theory within a perfect pairing active space, containing either a linear or quadratic number of pair correlation amplitudes, to yield the perfect pairing (PP) and imperfect pairing (IP) models. This reduces the scaling of the coupled cluster iterations to no worse than cubic, relative to the sixth power dependence of the usual (untruncated) coupled cluster doubles model. A second order perturbation correction, PP(2), to treat the neglected (weaker) correlations is formulated for the PP model. To ensure minimal prefactors, in addition to favorable size-scaling, highly efficient implementations of PP, IP and PP(2) have been completed, using auxiliary basis expansions. This yields speedups of almost an order of magnitude over the best alternatives using 4-center 2-electron integrals. A short discussion of the scope of accessible chemical applications is given

  9. Local Field Response Method Phenomenologically Introducing Spin Correlations

    Science.gov (United States)

    Tomaru, Tatsuya

    2018-03-01

    The local field response (LFR) method is a way of searching for the ground state in a similar manner to quantum annealing. However, the LFR method operates on a classical machine, and quantum effects are introduced through a priori information and through phenomenological means reflecting the states during the computations. The LFR method has been treated with a one-body approximation, and therefore, the effect of entanglement has not been sufficiently taken into account. In this report, spin correlations are phenomenologically introduced as one of the effects of entanglement, by which multiple tunneling at anticrossing points is taken into account. As a result, the accuracy of solutions for a 128-bit system increases by 31% compared with that without spin correlations.

  10. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    International Nuclear Information System (INIS)

    Xiang, Hao; Chen, Bin

    2015-01-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has advantages in incompressible flow simulation and simple programming. However, the crude kernel function is not accurate enough for discretizing the divergence of the shear stress tensor, owing to particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivatives, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and different filling processes are obtained in good agreement with previous results, which verifies the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (We is the Weber number, Fr is the Froude number). (paper)
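
    For concreteness, a sketch of the 2D cubic spline kernel (Monaghan's form, compact support 2h) of the kind grafted onto MPS here; this is the generic textbook kernel with its unity-integral sanity check, not code from the paper.

        import numpy as np

        def cubic_spline_kernel_2d(r, h):
            # Monaghan's cubic spline kernel in 2D, compact support 2h.
            q = np.asarray(r, dtype=float) / h
            sigma = 10.0 / (7.0 * np.pi * h * h)      # 2D normalization
            w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return sigma * w

        # Sanity check: the kernel should integrate to ~1 over the plane.
        h = 1.0
        r = np.linspace(0.0, 2.0 * h, 2001)
        dr = r[1] - r[0]
        print(np.sum(cubic_spline_kernel_2d(r, h) * 2.0 * np.pi * r) * dr)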

  11. Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds

    Science.gov (United States)

    Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail

    2011-01-01

    Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
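
    The heart of the approach is easy to sketch: in the spherical parameterization of Pinheiro and Bates, any set of angles fills a lower-triangular Cholesky factor with unit-norm rows, so the product is automatically a valid correlation matrix. A minimal illustration (the angles are arbitrary placeholders, not values produced by the cSigma scheme):

        import numpy as np

        def corr_from_angles(theta):
            # Spherical (Pinheiro-Bates) parameterization: angles in (0, pi)
            # fill a lower-triangular Cholesky factor L with unit-norm rows,
            # so C = L @ L.T is automatically a valid correlation matrix.
            n = theta.shape[0] + 1
            L = np.zeros((n, n))
            L[0, 0] = 1.0
            for i in range(1, n):
                s = 1.0
                for j in range(i):
                    L[i, j] = s * np.cos(theta[i - 1, j])
                    s *= np.sin(theta[i - 1, j])
                L[i, i] = s
            return L @ L.T

        theta = np.array([[0.7, 0.0],      # arbitrary illustrative angles,
                          [1.2, 0.4]])     # not values from the cSigma scheme
        C = corr_from_angles(theta)
        print(C)                           # symmetric with unit diagonal
        print(np.linalg.eigvalsh(C))       # all positive: positive definite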

  12. Particle-transport simulation with the Monte Carlo method

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.

    1975-01-01

    Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)
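
    The mechanics being surveyed reduce, in the simplest one-speed 1-D case, to sampling exponential free paths and making scatter-versus-absorption decisions. A small illustrative sketch with placeholder cross sections (not taken from the survey):

        import numpy as np

        rng = np.random.default_rng(4)
        sigma_t, sigma_s = 1.0, 0.6     # total / scattering XS (1/cm)
        slab, n_hist = 5.0, 100_000     # slab thickness (cm), histories
        leaked = 0
        for _ in range(n_hist):
            x, mu = 0.0, 1.0            # birth at left face, moving right
            while True:
                # Sample an exponential free path along the flight direction.
                x += mu * (-np.log(1.0 - rng.random()) / sigma_t)
                if x < 0.0 or x > slab:
                    leaked += 1
                    break
                if rng.random() < sigma_s / sigma_t:
                    mu = 2.0 * rng.random() - 1.0   # isotropic scatter
                else:
                    break                           # absorbed
        print(leaked / n_hist)          # leakage probability estimate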

  13. Status of the Correlation Process of the V-HAB Simulation with Ground Tests and ISS Telemetry Data

    Science.gov (United States)

    Ploetner, P.; Roth, C.; Zhukov, A.; Czupalla, M.; Anderson, M.; Ewert, M.

    2013-01-01

    The Virtual Habitat (V-HAB) is a dynamic Life Support System (LSS) simulation created for the investigation of future human spaceflight missions. It provides the capability to optimize LSS during early design phases. The focal point of the paper is the correlation and validation of V-HAB against ground test and flight data. In order to utilize V-HAB to design an Environmental Control and Life Support System (ECLSS) it is important to know the accuracy, strengths and weaknesses of the simulations. Therefore, simulations of real systems are essential. The modeling of the International Space Station (ISS) ECLSS, in terms of single technologies as well as an integrated system, and its correlation against ground and flight test data are described. The results of the simulations make it possible to validate the approach taken by V-HAB.

  14. Flow velocity measurement by using zero-crossing polarity cross correlation method

    International Nuclear Information System (INIS)

    Xu Chengji; Lu Jinming; Xia Hong

    1993-01-01

    Using the designed correlation metering system and a highly accurate hot-wire anemometer as a calibration device, an experimental study of the correlation method was carried out in a tunnel. Velocity measurement of gas flow by the zero-crossing polarity cross correlation method was realized, and the experimental results have been analysed.
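
    The principle is compact: clip each mean-removed sensor signal to its sign (the zero-crossing polarity), cross-correlate the two polarity trains, and read the transit time off the peak lag; the flow velocity then follows from the sensor spacing L as v = L/tau. A synthetic sketch (sampling rate, delay and noise level are made up):

        import numpy as np

        def polarity_transit_time(up, down, fs):
            # Keep only the sign of each mean-removed signal (the
            # zero-crossing polarity), cross-correlate, and read the
            # transit time off the lag of the correlation peak.
            a = np.sign(up - np.mean(up))
            b = np.sign(down - np.mean(down))
            xc = np.correlate(b, a, mode="full")
            lag = int(np.argmax(xc)) - (len(a) - 1)
            return lag / fs

        # Hypothetical demo: the downstream sensor sees the same turbulent
        # signature 12 ms after the upstream one.
        rng = np.random.default_rng(5)
        fs, delay = 10_000, 120                  # Hz; 120 samples = 12 ms
        noise = rng.standard_normal(4_000)
        up = noise[delay:]                       # upstream sees it first
        down = noise[:-delay] + 0.3 * rng.standard_normal(4_000 - delay)
        print(polarity_transit_time(up, down, fs))   # ~0.012 s; v = L / tau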

  15. Natural tracer test simulation by stochastic particle tracking method

    International Nuclear Information System (INIS)

    Ackerer, P.; Mose, R.; Semra, K.

    1990-01-01

    Stochastic particle tracking methods are well adapted to 3D transport simulations where the discretization requirements of other methods usually cannot be satisfied. They do need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric head and velocity field. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are considered as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
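
    The random-walk step itself is one line: advection plus a Brownian jump whose variance matches the dispersion coefficient. A 1-D sketch with illustrative parameters (the real code works on 3-D MHFEM velocity fields):

        import numpy as np

        rng = np.random.default_rng(6)
        v, D, dt, nstep = 0.5, 0.01, 0.1, 200    # m/d, m^2/d, d, steps
        x = np.zeros(10_000)                     # particles released at x = 0
        for _ in range(nstep):
            # advection + Brownian jump with variance 2*D*dt
            x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
        print(x.mean())                          # ~ v * t  = 10 m
        print(x.var())                           # ~ 2*D*t  = 0.4 m^2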

  16. Simulation teaching method in Engineering Optics

    Science.gov (United States)

    Lu, Qieni; Wang, Yi; Li, Hongbin

    2017-08-01

    We here introduce a pedagogical method of theoretical simulation as one major means of the teaching process of "Engineering Optics" in the course quality improvement action plan (Qc) in our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism and polarization of light; each student is evaluated and scored in light of his performance in interviews between the teacher and the student, and each student can opt to be interviewed many times until he is satisfied with his score and learning. After three years of Qc practice, a remarkable teaching and learning effect has been obtained. Such theoretical simulation experiments are a valuable teaching method for physical optics, which is highly theoretical and abstruse. This teaching methodology works well in training students in how to ask questions and how to solve problems, which can also stimulate their interest in research learning and their initiative to develop their self-confidence and sense of innovation.

  17. Correlation of etho-social and psycho-social data from "Mars-500" interplanetary simulation

    Science.gov (United States)

    Tafforin, Carole; Vinokhodova, Alla; Chekalina, Angelina; Gushin, Vadim

    2015-06-01

    Studies of social groups under isolation and confinement for the needs of space psychology have mostly been limited to questionnaires complemented with batteries of subjective tests, and they need to be correlated with video recordings for objective analyses in space ethology. The aim of the present study is to identify crewmembers' behavioral profiles for a better understanding of group dynamics during the 520-day isolation and confinement of the international crew (n=6) participating in the "Mars-500" interplanetary simulation. We propose to correlate data from the PSPA (Personal Self-Perception and Attitudes) computerized test, sociometric questionnaires and the color choices test (Luscher test) used to measure anxiety levels, with data from video analysis during group discussion (GD) and breakfast time (BT). All the procedures were implemented monthly (GD) or twice a month (BT). Firstly, we used descriptive statistics for displaying quantitative subjects' behavioral profiles, supported by a software-based solution: the Observer XT®. Secondly, we used Spearman's nonparametric correlation analysis. The results show that for each subject, the level of non-verbal behavior ("visual interactions", "object interactions", "body interaction", "personal actions", "facial expressions", and "collateral acts") is higher than the level of verbal behavior ("interpersonal communication in Russian", and "interpersonal communication in English"). From the video analyses, dynamics profiles over months differ between the crewmembers. From the correlative analyses, we found highly negative correlations between anxiety and interpersonal communications, and between the sociometric parameter "popularity in leisure environment" and anxiety level. We also found highly significant positive correlations between the sociometric parameter "popularity in working environment" and interpersonal communications and facial expressions; and between the sociometric parameter "popularity in leisure environment

  18. Research methods of simulating digital compensators and autonomous control systems

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2016-01-01

    Full Text Available A feature of the present stage of production is the need to control and regulate a large number of mutually interacting process parameters; with single-loop systems this interaction significantly degrades the quality of the transient response, resulting in significant costs of raw materials and energy and reduced product quality. Using an autonomous digital control system eliminates the correlation between technological parameters, gives the system the desired dynamic and static properties, and improves the quality of regulation. However, the complexity of configuring and implementing (modeling the compensators of autonomous systems of this type, associated with the need to perform a significant amount of complex analytic transformations, significantly limits the scope of their application. In this regard, an approach based on decomposition is proposed for the calculation and simulation (realization methods, consisting in representing the elements of the autonomous control part of the digital control system as series-parallel connections. The theoretical study is carried out in a general form for systems of any dimension. The results of computational experiments obtained during the simulation of four autonomous control systems are presented, together with a comparative analysis and conclusions on the effectiveness of each of the methods. The results obtained can be used in the development of multidimensional process control systems.

  19. Science classroom inquiry (SCI) simulations: a novel method to scaffold science learning.

    Directory of Open Access Journals (Sweden)

    Melanie E Peffer

    Full Text Available Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI simulations were designed to give students real life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students' self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study.

  20. Science classroom inquiry (SCI) simulations: a novel method to scaffold science learning.

    Science.gov (United States)

    Peffer, Melanie E; Beckler, Matthew L; Schunn, Christian; Renken, Maggie; Revak, Amanda

    2015-01-01

    Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI) simulations were designed to give students real life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students' self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study.

  1. Method of vacuum correlation functions: Results and prospects

    International Nuclear Information System (INIS)

    Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.

    2006-01-01

    Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years, in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow), are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing the drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α_s.

  2. High viscosity fluid simulation using particle-based method

    KAUST Repository

    Chang, Yuanzhang

    2011-03-01

    We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient in handling the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.

  3. Strongly Correlated Systems Theoretical Methods

    CERN Document Server

    Avella, Adolfo

    2012-01-01

    The volume presents, for the very first time, an exhaustive collection of those modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique showed to be very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in as painless a way as po...

  4. Strongly correlated systems numerical methods

    CERN Document Server

    Mancini, Ferdinando

    2013-01-01

    This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique showed to be very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in as painless a way as possi...

  5. Numerical simulation on single bubble rising behavior in liquid metal using moving particle semi-implicit method

    International Nuclear Information System (INIS)

    Zuo Juanli; Tian Wenxi; Qiu Suizheng; Chen Ronghua; Su Guanghui

    2011-01-01

    The gas-lift pump in a liquid metal cooled fast reactor (LMFR) is an innovative conceptual design to enhance the natural circulation capability of the reactor core. The two-phase flow character of gas-liquid metal significantly improves the natural circulation capacity and reactor safety. In the present basic study, the rising behavior of a single nitrogen bubble in five kinds of liquid metals (lead bismuth alloy, liquid kalium, sodium, potassium sodium alloy and lithium lead alloy) was numerically simulated using the moving particle semi-implicit (MPS) method. The whole growth process of a single nitrogen bubble in liquid metal was captured, and the bubble shape and rising speed in each liquid metal were compared. The comparison between simulation results using the MPS method and the Grace graphical correlation shows good agreement. (authors)

  6. Simulation of tunneling construction methods of the Cisumdawu toll road

    Science.gov (United States)

    Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.

    2017-11-01

    Simulation can be used as a tool for planning and analysis of a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation yields the duration of the project from duration models of each work task, based on literature review, machine productivity, and several assumptions. The simulation also yields the total cost of the project, modeled from published construction and building unit costs and the online websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, waste, and cost. The simulation put the total cost of this operation at about Rp. 900,437,004,599 and the total duration of the tunneling operation at 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.

  7. Machine learning using a higher order correlation network

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y.C.; Doolen, G.; Chen, H.H.; Sun, G.Z.; Maxwell, T.; Lee, H.Y.

    1986-01-01

    A high-order correlation tensor formalism for neural networks is described. The model can simulate autoassociative, heteroassociative, as well as multiassociative memory. For the autoassociative model, simulation results show a drastic increase in memory capacity and speed over that of the standard Hopfield-like correlation matrix methods. The possibility of using multiassociative memory for a learning universal inference network is also discussed. 9 refs., 5 figs.

  8. Correlation between different methods of intra-abdominal pressure ...

    African Journals Online (AJOL)

    This study aimed to determine the correlation between transvesical ... circumstances may arise where this method is not viable and alternative methods ...

  9. Factorization method for simulating QCD at finite density

    International Nuclear Information System (INIS)

    Nishimura, Jun

    2003-01-01

    We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density. (author)

  10. Spectral Methods in Numerical Plasma Simulation

    DEFF Research Database (Denmark)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations both in a simple, doubly periodic region and on an annulus will be shown. In the first case, the solution is expanded...

  11. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.

    Science.gov (United States)

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-06-01

    Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a gene co-expression network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes related to venous thrombosis disease, built a matrix whose entries represent the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with visual methods: Cytoscape, based on BIND, and Gene Ontology, based on molecular function. R software version 3.2 and Bioconductor were used to perform these methods. The results based on the Pearson and Spearman correlations were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some results of the correlation coefficients do not agree with the visualization; the reason may be the small amount of data.
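
    The two-step recipe - score all gene pairs, then threshold into an adjacency matrix - is easy to sketch with numpy/scipy; the expression matrix and the 0.7 threshold below are synthetic placeholders, not the study's venous thrombosis data.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(7)
        expr = rng.standard_normal((6, 40))                # 6 genes x 40 samples
        expr[1] = expr[0] + 0.3 * rng.standard_normal(40)  # plant one true edge

        pearson = np.corrcoef(expr)                        # parametric scores
        spearman = spearmanr(expr, axis=1).correlation     # rank-based scores
        adjacency = (np.abs(spearman) > 0.7) & ~np.eye(6, dtype=bool)
        print(adjacency.astype(int))                       # gene-gene graph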

  12. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to those of three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
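
    A minimal sketch of the LULHS idea under simplifying assumptions: each grid node receives a Latin-Hypercube-stratified stream of standard normals across realizations, and a Cholesky (LU) factor of the covariance imposes the spatial correlation. The exponential covariance model and all parameter values are illustrative.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(8)
        ngrid, nreal = 50, 500
        x = np.linspace(0.0, 100.0, ngrid)                   # 1-D grid (m)
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)  # corr. length 20 m
        L = np.linalg.cholesky(C)

        U = np.empty((ngrid, nreal))
        for i in range(ngrid):                # one LHS stream per node
            U[i] = (rng.permutation(nreal) + rng.random(nreal)) / nreal
        fields = L @ norm.ppf(U)              # ngrid x nreal realizations

        print(C[0, 10])                                   # target correlation
        print(np.corrcoef(fields[0], fields[10])[0, 1])   # sampled correlation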

  13. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.

    2015-01-07

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.

  14. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
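
    The flavour of the twisting idea is easy to show on the textbook case of a sum of i.i.d. Exp(lam) variables: twisting by theta < lam turns Exp(lam) into Exp(lam - theta), and each sample is reweighted by the likelihood ratio. The fixed theta below is a standard large-deviations choice, standing in for the paper's hazard-rate-based selection.

        import numpy as np

        # P(sum of n i.i.d. Exp(lam) > gam) via exponential twisting. The
        # likelihood ratio for the twisted samples is
        #   (lam / (lam - theta))^n * exp(-theta * S).
        rng = np.random.default_rng(9)
        n, lam, gam, nsim = 10, 1.0, 30.0, 100_000
        theta = lam - n / gam                 # twisted mean of S equals gam
        S = rng.exponential(1.0 / (lam - theta), size=(nsim, n)).sum(axis=1)
        lr = (lam / (lam - theta)) ** n * np.exp(-theta * S)
        print(np.mean((S > gam) * lr))        # rare-event probability estimate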

  15. Hankel Matrix Correlation Function-Based Subspace Identification Method for UAV Servo System

    Directory of Open Access Journals (Sweden)

    Minghong She

    2018-01-01

    Full Text Available For the closed-loop subspace model identification problem, we propose a zero-space projection method based on the estimation of correlation functions to fill the block Hankel matrix of the identification model, combining linear algebra with geometry. By using the same projection of the related data in the time-offset set together with LQ decomposition, the multiplication operation of the projection is achieved and an estimate of the dynamics of the unknown plant model is obtained. Consequently, we solve the problem of biased estimation that arises when open-loop subspace identification algorithms are applied to closed-loop identification. A simulation example is given to show the effectiveness of the proposed approach. Finally, the practicability of the identification algorithm is verified by a hardware test of a UAV servo system in a real environment.

  16. Method of simulating dose reduction for digital radiographic systems

    International Nuclear Information System (INIS)

    Baath, M.; Haakansson, M.; Tingberg, A.; Maansson, L. G.

    2005-01-01

    The optimisation of image quality vs. radiation dose is an important task in medical imaging. To obtain maximum validity of the optimisation, it must be based on clinical images. Images at different dose levels can then be obtained either by collecting patient images at the different dose levels under investigation - requiring additional exposures and permission from an ethics committee - or by manipulating images to simulate different dose levels. The aim of the present work was to develop a method of simulating dose reduction for digital radiographic systems. The method uses information about the detective quantum efficiency and noise power spectrum at the original and simulated dose levels to create an image containing filtered noise. When added to the original image this results in an image whose noise, in terms of frequency content, agrees with the noise present in an image collected at the simulated dose level. To increase the validity, the method takes local dose variations in the original image into account. The method was tested on a computed radiography system and was shown to produce images with noise behaviour similar to that of images actually collected at the simulated dose levels. The method can, therefore, be used to modify an image collected at one dose level so that it simulates an image of the same object collected at any lower dose level. (authors)
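
    A crude stand-in for the idea (not the paper's NPS/DQE-based method) is to add band-limited Gaussian noise whose variance makes up the difference between the two dose levels, using the 1/dose scaling of quantum noise variance; every name and parameter below is hypothetical.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simulate_lower_dose(img, dose_fraction, sigma_noise, corr_sigma=1.0):
            # Quantum noise variance scales as 1/dose, so the variance to
            # add is sigma^2 * (1/f - 1). A faithful implementation would
            # shape the noise with the detector's NPS/DQE and the local
            # dose instead of one smoothing parameter.
            extra_var = sigma_noise ** 2 * (1.0 / dose_fraction - 1.0)
            noise = gaussian_filter(
                np.random.default_rng().standard_normal(img.shape), corr_sigma)
            noise *= np.sqrt(extra_var) / noise.std()  # rescale after filtering
            return img + noise

        # e.g. make a full-dose CR image look like a half-dose exposure:
        # low = simulate_lower_dose(cr_image, dose_fraction=0.5, sigma_noise=12.0)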

  17. 3D Rigid Registration by Cylindrical Phase Correlation Method

    Czech Academy of Sciences Publication Activity Database

    Bican, Jakub; Flusser, Jan

    2009-01-01

    Roč. 30, č. 10 (2009), s. 914-921 ISSN 0167-8655 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/1593 Grant - others:GAUK(CZ) 48908 Institutional research plan: CEZ:AV0Z10750506 Keywords : 3D registration * correlation methods * Image registration Subject RIV: BD - Theory of Information Impact factor: 1.303, year: 2009 http://library.utia.cas.cz/separaty/2009/ZOI/bican-3d digit registration by cylindrical phase correlation method.pdf

  18. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith

  19. Correlated volume-energy fluctuations of phospholipid membranes: A simulation study

    DEFF Research Database (Denmark)

    Pedersen, Ulf. R.; Peters, Günther H.J.; Schröder, Thomas B.

    2010-01-01

    This paper reports all-atom computer simulations of five phospholipid membranes (DMPC, DPPC, DMPG, DMPS, and DMPSH) with focus on the thermal equilibrium fluctuations of volume, energy, area, thickness, and chain order. At constant temperature and pressure, volume and energy exhibit strong ... membranes, showing a similar picture. The cause of the observed strong correlations is identified by splitting volume and energy into contributions from tails, heads, and water, and showing that the slow volume-energy fluctuations derive from van der Waals interactions of the tail region; they are thus ...

  20. Occurrence and simulation of trihalomethanes in swimming pool water: A simple prediction method based on DOC and mass balance.

    Science.gov (United States)

    Peng, Di; Saravia, Florencia; Abbt-Braun, Gudrun; Horn, Harald

    2016-01-01

    Trihalomethanes (THM) are the most typical disinfection by-products (DBPs) found in public swimming pool water. DBPs are produced when organic and inorganic matter in water reacts with chemical disinfectants. The irregular contribution of substances from pool visitors and the long contact time with disinfectant make the forecast of THM in pool water a challenge. In this work, the occurrence of THM in a public indoor swimming pool was investigated and correlated with the dissolved organic carbon (DOC). Daily sampling of pool water for 26 days showed a positive correlation between DOC and THM with a time delay of about two days, while THM and DOC did not directly correlate with the number of visitors. Based on the results and a mass balance in the pool water, a simple simulation model for estimating the THM concentration in indoor swimming pool water was proposed. Formation of THM from DOC, volatilization into air and elimination by pool water treatment were included in the simulation. The formation ratio of THM obtained from laboratory analysis of native pool water and information from the field study in an indoor swimming pool reduced the uncertainty of the simulation. The simulation was validated by measurements in the swimming pool for 50 days, and the simulated results were in good agreement with the measured results. This work provides a useful and simple method for predicting THM concentration and its accumulation trend over the long term in indoor swimming pool water. Copyright © 2015 Elsevier Ltd. All rights reserved.
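
    The proposed mass balance lends itself to a few lines of Euler integration: formation from (lagged) DOC, volatilization to the hall air and removal by the treatment loop. The sketch below uses made-up rate constants and THM yield, not the fitted values from the study.

        import numpy as np

        k_form, k_vol, k_treat = 0.004, 0.010, 0.015   # 1/h (assumed)
        yield_thm = 20.0            # ug THM formed per mg reacted DOC (assumed)
        hours, delay = 24 * 50, 48  # 50 simulated days; ~2-day DOC-to-THM lag
        doc = 2.5 + 0.5 * np.sin(np.arange(hours) * 2.0 * np.pi / 24.0)  # mg/L
        thm = np.zeros(hours)       # ug/L
        for t in range(1, hours):   # explicit Euler with a 1 h step
            formation = k_form * yield_thm * doc[max(t - delay, 0)]
            thm[t] = thm[t - 1] + formation - (k_vol + k_treat) * thm[t - 1]
        print(thm[-1])              # quasi-steady THM level (ug/L)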

  1. Adaptive implicit method for thermal compositional reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)

    2008-10-15

    As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires the solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most commonly used technique for solving these equations is the Fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation; however, it is computationally expensive. On the other hand, the method known as IMplicit pressure explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between the timestep size and computational cost, the Thermal Adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes, covering the stability criteria that dictate the maximum allowed timestep size for simulation, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented, along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.

  2. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    Science.gov (United States)

    Žukovič, Milan; Hristopulos, Dionissios T.

    2009-02-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of

  3. Non-whole beat correlation method for the identification of an unbalance response of a dual-rotor system with a slight rotating speed difference

    Science.gov (United States)

    Zhang, Z. X.; Wang, L. Z.; Jin, Z. J.; Zhang, Q.; Li, X. L.

    2013-08-01

    The efficient identification of the unbalance responses of the inner and outer rotors from the beat vibration is the key step in the dynamic balancing of a dual-rotor system with a slight rotating speed difference. This paper proposes a non-whole beat correlation method to identify the unbalance responses, whose integration time is shorter than that of the whole beat correlation method. The principle, algorithm and parameter selection of the proposed method are demonstrated in detail. From a numerical simulation and a balancing experiment conducted on a horizontal decanter centrifuge, it is concluded that the proposed approach is feasible and practicable. This method is significant for developing low-cost field balancing equipment based on a portable Single Chip Microcomputer (SCMC).
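
    For orientation, a sketch of the generic quadrature-correlation idea underlying such identification, with synthetic inner/outer rotor signals at closely spaced frequencies; the record length here spans whole beats, and the paper's non-whole-beat refinement of the integration time is not reproduced:

      import numpy as np

      fs, T = 5000.0, 2.0                           # sampling rate (Hz), record length (s)
      t = np.arange(0, T, 1 / fs)
      f1, f2 = 50.0, 51.0                           # rotor speeds (Hz), slight difference
      x = 1.0 * np.cos(2 * np.pi * f1 * t + 0.3) + 0.6 * np.cos(2 * np.pi * f2 * t - 1.1)

      def identify(x, t, f):
          """Correlate with quadrature references to extract amplitude and phase."""
          c = 2 * np.mean(x * np.cos(2 * np.pi * f * t))
          s = -2 * np.mean(x * np.sin(2 * np.pi * f * t))
          return np.hypot(c, s), np.arctan2(s, c)

      for f in (f1, f2):
          print(f, identify(x, t, f))               # recovers (1.0, 0.3) and (0.6, -1.1)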

  4. Constraint methods that accelerate free-energy simulations of biomolecules.

    Science.gov (United States)

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
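
    As a concrete illustration of the spring-like restraints mentioned above, a minimal sketch of a flat-bottom harmonic distance restraint; the center, width and force constant are arbitrary, and this is not the authors' specific scheme:

      import numpy as np

      def flat_bottom_restraint(r, r0=5.0, width=1.0, k=10.0):
          """Energy and force for a flat-bottom harmonic distance restraint.

          Zero penalty inside [r0 - width, r0 + width]; quadratic outside, which
          biases sampling toward conformations consistent with the external
          knowledge while tolerating its uncertainty."""
          excess = np.maximum(np.abs(r - r0) - width, 0.0)
          energy = 0.5 * k * excess ** 2
          force = -k * excess * np.sign(r - r0)     # -dE/dr, zero in the flat region
          return energy, force

      print(flat_bottom_restraint(np.array([3.0, 5.5, 8.0])))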

  5. Modelling of a stirling cryocooler regenerator under steady and steady - periodic flow conditions using a correlation based method

    Science.gov (United States)

    Kishor Kumar, V. V.; Kuzhiveli, B. T.

    2017-12-01

    The performance of a Stirling cryocooler depends on the thermal and hydrodynamic properties of the regenerator in the system. CFD modelling is the best technique to design and predict the performance of a Stirling cooler. The accuracy of the simulation results depends on the hydrodynamic and thermal transport parameters used as the closure relations for the volume-averaged governing equations. A methodology has been developed to quantify the viscous and inertial resistance terms required for modelling the regenerator as a porous medium in Fluent. Using these terms, the steady and steady-periodic flow of helium through the regenerator was modelled and simulated. Comparison of the predicted and experimental pressure drops reveals the good predictive power of the correlation based method. For oscillatory flow, the simulation could predict the exit pressure amplitude and the phase difference accurately. The method was therefore extended to obtain the Darcy permeability and Forchheimer’s inertial coefficient of other wire mesh matrices applicable to Stirling coolers. Simulation of the regenerator using these parameters will help to better understand the thermal and hydrodynamic interactions between the working fluid and the regenerator material, and pave the way to contrive high performance, ultra-compact free displacers used in miniature Stirling cryocoolers in the future.
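
    The porous-medium closure referred to above can be sketched with the standard two-term (viscous plus inertial) resistance law used in Fluent-style porous media models; the property values below are illustrative, not the paper's fitted coefficients:

      def porous_pressure_drop(u, mu, rho, alpha, c2, length):
          """Pressure drop over a regenerator modelled as a porous medium:
          dp = (mu/alpha * u + 0.5 * c2 * rho * u**2) * length, where alpha is
          the Darcy permeability and c2 the inertial resistance factor."""
          return (mu / alpha * u + 0.5 * c2 * rho * u ** 2) * length

      # helium-like properties and invented wire-mesh parameters
      print(porous_pressure_drop(u=2.0, mu=2.0e-5, rho=1.6, alpha=1.0e-9, c2=1.0e4, length=0.05))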

  6. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: n-variable functions have to be minimized over a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models, where the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, originating in the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted on analytical functions with known minima, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three further methods from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
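
    A compact sketch of the simulated annealing loop described above, applied to an analytical test function with a known minimum (the 2-D Rosenbrock function); the move size and geometric cooling schedule are illustrative, not the author's tuned parameters:

      import math
      import random

      random.seed(1)

      def rosenbrock(x, y):                         # known minimum: 0 at (1, 1)
          return (1 - x) ** 2 + 100 * (y - x * x) ** 2

      x = y = 0.0
      T = 1.0
      while T > 1e-4:
          for _ in range(100):                      # trials per temperature level
              nx = x + random.uniform(-0.1, 0.1)
              ny = y + random.uniform(-0.1, 0.1)
              d = rosenbrock(nx, ny) - rosenbrock(x, y)
              if d < 0 or random.random() < math.exp(-d / T):
                  x, y = nx, ny                     # accept downhill, or uphill (Metropolis)
          T *= 0.95                                 # geometric cooling
      print(x, y, rosenbrock(x, y))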

  7. Growing correlation length on cooling below the onset of caging in a simulated glass-forming liquid

    DEFF Research Database (Denmark)

    Lačević, N.; Starr, F. W.; Schrøder, Thomas

    2002-01-01

    We present a calculation of a fourth-order, time-dependent density correlation function that measures higher-order spatiotemporal correlations of the density of a liquid. From molecular dynamics simulations of a glass-forming Lennard-Jones liquid, we find that the characteristic length scale...... of the dynamics of the liquid in the alpha-relaxation regime....

  8. Hybrid Method Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye

    This present thesis consists of an extended summary and five appended papers concerning various aspects of the implementation of a hybrid method which combines classical simulation methods and artificial neural networks. The thesis covers three main topics. Common for all these topics...... only recognize patterns similar to those comprised in the data used to train the network. Fatigue life evaluation of marine structures often considers simulations of more than a hundred different sea states. Hence, in order for this method to be useful, the training data must be arranged so...... that a single neural network can cover all relevant sea states. The applicability and performance of the present hybrid method is demonstrated on a numerical model of a mooring line attached to a floating offshore platform. The second part of the thesis demonstrates how sequential neural networks can be used...

  9. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  10. DRK methods for time-domain oscillator simulation

    NARCIS (Netherlands)

    Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.

  11. Three-step interferometric method with blind phase shifts by use of interframe correlation between interferograms

    Science.gov (United States)

    Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.

    2018-06-01

    A new three-step interferometric method with blind phase shifts for retrieving phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs; the extraction of the areal surface roughness and waviness PMs is performed with a linear low-pass filter. Computer simulation and experiments performed to retrieve a gauge block surface area and its areal surface roughness and waviness have confirmed the reliability of the proposed three-step method.

  12. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    Science.gov (United States)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolution for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from

  13. Characteristics and correlation of various radiation measuring methods in spatial radiation measurement

    International Nuclear Information System (INIS)

    Yoneda, Kazuhiro; Tonouchi, Shigemasa

    1992-01-01

    In a survey of the natural radiation distribution, carried out to identify useful measuring methods, the γ-ray dose rates obtained by the survey meter method, the in-situ measuring method and the soil sampling method were compared. Between the in-situ measuring method and the survey meter method, the correlation Y=0.986X+5.73, r=0.903, n=18, P<0.01 was obtained, a high correlation with a slope of nearly 1. Between the survey meter method and the soil sampling method, the correlation Y=1.297X-10.30, r=0.966, n=20, P<0.01 was obtained, also a high correlation, but disparities in the dose rate contribution of 36% in the U series, 6% in the Th series and 20% in K-40 were observed. For surveys of the natural radiation distribution, using the survey meter method in combination with either the in-situ measuring method or the soil sampling method is suitable. (author)

  14. Numerical simulation methods for electron and ion optics

    International Nuclear Information System (INIS)

    Munro, Eric

    2011-01-01

    This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.

  15. A Monte Carlo method and finite volume method coupled optical simulation method for parabolic trough solar collectors

    International Nuclear Information System (INIS)

    Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing

    2017-01-01

    Highlights: •Four optical models for parabolic trough solar collectors were compared in detail. •Characteristics of the Monte Carlo Method and the Finite Volume Method were discussed. •A novel method combining the advantages of the different models was presented. •The method is suited to optical analysis of collectors with different geometries. •A new kind of cavity receiver was simulated using the novel method. -- Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. The concentrated solar radiation is the only energy source for a PTC, so its optical performance significantly affects the collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented that combines the advantages of these models and is suited to carrying out a large number of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined; the method is thus useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For photon distribution initialization, FVM saved running time and computational effort, whereas it needed a suitable grid configuration. MCM only required a total number of rays for the simulation, whereas it had a higher computing cost and its results fluctuated between runs. In the novel coupled method, the grid configuration for FVM was optimized according to the “true values” from MCM of

  16. Simulation methods for nuclear production scheduling

    International Nuclear Information System (INIS)

    Miles, W.T.; Markel, L.C.

    1975-01-01

    Recent developments and applications of simulation methods for use in nuclear production scheduling and fuel management are reviewed. The unique characteristics of the nuclear fuel cycle as they relate to the overall optimization of a mixed nuclear-fossil system in both the short-and mid-range time frame are described. Emphasis is placed on the various formulations and approaches to the mid-range planning problem, whose objective is the determination of an optimal (least cost) system operation strategy over a multi-year planning horizon. The decomposition of the mid-range problem into power system simulation, reactor core simulation and nuclear fuel management optimization, and system integration models is discussed. Present utility practices, requirements, and research trends are described. 37 references

  17. SIMULATIONS OF WIDE-FIELD WEAK-LENSING SURVEYS. II. COVARIANCE MATRIX OF REAL-SPACE CORRELATION FUNCTIONS

    International Nuclear Information System (INIS)

    Sato, Masanori; Matsubara, Takahiko; Takada, Masahiro; Hamana, Takashi

    2011-01-01

    Using 1000 ray-tracing simulations for a Λ-dominated cold dark matter model in Sato et al., we study the covariance matrix of cosmic shear correlation functions, the standard statistic used in previous measurements. The shear correlation function at a particular separation angle is affected by Fourier modes over a wide range of multipoles, even beyond the survey area, which complicates the analysis of the covariance matrix. To overcome such obstacles we first construct Gaussian shear simulations from the 1000 realizations and then use the Gaussian simulations to disentangle the Gaussian contribution to the covariance matrix measured from the original simulations. We find that an analytical formula for the Gaussian covariance overestimates the covariance amplitudes due to the effect of the finite survey area. Furthermore, the clean separation of the Gaussian covariance allows us to examine the non-Gaussian covariance contributions as a function of separation angle and source redshift. For upcoming surveys with typical source redshifts of z_s = 0.6 and 1.0, the non-Gaussian contribution to the diagonal covariance components at 1 arcmin scales is greater than the Gaussian contribution by a factor of 20 and 10, respectively. Predictions based on the halo model reproduce the simulation results qualitatively well, but show a sizable disagreement in the covariance amplitudes. By combining these simulation results we develop a fitting formula for the covariance matrix of a survey with arbitrary area coverage, taking into account the effects of the finite survey area on the Gaussian covariance.
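
    The ensemble covariance estimate underlying such an analysis is straightforward to reproduce; a sketch in which synthetic data stand in for the shear correlation function measured in each ray-tracing realization:

      import numpy as np

      rng = np.random.default_rng(2)
      n_real, n_bins = 1000, 15                     # realizations, separation-angle bins
      xi = rng.normal(size=(n_real, n_bins))        # stand-in for measured xi(theta)

      diff = xi - xi.mean(axis=0)
      cov = diff.T @ diff / (n_real - 1)            # unbiased sample covariance matrix
      corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
      print(cov.shape, corr[0, :3])                 # off-diagonals quantify bin coupling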

  18. Generalized drift-flux correlation

    International Nuclear Information System (INIS)

    Takeuchi, K.; Young, M.Y.; Hochreiter, L.E.

    1991-01-01

    A one-dimensional drift-flux model with five conservation equations is frequently employed in major computer codes, such as TRAC-PD2, and in simulator codes. In this method, the relative velocity between the liquid and vapor phases, or slip ratio, is given by correlations rather than by direct solution of the phasic momentum equations, as in the case of the two-fluid model used in TRAC-PF1. The correlations for the churn-turbulent bubbly flow and slug flow regimes were given in terms of drift velocities by Zuber and Findlay. For the annular flow regime, drift velocity correlations were developed by Ishii et al. using interphasic force balances. Another approach, reported here, is to define the drift velocity so that flooding and liquid hold-up conditions are properly simulated. The generalized correlation is used to reanalyze the MB-2 test data for two-phase flow in a large-diameter pipe. The results are applied to the generalized drift flux velocity, whose relationship to the other correlations is discussed. Finally, the generalized drift flux correlation is implemented in TRAC-PD2. Flow reversal from countercurrent to cocurrent flow is computed in small-diameter U-shaped tubes and is compared with the flooding curve.
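
    For reference, a sketch of the Zuber-Findlay drift-flux relation that such correlations generalize; the distribution parameter C0 and drift velocity v_gj below are illustrative churn-turbulent-like values, not the generalized correlation itself:

      def void_fraction(j_g, j_f, c0=1.2, v_gj=0.25):
          """Zuber-Findlay drift flux: v_g = C0*j + v_gj, with j = j_g + j_f,
          so the void fraction is alpha = j_g / (C0*j + v_gj)."""
          return j_g / (c0 * (j_g + j_f) + v_gj)

      # superficial gas and liquid velocities in m/s
      print(void_fraction(j_g=0.5, j_f=1.0))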

  19. Spatial correlation of probabilistic earthquake ground motion and loss

    Science.gov (United States)

    Wesson, R.L.; Perkins, D.M.

    2001-01-01

    Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for the calculation of the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components. Intraevent variability reduces the spatial correlation of losses; interevent variability increases it. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of their strong impact on estimates of earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, which may reduce the amount of calculation required.
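
    A Monte Carlo sketch of the variability partitioning discussed above, using a toy attenuation relation: the interevent term shifts the ground motion at all sites together while the intraevent term varies site to site, so the former inflates the variance of portfolio losses:

      import numpy as np

      rng = np.random.default_rng(3)
      n_sites, n_events = 50, 20000
      dist = rng.uniform(5, 50, size=n_sites)       # site-to-source distances, km

      def loss_variance(sigma_inter, sigma_intra):
          eta = rng.normal(0, sigma_inter, size=(n_events, 1))        # common per event
          eps = rng.normal(0, sigma_intra, size=(n_events, n_sites))  # site-specific
          log_gm = -2.0 * np.log(dist) + eta + eps  # toy attenuation relation
          loss = np.clip(np.exp(log_gm) - 1e-3, 0, None).sum(axis=1)  # toy damage model
          return loss.var()

      # all-interevent (correlated) vs all-intraevent (uncorrelated) variability
      print(loss_variance(0.5, 0.0), loss_variance(0.0, 0.5))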

  20. A tool for simulating parallel branch-and-bound methods

    Science.gov (United States)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; therefore the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  1. The perturbed angular correlation method - a modern technique in studying solids

    International Nuclear Information System (INIS)

    Unterricker, S.; Hunger, H.J.

    1979-01-01

    Starting from theoretical fundamentals, the differential perturbed angular correlation method is explained. Using the probe nucleus 111Cd, the magnetic dipole interaction in Fe(x)Al(1-x) alloys and the electric quadrupole interaction in Cd have been measured. The perturbed angular correlation method is a modern nuclear measuring method and can be applied in studying ordering processes, phase transformations and radiation damage in metals, semiconductors and insulators.

  2. Correlation of simulated TEM images with irradiation induced damage

    International Nuclear Information System (INIS)

    Schaeublin, R.; Almeida, P. de; Almazouzi, A.; Victoria, M.

    2000-01-01

    Crystal damage induced by irradiation is investigated using transmission electron microscopy (TEM) coupled to molecular dynamics (MD) calculations. Displacement cascades are simulated for energies ranging from 10 to 50 keV in Al, Ni and Cu and for times of up to a few tens of picoseconds. The samples are then used to perform simulations of the TEM images that one could observe experimentally. Diffraction contrast is simulated using a method based on the multislice technique. It appears that the cascade-induced damage in Al imaged in weak beam exhibits little contrast, too low to be experimentally visible, while in Ni and Cu a good contrast is observed. The number of visible clusters is always lower than the actual one. Conversely, high-resolution TEM (HRTEM) imaging allows most of the defects contained in the sample to be observed, although experimental difficulties arise due to the low contrast intensity of the smallest defects. Single point defects give rise in HRTEM to a contrast that is similar to that of cavities. TEM imaging of the defects is discussed in relation to the actual size of the defects and to the number of clusters deduced from MD simulations.

  3. 3D spatially-adaptive canonical correlation analysis: Local and global methods.

    Science.gov (United States)

    Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Curran, Tim; Byrd, Richard; Nandy, Rajesh; Cordes, Dietmar

    2018-04-01

    Local spatially-adaptive canonical correlation analysis (local CCA) with spatial constraints has been introduced to fMRI multivariate analysis for improved modeling of activation patterns. However, current algorithms require complicated spatial constraints that have only been applied to 2D local neighborhoods because the computational time would be exponentially increased if the same method is applied to 3D spatial neighborhoods. In this study, an efficient and accurate line search sequential quadratic programming (SQP) algorithm has been developed to efficiently solve the 3D local CCA problem with spatial constraints. In addition, a spatially-adaptive kernel CCA (KCCA) method is proposed to increase accuracy of fMRI activation maps. With oriented 3D spatial filters anisotropic shapes can be estimated during the KCCA analysis of fMRI time courses. These filters are orientation-adaptive leading to rotational invariance to better match arbitrary oriented fMRI activation patterns, resulting in improved sensitivity of activation detection while significantly reducing spatial blurring artifacts. The kernel method in its basic form does not require any spatial constraints and analyzes the whole-brain fMRI time series to construct an activation map. Finally, we have developed a penalized kernel CCA model that involves spatial low-pass filter constraints to increase the specificity of the method. The kernel CCA methods are compared with the standard univariate method and with two different local CCA methods that were solved by the SQP algorithm. Results show that SQP is the most efficient algorithm to solve the local constrained CCA problem, and the proposed kernel CCA methods outperformed univariate and local CCA methods in detecting activations for both simulated and real fMRI episodic memory data. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Real time simulation method for fast breeder reactors dynamics

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.

    1985-01-01

    Multi-purpose real-time simulator models with suitable plant dynamics were developed; these models can be used not only for training operators but also for designing control systems, operation sequences and many other items which must be studied for the development of new reactor types. The prototype fast breeder reactor ''Monju'' is taken as an example. Analysis is made of the various factors affecting the accuracy and computational load of its dynamic simulation. A method is presented which determines the optimum number of nodes in distributed systems and the optimum time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real-time dynamics models of fast breeder reactors. (author)

  5. Numeric simulation model for long-term orthodontic tooth movement with contact boundary conditions using the finite element method.

    Science.gov (United States)

    Hamanaka, Ryo; Yamaoka, Satoshi; Anh, Tuan Nguyen; Tominaga, Jun-Ya; Koga, Yoshiyuki; Yoshida, Noriaki

    2017-11-01

    Although many attempts have been made to simulate orthodontic tooth movement using the finite element method, most were limited to analyses of the initial displacement in the periodontal ligament and were insufficient to evaluate the effect of orthodontic appliances on long-term tooth movement. Numeric simulation of long-term tooth movement was performed in some studies; however, neither the play between the brackets and archwire nor the interproximal contact forces were considered. The objectives of this study were to simulate long-term orthodontic tooth movement with the edgewise appliance by incorporating those contact conditions into the finite element model and to determine the force system when the space is closed with sliding mechanics. We constructed a 3-dimensional model of the maxillary dentition with 0.022-in brackets and 0.019 × 0.025-in archwire. Forces of 100 cN simulating sliding mechanics were applied. The simulation was accomplished on the assumption that bone remodeling correlates with the initial tooth displacement. This method could successfully represent the changes in the moment-to-force ratio, i.e., the tooth movement pattern, during space closure. We developed a novel method that can simulate long-term orthodontic tooth movement and accurately determine the force system over time by incorporating contact boundary conditions into the finite element analysis. It is also suggested that friction progressively increases during space closure in sliding mechanics. Copyright © 2017. Published by Elsevier Inc.

  6. Reconstructing the ideal results of a perturbed analog quantum simulator

    Science.gov (United States)

    Schwenk, Iris; Reiner, Jan-Michael; Zanker, Sebastian; Tian, Lin; Leppäkangas, Juha; Marthaler, Michael

    2018-04-01

    Well-controlled quantum systems can potentially be used as quantum simulators. However, a quantum simulator is inevitably perturbed by coupling to additional degrees of freedom. This constitutes a major roadblock to useful quantum simulations. So far there are only limited means to understand the effect of perturbation on the results of quantum simulation. Here we present a method which, in certain circumstances, allows for the reconstruction of the ideal result from measurements on a perturbed quantum simulator. We consider extracting the value of the correlator ⟨Ô_i(t) Ô_j(0)⟩ from the simulated system, where Ô_i are the operators which couple the system to its environment. The ideal correlator can be straightforwardly reconstructed by using statistical knowledge of the environment, if any n-time correlator of operators Ô_i of the ideal system can be written as products of two-time correlators. We give an approach to verify the validity of this assumption experimentally by additional measurements on the perturbed quantum simulator. The proposed method can allow for reliable quantum simulations with systems subjected to environmental noise without adding an overhead to the quantum system.

  7. Rehme correlation for spacer pressure drop compared to XT-ADS rod bundle simulations and water experiment

    International Nuclear Information System (INIS)

    Batta, A.; Class, A.; Litfin, K.; Wetzel, T.

    2011-01-01

    The Rehme correlation is the most common formula to estimate the pressure drop of spacers in the design phase of new bundle geometries. It is based on considerations of momentum losses and takes into account the obstruction of the flow cross section but it ignores the geometric details of the spacer design. Within the framework of accelerator driven sub-critical reactor systems (ADS), heavy-liquid-metal (HLM) cooled fuel assemblies are considered. At the KArlsruhe Liquid metal LAboratory (KALLA) of the Karlsruhe Institute of Technology a series of experiments to quantify both pressure losses and heat transfer in HLM-cooled rod bundles are performed. The present study compares simulation results obtained with the commercial CFD code Star-CCM to experiments and the Rehme correlation. It can be shown that the Rehme correlation, simulations and experiments all yield similar trends, but quantitative predictions can only be delivered by the CFD which takes into account the full geometric details of the spacer geometry. (orig.)

  8. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    International Nuclear Information System (INIS)

    Žukovič, Milan; Hristopulos, Dionissios T

    2009-01-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ±1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of

  9. Plasma simulations using the Car-Parrinello method

    International Nuclear Information System (INIS)

    Clerouin, J.; Zerah, G.; Benisti, D.; Hansen, J.P.

    1990-01-01

    A simplified version of the Car-Parrinello method, based on the Thomas-Fermi (local density) functional for the electrons, is adapted to the simulation of the ionic dynamics in dense plasmas. The method is illustrated by an explicit application to a degenerate one-dimensional hydrogen plasma

  10. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most traditional and contemporary techniques, it is still impossible to solve the transport equation when a fully detailed geometry is taken into account and the interactions between particles and matter are treated precisely. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, the natural (analogue) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has used such techniques successfully for a long time with different approximate adjoint solutions; these methods require the user to supply certain parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometries in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of Monte Carlo calculations. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic collisions and for multigroup problems with anisotropic collisions. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without the splitting and Russian roulette techniques, whereas for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
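
    To make the role of biasing concrete, a toy example (a purely absorbing slab, not Tripoli's automatic scheme): the transmission probability is estimated both by analogue sampling and by path-length biasing with a weight correction, the biased run reaching a far better figure of merit for the same number of histories:

      import numpy as np

      rng = np.random.default_rng(4)
      sigma, L, n = 1.0, 10.0, 100000               # cross-section (1/cm), slab width (cm)
      exact = np.exp(-sigma * L)                    # transmission of a pure absorber

      # analogue: almost no history crosses the slab, so the estimate is very noisy
      s = rng.exponential(1 / sigma, n)
      analogue = np.mean(s > L)

      # biased: stretch the path-length distribution and carry statistical weights
      sigma_b = 0.1                                 # biased cross-section (user parameter)
      s = rng.exponential(1 / sigma_b, n)
      w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * s)
      biased = np.mean(w * (s > L))

      print(exact, analogue, biased)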

  11. Porous media microstructure reconstruction using pixel-based and object-based simulated annealing: comparison with other reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)

    2008-07-01

    The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments, which are often very expensive and time-consuming. Hence, digital image analysis techniques are a very fast and low-cost methodology for predicting physical properties from geometrical parameters measured on thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the relaxation method known as simulated annealing. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model maintains the porosity spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and the spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is in its early stages, only the 2D results are presented. (author)

  12. Development of the DQFM method to consider the effect of correlation of component failures in seismic PSA of nuclear power plant

    International Nuclear Information System (INIS)

    Watanabe, Yuichi; Oikawa, Tetsukuni; Muramatsu, Ken

    2003-01-01

    This paper presents a new calculation method for considering the effect of correlation of component failures in seismic probabilistic safety assessment (PSA) of nuclear power plants (NPPs) by direct quantification of the fault tree (FT) using Monte Carlo simulation (DQFM), and discusses the effect of correlation on core damage frequency (CDF). In the DQFM method, the occurrence probability of a top event is calculated as follows: (1) The response and capacity of each component are generated according to their probability distributions. In this step, the responses and capacities can be made correlated according to a set of arbitrarily given correlation data. (2) For each component, whether the component has failed is judged by comparing the response and the capacity. (3) The status of each component, failure or success, is assigned as either TRUE or FALSE in a truth table, which represents the logical structure of the FT, to judge the occurrence of the top event. After this trial is iterated a sufficient number of times, the occurrence probability of the top event is obtained as the ratio of the number of top-event occurrences to the total number of iterations. The DQFM method has the following features compared with the minimal cut set (MCS) method used in the well-known Seismic Safety Margins Research Program (SSMRP). While the MCS method gives an upper-bound approximation for the occurrence probability of a union of MCSs, the DQFM method gives more exact results than the upper-bound approximation. Further, the DQFM method considers the effect of correlation on both the union and the intersection of component failures, while the MCS method considers only the effect on the latter. The importance of these features in seismic PSA of NPPs is demonstrated by an example calculation and a calculation of CDF in a seismic PSA. The effect of correlation on CDF was evaluated by the DQFM method and compared with that evaluated in the application study of the SSMRP methodology. In the application
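
    A toy sketch of the DQFM iteration described above, assuming three components with correlated lognormal capacities, a common lognormal response, and an illustrative fault tree A AND (B OR C); all numbers are invented:

      import numpy as np

      rng = np.random.default_rng(5)
      n_iter, n_comp, rho = 200000, 3, 0.8          # iterations, components, correlation

      # step (1): correlated capacities and a response common to all components
      cov = 0.3 ** 2 * (rho * np.ones((n_comp, n_comp)) + (1 - rho) * np.eye(n_comp))
      cap = np.exp(rng.multivariate_normal(np.zeros(n_comp), cov, n_iter))
      resp = np.exp(rng.normal(-0.4, 0.3, (n_iter, 1)))

      # step (2): a component fails when its response reaches its capacity
      failed = resp >= cap

      # step (3): evaluate the fault tree logic, here top = A AND (B OR C)
      top = failed[:, 0] & (failed[:, 1] | failed[:, 2])
      print(top.mean())                             # occurrence probability of the top event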

  13. A tool for simulating parallel branch-and-bound methods

    Directory of Open Access Journals (Sweden)

    Golubeva Yana

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; therefore the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree and characteristics of the supercomputer’s interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  14. Massively parallel simulations of strong electronic correlations: Realistic Coulomb vertex and multiplet effects

    Science.gov (United States)

    Baumgärtel, M.; Ghanem, K.; Kiani, A.; Koch, E.; Pavarini, E.; Sims, H.; Zhang, G.

    2017-07-01

    We discuss the efficient implementation of general impurity solvers for dynamical mean-field theory. We show that both Lanczos and quantum Monte Carlo in different flavors (Hirsch-Fye, continuous-time hybridization- and interaction-expansion) exhibit excellent scaling on massively parallel supercomputers. We apply these algorithms to simulate realistic model Hamiltonians including the full Coulomb vertex, crystal-field splitting, and spin-orbit interaction. We discuss how to remove the sign problem in the presence of non-diagonal crystal-field and hybridization matrices. We show how to extract the physically observable quantities from imaginary time data, in particular correlation functions and susceptibilities. Finally, we present benchmarks and applications for representative correlated systems.

  15. Simulation of plume dynamics by the Lattice Boltzmann Method

    Science.gov (United States)

    Mora, Peter; Yuen, David A.

    2017-09-01

    The Lattice Boltzmann Method (LBM) is a semi-microscopic method that simulates fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D LBM simulations of a fluid in a rectangular box heated from below and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates for the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number, the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.
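
    A minimal sketch of the LBM machinery described above: a D2Q9 BGK collide-and-stream update on a periodic box; the temperature field and buoyancy forcing needed for convection at high Rayleigh number are omitted:

      import numpy as np

      # D2Q9 lattice velocities and weights
      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      nx, ny, tau = 64, 64, 0.8                     # grid size, BGK relaxation time

      def equilibrium(rho, ux, uy):
          cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
          return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

      f = np.ones((9, nx, ny)) * w[:, None, None]   # start at rest (rho = 1, u = 0)
      f[:, nx // 2, ny // 2] *= 1.01                # small density perturbation

      for step in range(100):
          rho = f.sum(axis=0)                       # macroscopic moments
          ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
          uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
          f -= (f - equilibrium(rho, ux, uy)) / tau # BGK collision
          for i in range(9):                        # streaming along lattice links
              f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)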

  16. Novel Methods for Electromagnetic Simulation and Design

    Science.gov (United States)

    2016-08-03

    We developed the basis for high-fidelity modeling software that can handle complicated, electrically large objects in a manner that is sufficiently fast to allow design by simulation. We also developed new methods for scattering from cavities.

  17. Correlations between contouring similarity metrics and simulated treatment outcome for prostate radiotherapy

    Science.gov (United States)

    Roach, D.; Jameson, M. G.; Dowling, J. A.; Ebert, M. A.; Greer, P. B.; Kennedy, A. M.; Watt, S.; Holloway, L. C.

    2018-02-01

    Many similarity metrics exist for inter-observer contouring variation studies; however, no correlation between metric choice and prostate cancer radiotherapy dosimetry has been explored. These correlations were investigated in this study. Two separate trials were undertaken, the first a thirty-five patient cohort with three observers, the second a five patient dataset with ten observers. Clinical and planning target volumes (CTV and PTV), rectum, and bladder were independently contoured by all observers in each trial. In the first trial, structures were contoured on T2-weighted MRI and transferred onto CT following rigid registration for treatment planning; in the second trial, structures were contoured directly on CT. STAPLE and majority voting volumes were generated as reference gold standard volumes for each structure in the two trials, respectively. VMAT treatment plans (78 Gy to PTV) were simulated for observer and gold standard volumes, and dosimetry was assessed using multiple radiobiological metrics. Correlations between contouring similarity metrics and dosimetry were calculated using Spearman's rank correlation coefficient. No correlations were observed between contouring similarity metrics and dosimetry for CTV within either trial. Volume similarity correlated most strongly with radiobiological metrics for PTV in both trials, including TCP_Poisson (ρ = 0.57, 0.65), TCP_Logit (ρ = 0.39, 0.62), and EUD (ρ = 0.43, 0.61) for each respective trial. Rectum and bladder metric correlations displayed no consistency between the two trials. PTV volume similarity was found to significantly correlate with rectum normal tissue complication probability (ρ = 0.33, 0.48). Minimal to no correlations with dosimetry were observed for overlap or boundary contouring metrics. Future inter-observer contouring variation studies for prostate cancer should incorporate volume similarity to provide additional insights into dosimetry during analysis.

  18. Experimental study on reactivity measurement in thermal reactor by polarity correlation method

    International Nuclear Information System (INIS)

    Yasuda, Hideshi

    1977-11-01

    An experimental study of the polarity correlation method for measuring the reactivity of a thermal reactor, especially one possessing a long prompt neutron lifetime, such as a graphite- or heavy-water-moderated core, is reported. The techniques of reactor kinetics experiments are briefly reviewed; they fall into two groups, one characterized by artificial disturbance of the reactor and the other by the natural fluctuations inherent in a reactor. The fluctuation of the neutron count rate is explained using F. de Hoffmann's stochastic method, and correlation functions for the neutron count rate fluctuation are shown. Experimental results of the polarity correlation method applied to β/l measurements in both the graphite-moderated SHE core and the light-water-moderated JMTRC and JRR-4 cores, and to the measurement of the SHE shutdown reactivity margin, are presented. The measured values were in good agreement with those obtained by a pulsed neutron method in the reactivity range from critical to -12 dollars. Conditional polarity correlation experiments in SHE at -20 cents and -100 cents are demonstrated. The prompt neutron decay constants agreed with those obtained by the polarity correlation experiments. The results of experiments measuring the large negative reactivity of -52 dollars of SHE by the pulsed neutron, rod drop and source multiplication methods are given. It is concluded that the polarity and conditional polarity correlation methods are readily applicable to noise analysis of a low-power thermal reactor with a long prompt neutron lifetime. (Nakai, Y.)

  19. Dynamical correlations in finite nuclei: A simple method to study tensor effects

    International Nuclear Information System (INIS)

    Dellagiacoma, F.; Orlandini, G.; Traini, M.

    1983-01-01

    Dynamical correlations are introduced in finite nuclei by changing the two-body density through a phenomenological method. The role of tensor and short-range correlations in the nuclear momentum distribution, electric form factor and two-body density of 4He is investigated. The importance of induced tensor correlations in the total photonuclear cross section is reinvestigated, providing a successful test of the method proposed here. (orig.)

  20. A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition

    International Nuclear Information System (INIS)

    Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.

    2008-01-01

    A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient

  1. Distance correlation methods for discovering associations in large astrophysical databases

    International Nuclear Information System (INIS)

    Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P.

    2014-01-01

    High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
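
    A sketch of the sample distance correlation (double-centered distance matrices, following Szekely and Rizzo), illustrating how it detects a nonlinear association that the Pearson coefficient misses:

      import numpy as np

      def distance_correlation(x, y):
          """Sample distance correlation of two samples (any dimension)."""
          x = np.atleast_2d(x.T).T                  # ensure (n, p) shape
          y = np.atleast_2d(y.T).T
          a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
          b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
          A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
          B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
          return np.sqrt((A * B).mean() / np.sqrt((A * A).mean() * (B * B).mean()))

      rng = np.random.default_rng(6)
      t = rng.uniform(-1, 1, 500)
      print(distance_correlation(t, t ** 2))        # clearly nonzero
      print(np.corrcoef(t, t ** 2)[0, 1])           # Pearson is near zero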

  2. Simulation methods with extended stability for stiff biochemical Kinetics

    Directory of Open Access Journals (Sweden)

    Rué Pau

    2010-08-01

    Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
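
    A sketch of the basic Poisson τ-leap step that the paper generalizes to a Runge-Kutta form, for a single-channel irreversible reaction A → B; the RK extension itself is not reproduced:

      import numpy as np

      rng = np.random.default_rng(7)

      def tau_leap(x0, c, tau, t_end):
          """Poisson tau-leaping for A -> B with rate constant c."""
          x, t = x0, 0.0
          while t < t_end and x > 0:
              a = c * x                             # propensity of the single channel
              k = rng.poisson(a * tau)              # number of firings within the leap
              x = max(x - k, 0)                     # update the state, clamped at zero
              t += tau
          return x

      print(tau_leap(x0=1000, c=0.5, tau=0.05, t_end=2.0))   # compare with 1000*exp(-1)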

  3. A nondissipative simulation method for the drift kinetic equation

    International Nuclear Information System (INIS)

    Watanabe, Tomo-Hiko; Sugama, Hideo; Sato, Tetsuya

    2001-07-01

    With the aim to study the ion temperature gradient (ITG) driven turbulence, a nondissipative kinetic simulation scheme is developed and comprehensively benchmarked. The new simulation method preserving the time-reversibility of basic kinetic equations can successfully reproduce the analytical solutions of asymmetric three-mode ITG equations which are extended to provide a more general reference for benchmarking than the previous work [T.-H. Watanabe, H. Sugama, and T. Sato: Phys. Plasmas 7 (2000) 984]. It is also applied to a dissipative three-mode system, and shows a good agreement with the analytical solution. The nondissipative simulation result of the ITG turbulence accurately satisfies the entropy balance equation. Usefulness of the nondissipative method for the drift kinetic simulations is confirmed in comparisons with other dissipative schemes. (author)

  4. Simulating subduction zone earthquakes using discrete element method: a window into elusive source processes

    Science.gov (United States)

    Blank, D. G.; Morgan, J.

    2017-12-01

    Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrates both their complexity and our limited understanding of fault processes and their controls. Numerical modeling provides us with a useful tool that we can use to simulate earthquakes and related slip events, and to make direct observations and correlations among properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.
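
    The authors' DEM code is not reproduced here; a much simpler spring-slider analog (a single fault patch loaded through an elastic spring, with assumed static and dynamic friction values) already produces a catalog of stick-slip events of the kind described:

```python
import numpy as np

# Single spring-slider analog of a loaded fault patch (not the authors' DEM model):
# a block pulled through a spring at constant rate, with static/dynamic friction.
k, v_load = 1.0, 1.0e-3             # spring stiffness, loading rate (assumed units)
mu_s, mu_d, normal = 0.6, 0.4, 1.0  # static/dynamic friction, normal force
dt, n_steps = 1.0, 200000

x_load = x_block = 0.0
events = []                          # (time, stress drop) catalog
for i in range(n_steps):
    x_load += v_load * dt            # tectonic loading
    force = k * (x_load - x_block)
    if force > mu_s * normal:
        # Slip event: block jumps until spring force balances dynamic friction.
        slip = (force - mu_d * normal) / k
        x_block += slip
        events.append((i * dt, force - mu_d * normal))

drops = [d for _, d in events]
print(f"{len(events)} slip events, mean stress drop {np.mean(drops):.3f}")
```

    With homogeneous friction the events are periodic; heterogeneous friction along a multi-patch fault, as in the study, is what produces the richer spectrum from creep to stick slip.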

  5. ERP Correlates of Simulated Purchase Decisions.

    Science.gov (United States)

    Gajewski, Patrick D; Drizinsky, Jessica; Zülch, Joachim; Falkenstein, Michael

    2016-01-01

    Decision making in economic contexts is an everyday activity but its neuronal correlates are poorly understood. The present study aimed at investigating the electrophysiological brain activity during simulated purchase decisions of technical products for a lower or higher price relative to a mean price estimated in a pilot study. Expectedly, participants mostly decided to buy a product when it was cheap and not to buy when it was expensive. However, in some trials they made counter-conformity decisions to buy a product for a higher-than-average price or not to buy it despite an attractive price. These responses took more time and the variability of the response latency was enhanced relative to conformity responses. ERPs showed an enhanced conflict-related fronto-central N2 during both types of counter-conformity compared to conformity decisions. A reverse pattern was found for the P3a and P3b. The response-locked P3 (r-P3) was larger and the subsequent CNV smaller for counter-conformity than conformity decisions. We assume that counter-conformity decisions elevate the response threshold (larger N2), intensify response evaluation (r-P3) and attenuate the preparation for the next trial (CNV). These effects are discussed in the framework of the functional role of the fronto-parietal cortex in economic decision making.

  7. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a stochastic statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was modeled using the Monte Carlo method.
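
    A toy 1-D photon random walk through a homogeneous canopy slab (assumed optical thickness and leaf albedo; not the paper's 3-D canopy model) illustrates the Monte Carlo photon-canopy interaction idea:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D photon transport in a homogeneous canopy slab.
tau_canopy = 1.0      # optical thickness of the slab (assumed)
omega = 0.5           # single-scattering albedo of leaves (assumed)
n_photons = 100_000
counts = {"reflected": 0, "transmitted": 0, "absorbed": 0}

for _ in range(n_photons):
    depth, mu = 0.0, -1.0                    # start at top, travelling downward
    while True:
        depth += -mu * rng.exponential(1.0)  # free path to next interaction
        if depth < 0.0:
            counts["reflected"] += 1; break
        if depth > tau_canopy:
            counts["transmitted"] += 1; break
        if rng.random() > omega:             # absorbed at the interaction
            counts["absorbed"] += 1; break
        mu = rng.uniform(-1.0, 1.0)          # isotropic scattering (simplification)

print({key: val / n_photons for key, val in counts.items()})
```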

  8. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with one single RELAP5 code in a large-scale simulation. To improve the speed and ensure the precision of simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. A compromise on synchronization frequency was carefully considered to improve the precision of simulation while guaranteeing real-time simulation. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting–coupling models and the integrated model. The mitigation of an SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting–coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example, a simulator employing ATHLET instead of RELAP5, or other logic codes instead of SIMULINK. It is believed the coupling methods apply generally to NPP simulators regardless of the specific codes chosen in this paper.
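
    The data-exchange environment and RELAPSim interfaces are specific to the study, but the explicit-coupling idea can be sketched generically: each subsystem model advances with frozen boundary data, and boundary values are swapped only at synchronization points, so the synchronization frequency trades accuracy against speed. A minimal sketch, with placeholder physics:

```python
# Toy illustration of explicit coupling via boundary exchange (not the actual
# RELAPSim interfaces): two subsystem models advance independently and swap
# boundary values every `sync_every` internal steps.
def advance(state, boundary_in, dt):
    # Placeholder physics: relax toward the boundary value received.
    return state + dt * (boundary_in - state)

dt, sync_every, n_sync = 0.01, 10, 50   # internal step, sync interval, sync windows
state_a, state_b = 1.0, 0.0
bnd_a, bnd_b = state_b, state_a          # boundary values seen by A and B

for _ in range(n_sync):
    # Each code runs `sync_every` steps with frozen boundary data (explicit coupling).
    for _ in range(sync_every):
        state_a = advance(state_a, bnd_a, dt)
        state_b = advance(state_b, bnd_b, dt)
    # Synchronization point: exchange boundary data.
    bnd_a, bnd_b = state_b, state_a

print(state_a, state_b)   # both converge toward a common value
```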

  9. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.

  10. A unitary correlation operator method

    International Nuclear Information System (INIS)

    Feldmeier, H.; Neff, T.; Roth, R.; Schnack, J.

    1997-09-01

    The short range repulsion between nucleons is treated by a unitary correlation operator which shifts the nucleons away from each other whenever their uncorrelated positions are within the repulsive core. By formulating the correlation as a transformation of the relative distance between particle pairs, general analytic expressions for the correlated wave functions and correlated operators are given. The decomposition of correlated operators into irreducible n-body operators is discussed. The one- and two-body-irreducible parts are worked out explicitly and the contribution of three-body correlations is estimated to check convergence. Ground state energies of nuclei up to mass number A=48 are calculated with a spin-isospin-dependent potential and single Slater determinants as uncorrelated states. They show that the deduced energy- and mass-number-independent correlated two-body Hamiltonian reproduces all ''exact'' many-body calculations surprisingly well. (orig.)

  11. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
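
    The core building block of such particle-mesh methods, a spectral Poisson solve for the streamfunction given the vorticity field, can be sketched for a fully periodic box (the paper's regularized Green's functions for mixed open/periodic domains are more involved):

```python
import numpy as np

# Minimal periodic 2-D FFT Poisson solve, nabla^2 psi = -omega (streamfunction
# from vorticity), followed by spectral velocity evaluation.
n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
omega = np.exp(-((X - np.pi)**2 + (Y - np.pi)**2) / 0.1)   # a Gaussian vortex blob

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                  # avoid division by zero (mean mode)

omega_hat = np.fft.fft2(omega)
psi_hat = omega_hat / k2                        # psi_hat = omega_hat / |k|^2
psi_hat[0, 0] = 0.0                             # fix the arbitrary mean of psi
psi = np.real(np.fft.ifft2(psi_hat))

# Velocity from the streamfunction: u = dpsi/dy, v = -dpsi/dx (spectral derivatives).
u = np.real(np.fft.ifft2(1j * KY * psi_hat))
v = np.real(np.fft.ifft2(-1j * KX * psi_hat))
print(psi.shape, float(np.abs(u).max()))
```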

  12. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  14. A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.

    Science.gov (United States)

    Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L

    2018-05-16

    During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical action of these tasks. The aim of this work was to describe a method for determining whether a simulator does or does not have sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. Unexpected positive FTA ratings of functional deficits suggest that further revision of the survey method is required.
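
    Lawshe's method aggregates panelists' "essential" ratings into a content validity ratio per item; a minimal sketch of that formula (the cutoff for retaining an item depends on panel size and is not shown):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (n_e - N/2) / (N/2), ranging from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Example: 10 of 12 panelists rate a checklist item "essential".
print(content_validity_ratio(10, 12))   # 0.667
```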

  15. Research on Monte Carlo simulation method of industry CT system

    International Nuclear Information System (INIS)

    Li Junli; Zeng Zhi; Qui Rui; Wu Zhen; Li Chunyan

    2010-01-01

    There are a series of radiation physics problems in the design and production of industry CT systems (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of the relevant events are of very low probability, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is given on the basis of auto-important sampling. Then, on the basis of PFPAIS, a particular ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is proved to be able to simulate the ICTS more exactly and effectively. Furthermore, the effects of all kinds of disturbances of ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide the research of the radiation physics problems in ICTS. (author)

  16. Two pion correlation from SPACER

    International Nuclear Information System (INIS)

    Csoergoe, T.; Zimanyi, J.; Pratt, S.

    1989-12-01

    The correlation function for neutral and negative pions produced in ultrarelativistic heavy ion collisions was calculated without free parameters based on a space-time version of the LUND model, called SPACER: Simulation of Phase space distribution of Atomic nuclear Collisions in Energetic Reactions. This method includes the effect of Bose correlations for the emitted pion pair. Effects arising from correlations between space-time and momentum space distributions are investigated. The results are compared to the data of two different experiments. The role and interpretation of the chaoticity parameter are discussed. (D.G.) 14 refs.; 4 figs
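
    For reference, the standard Gaussian parameterization used in such two-pion interferometry analyses, with λ the chaoticity parameter and R the source radius (the form actually fitted to the SPACER output may differ in detail):

```latex
% Standard Gaussian parameterization of the two-pion correlation function,
% with chaoticity parameter \lambda, source radius R, relative momentum q:
C_2(q) \;=\; \frac{P(p_1, p_2)}{P(p_1)\,P(p_2)} \;\approx\; 1 + \lambda\, e^{-q^2 R^2}
```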

  17. Gastroesophageal reflux - correlation between diagnostic methods

    International Nuclear Information System (INIS)

    Cruz, Maria das Gracas de Almeida; Penas, Maria Exposito; Fonseca, Lea Mirian Barbosa; Lemme, Eponina Maria O.; Martinho, Maria Jose Ribeiro

    1999-01-01

    A group of 97 individuals with typical symptoms of gastroesophageal reflux disease (GERD) was submitted to gastroesophageal reflux scintigraphy (GES) and the results were compared to those obtained from endoscopy, histopathology and 24-hour pH-metry. Twenty-four healthy individuals were used as a control group and underwent only GES. The results obtained showed that: a) the difference in the reflux index (RI) between the control group and the sick individuals was statistically significant (p < 0.0001); b) the correlation between GES and the other methods showed the following results: sensitivity, 84%; specificity, 95%; positive predictive value, 98%; negative predictive value, 67%; accuracy, 87%. We have concluded that the scintigraphic method should be used to confirm the diagnosis of GERD and is also recommended as an initial investigative procedure. (author)
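
    The reported figures are the standard derived quantities of a 2×2 confusion matrix; a small helper illustrates the definitions (the counts below are illustrative only, since the abstract reports percentages rather than raw counts):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a group of 97; not taken from the paper.
print(diagnostic_metrics(tp=63, fp=1, fn=12, tn=21))
```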

  18. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  19. Particle-gamma and particle-particle correlations in nuclear reactions using Monte Carlo Hauser-Feshback model

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Watanabe, Takehito [Los Alamos National Laboratory; Chadwick, Mark [Los Alamos National Laboratory

    2010-01-01

    Monte Carlo simulations for particle and γ-ray emissions from an excited nucleus based on the Hauser-Feshbach statistical theory are performed to obtain correlated information between emitted particles and γ-rays. We calculate neutron-induced reactions on ⁵¹V to demonstrate unique advantages of the Monte Carlo method, which are the correlated γ-rays in the neutron radiative capture reaction, the neutron and γ-ray correlation, and the particle-particle correlations at higher energies. It is shown that properties of nuclear reactions that are difficult to study with a deterministic method can be obtained with Monte Carlo simulations.

  20. Correlation expansion: a powerful alternative multiple scattering calculation method

    International Nuclear Information System (INIS)

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion

  1. A multiscale quantum mechanics/electromagnetics method for device simulations.

    Science.gov (United States)

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

    Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  2. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
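
    The data-generating step of such a simulation study can be sketched as follows: a two-level logistic model whose random intercept and two random slopes are drawn from an assumed multivariate normal covariance (the paper's exact simulation settings are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a two-level logistic model with three correlated random effects
# (random intercept plus two random slopes).
n_groups, n_per_group = 100, 30
cov = np.array([[1.0, 0.3, 0.2],
                [0.3, 0.5, 0.1],
                [0.2, 0.1, 0.5]])        # assumed random-effects covariance
beta = np.array([-0.5, 1.0, -1.0])       # fixed effects: intercept, x1, x2

u = rng.multivariate_normal(np.zeros(3), cov, size=n_groups)  # group effects
rows = []
for g in range(n_groups):
    x = rng.normal(size=(n_per_group, 2))
    eta = (beta[0] + u[g, 0]) + x @ (beta[1:] + u[g, 1:])     # linear predictor
    p = 1.0 / (1.0 + np.exp(-eta))                             # inverse logit
    y = rng.binomial(1, p)
    rows.append((x, y))

print("first group event rate:", rows[0][1].mean())
```

    Data generated this way would then be fitted with each package/method combination to compare bias, precision, and convergence rates.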

  3. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    International Nuclear Information System (INIS)

    Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang

    2016-01-01

    Melting simulation methods are of crucial importance for determining the melting temperatures of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)

  4. Methods for simulation-based analysis of fluid-structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
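
    A minimal sketch of the POD step (thin SVD of a snapshot matrix, then projection onto the leading modes) with random stand-in data; a real ROM would Galerkin-project the governing equations, not merely reconstruct states:

```python
import numpy as np

# POD by thin SVD of a snapshot matrix (columns = flow snapshots), followed by
# projection of a state onto the leading modes.
rng = np.random.default_rng(4)
n_dof, n_snap, r = 500, 40, 5
snapshots = rng.normal(size=(n_dof, n_snap))          # stand-in for simulation data

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = U[:, :r]                                       # leading POD modes
energy = (s[:r]**2).sum() / (s**2).sum()               # captured "energy" fraction

state = snapshots[:, [0]]
coeffs = modes.T @ (state - mean)                      # reduced coordinates
reconstruction = mean + modes @ coeffs
print(f"rank-{r} modes capture {energy:.1%} of snapshot energy")
```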

  5. Contribution to the simulation of hadron-nucleon inelastic interaction between 1 GeV to 20 GeV by Monte Carlo method

    International Nuclear Information System (INIS)

    Piquemal, Alain

    1982-01-01

    This work sets up a simulation model of inelastic hadron-nucleon interactions, using a Monte Carlo method. The creation of excited or stable particles and the decay of excited particles are simulated on the basis of the statistical thermodynamic model of HAGEDORN and of a relativistic kinematical treatment. The quantum identity of stable secondary particles is determined with the help of the statistical model of Fermi. In all cases of interactions the multiplicities and kinematical correlations are correctly reproduced by the simulation. Longitudinal and transversal momenta of secondary particles are also in good agreement with experimental results in the case of non-diffractive collisions. [fr]

  6. Benchmarking HRA methods against different NPP simulator data

    International Nuclear Information System (INIS)

    Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta

    2008-01-01

    The paper presents both international and Bulgarian experience in assessing HRA methods and underlying models, and approaches for their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlooks of the studies are described

  8. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    Partial correlation based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as a comparable alternative method to regularization methods for variable selection. This paper addresses two important issues related to partial correlation based variable selection method: (a) whether this method is sensitive to normality assumption, and (b) whether this method is valid when the dimension of predictor increases in an exponential rate of the sample size. To address issue (a), we systematically study this method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of predictor is not close to that of normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of partial correlation based variable selection procedure, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address the issue (b) directly. As a by-product, the sure screening property of the first step of TPC was obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly-used regularization methods for variable selection.
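
    A minimal sketch of the quantity underlying the procedure: the sample partial correlation of two variables given conditioning variables, computed from regression residuals (the TPC's thresholding rule and screening step are only indicated in a comment):

```python
import numpy as np

def partial_correlation(x, y, Z):
    """Sample partial correlation of x and y given the columns of Z,
    computed from residuals of least-squares fits on [1, Z]."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

rng = np.random.default_rng(5)
z = rng.normal(size=2000)
x = z + 0.5 * rng.normal(size=2000)       # x and y are related only through z
y = z + 0.5 * rng.normal(size=2000)
print(np.corrcoef(x, y)[0, 1])                 # marginal correlation: large
print(partial_correlation(x, y, z[:, None]))   # partial given z: near zero
# A thresholded variant would retain a predictor only when the absolute
# partial correlation exceeds a chosen cutoff.
```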

  9. Development of digital image correlation method to analyse crack ...

    Indian Academy of Sciences (India)

    Experiments on samples were performed to verify the performance of the digital image correlation method.

  10. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, originating in the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain: the threshold method, a genetic algorithm and the tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
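
    A generic simulated annealing loop with geometric cooling conveys the skeleton of the adapted method; the cost function here is a stand-in, whereas in the paper it wraps SPICE-PAC circuit simulations, and the discretization strategy and complementary stopping criteria are omitted:

```python
import math
import random

random.seed(6)

def cost(params):
    # Stand-in objective; in circuit design this would wrap a circuit
    # simulation evaluating the operational criteria.
    x, y = params
    return (x - 1.0)**2 + 10 * math.sin(3 * x)**2 + (y + 2.0)**2

def simulated_annealing(x0, t0=10.0, cooling=0.995, n_iter=20000, step=0.5):
    x, fx = list(x0), cost(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(n_iter):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                     # geometric cooling schedule
    return best, fbest

print(simulated_annealing([5.0, 5.0]))
```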

  11. Motion simulation of hydraulic driven safety rod using FSI method

    International Nuclear Information System (INIS)

    Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In

    2013-01-01

    A hydraulically driven safety rod, which is one of the reactivity control mechanisms, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper the motion of this rod is simulated by the fluid-structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The simulation is done in the CFD domain with a UDF. The pressure drop changes only slightly with flow rate, which means that the pressure drop is mainly determined by the weight of the moving part. The simulated velocity of the piston is linearly proportional to the flow rate, so the pump can be sized easily according to the rise and drop time requirements of the safety rod using the simulation results.

  12. Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.

    2000-03-01

    We show a method for comparing and reconstructing two similar amplitude-only structures, which are composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm which involves the autocorrelation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)

  13. Performance evaluation of sea surface simulation methods for target detection

    Science.gov (United States)

    Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi

    2017-11-01

    With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve the detection performance. Many features can be learned from training images by machines automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key point to achieve high fidelity. In this paper, two spectra-based height field generation methods are evaluated. Comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the different features of the two methods, which can serve as a reference for synthesizing sea surface target images under different ocean conditions.
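
    The linear filter method can be sketched in one dimension: shape white noise by the square root of a target height spectrum and transform back (the spectrum below is an assumed placeholder; the paper works with directional 2-D ocean spectra):

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear-filter synthesis of a 1-D sea-surface height profile: filter white
# noise with the square root of a target spectrum (simplified spectrum shape).
n, dx = 4096, 0.5                         # samples, spatial step [m]
k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi  # angular wavenumbers
S = np.zeros_like(k)
S[1:] = np.exp(-0.5 / (k[1:] * 10.0)**2) / k[1:]**3   # assumed spectrum shape

noise = rng.normal(size=n)
H = np.fft.rfft(noise) * np.sqrt(S)       # shape the noise spectrum
height = np.fft.irfft(H, n=n)
height *= 1.0 / height.std()              # rescale to unit r.m.s. height (illustrative)
print(height.shape, float(height.std()))
```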

  14. Correlation of Solubility with the Metastable Limit of Nucleation Using Gauge-Cell Monte Carlo Simulations.

    Science.gov (United States)

    Clark, Michael D; Morris, Kenneth R; Tomassone, Maria Silvina

    2017-09-12

    We present a novel simulation-based investigation of the nucleation of nanodroplets from solution and from vapor. Nucleation is difficult to measure or model accurately, and predicting when nucleation should occur remains an open problem. Of specific interest is the "metastable limit", the observed concentration at which nucleation occurs spontaneously, which cannot currently be estimated a priori. To investigate the nucleation process, we employ gauge-cell Monte Carlo simulations to target spontaneous nucleation and measure thermodynamic properties of the system at nucleation. Our results reveal a widespread correlation over 5 orders of magnitude of solubilities, in which the metastable limit depends exclusively on solubility and the number density of generated nuclei. This three-way correlation is independent of other parameters, including intermolecular interactions, temperature, molecular structure, system composition, and the structure of the formed nuclei. Our results have great potential to further the prediction of nucleation events using easily measurable solute properties alone and to open new doors for further investigation.

  15. Quantifying the number of color centers in single fluorescent nanodiamonds by photon correlation spectroscopy and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann Wunshain

    2009-01-01

    The number of negatively charged nitrogen-vacancy centers (N-V)⁻ in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single-particle level. By taking account of the random dipole orientations of the multiple (N-V)⁻ fluorophores and simulating the probability distribution of their effective number (N_e), we found that the actual number (N_a) of the fluorophores is in linear correlation with N_e, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N_a = 8 ± 1 for 28 nm FND particles prepared by 3 MeV proton irradiation
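
    The relation that makes such counting possible, for N independent identical emitters and ignoring background and the dipole-orientation corrections that the paper's Monte Carlo treatment supplies:

```latex
% For N independent identical single-photon emitters (background-free),
% the zero-delay second-order correlation obeys
g^{(2)}(0) = 1 - \frac{1}{N}
\quad\Longrightarrow\quad
N = \frac{1}{1 - g^{(2)}(0)}.
```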

  16. Joint statistics of strongly correlated neurons via dimensionality reduction

    International Nuclear Information System (INIS)

    Deniz, Taşkın; Rotter, Stefan

    2017-01-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input. (paper)
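
    The simulation side of such a study can be sketched with two leaky integrate-and-fire neurons receiving a tunable fraction of shared white-noise input (all parameter values below are assumed; the paper's contribution is the non-perturbative theory, not this simulation):

```python
import numpy as np

rng = np.random.default_rng(8)

# Two leaky integrate-and-fire neurons driven by private + shared white noise;
# the shared-input fraction c controls the spike-train correlation.
dt, T = 1e-4, 20.0                  # time step [s], duration [s]
tau, v_th, v_reset = 0.02, 1.0, 0.0 # membrane time constant, threshold, reset
mu, sigma, c = 1.1, 0.5, 0.8        # drive mean, noise strength, shared fraction
n = int(T / dt)

v = np.zeros(2)
spikes = np.zeros((2, n), dtype=bool)
for i in range(n):
    shared = rng.normal()
    for j in range(2):
        noise = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.normal()
        v[j] += dt / tau * (mu - v[j]) + sigma * np.sqrt(dt / tau) * noise
        if v[j] >= v_th:
            v[j] = v_reset
            spikes[j, i] = True

# Spike-count correlation in 50 ms bins (a crude summary of the cross-correlation).
bins = int(0.05 / dt)
counts = spikes[:, : n - n % bins].reshape(2, -1, bins).sum(axis=2)
print("rates [Hz]:", spikes.sum(axis=1) / T, "count corr:", np.corrcoef(counts)[0, 1])
```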

  17. System dynamic simulation: A new method in social impact assessment (SIA)

    International Nuclear Information System (INIS)

    Karami, Shobeir; Karami, Ezatollah; Buys, Laurie; Drogemuller, Robin

    2017-01-01

    Many complex social questions are difficult to address adequately with conventional methods and techniques, due to the complicated dynamics, and hard to quantify social processes. Despite these difficulties researchers and practitioners have attempted to use conventional methods not only in evaluative modes but also in predictive modes to inform decision making. The effectiveness of SIAs would be increased if they were used to support the project design processes. This requires deliberate use of lessons from retrospective assessments to inform predictive assessments. Social simulations may be a useful tool for developing a predictive SIA method. There have been limited attempts to develop computer simulations that allow social impacts to be explored and understood before implementing development projects. In light of this argument, this paper aims to introduce system dynamic (SD) simulation as a new predictive SIA method in large development projects. We propose the potential value of the SD approach to simulate social impacts of development projects. We use data from the SIA of Gareh-Bygone floodwater spreading project to illustrate the potential of SD simulation in SIA. It was concluded that in comparison to traditional SIA methods SD simulation can integrate quantitative and qualitative inputs from different sources and methods and provides a more effective and dynamic assessment of social impacts for development projects. We recommend future research to investigate the full potential of SD in SIA in comparing different situations and scenarios.

  19. Tracing Method with Intra and Inter Protocols Correlation

    Directory of Open Access Journals (Sweden)

    Marin Mangri

    2009-05-01

    Full Text Available MEGACO or H.248 is a protocol enabling a centralized Softswitch (or MGC) to control MGs between Voice over Packet (VoP) networks and traditional ones. To analyze real implementations in more depth it is useful to use a tracing system with intra- and inter-protocol correlation. For this reason, in the case of MEGACO-H.248 it is necessary to find the appropriate method of correlation with all protocols involved. Starting from Rel4, a separation of CP (Control Plane) and UP (User Plane) management within the networks appears. The MEGACO protocol plays an important role in the migration to the new releases or from a monolithic platform to a network with distributed components.

  20. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  1. Advance in research on aerosol deposition simulation methods

    International Nuclear Information System (INIS)

    Liu Keyang; Li Jingsong

    2011-01-01

    A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effect of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used for the computation of total, regional and generation-specific deposition efficiencies. Continuously increasing computer power seems to allow us to study particle transport and deposition in more and more realistic airway geometries with the help of computational fluid dynamics (CFD) simulation methods. In this article, the trends in aerosol deposition models and lung models, and the methods for carrying out deposition simulations, are reviewed. (authors)

  2. The Moulded Site Data (MSD) wind correlation method: description and assessment

    Energy Technology Data Exchange (ETDEWEB)

    King, C.; Hurley, B.

    2004-12-01

    The long-term wind resource at a potential windfarm site may be estimated by correlating short-term on-site wind measurements with data from a regional meteorological station. A correlation method developed at Airtricity is described in sufficient detail to be reproduced. An assessment of its performance is also described; the results may serve as a guide to expected accuracy when using the method as part of an annual electricity production estimate for a proposed windfarm. (Author)
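
    A generic measure-correlate-predict step of this kind, on synthetic data (the specific construction of the MSD method is described in the report and differs from this plain linear fit):

```python
import numpy as np

rng = np.random.default_rng(9)

# Generic measure-correlate-predict: regress concurrent short-term site wind
# speeds on reference-station speeds, then apply the fit to the reference
# station's long-term record. All data below are synthetic.
ref_long = np.abs(rng.normal(7.0, 3.0, size=8760 * 10))   # 10 years, hourly
concurrent = ref_long[:8760]                               # 1 year of overlap
site_short = 0.9 * concurrent + 0.5 + rng.normal(0, 0.8, size=concurrent.size)

slope, intercept = np.polyfit(concurrent, site_short, 1)   # linear correlation fit
site_long_term_mean = slope * ref_long.mean() + intercept
print(f"estimated long-term site mean wind speed: {site_long_term_mean:.2f} m/s")
```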

  3. COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS

    Directory of Open Access Journals (Sweden)

    Benazir

    2017-09-01

    Full Text Available The research is aimed at reviewing two numerical methods for modeling the effect of coastal forest on tsunami run-up and at proposing an alternative approach. Two methods for modeling the effect of coastal forest, namely the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest by using an artificial Manning roughness coefficient. An alternative approach that simulates each of the trees as a vertical square column is introduced. Simulations were carried out with variations of forest density and layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern and at 50% forest density, which represents a watertight wall, both the ERM and CRM methods produced relatively high run-up, which theoretically should not happen. The alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with various sizes and layouts of obstacles, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.

  4. Improvement of correlated sampling Monte Carlo methods for reactivity calculations

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Asaoka, Takumi

    1978-01-01

    Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second-order change of a reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both unperturbed and perturbed systems are followed in a way that maintains a strong correlation between secondary neutrons in both systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so as to also estimate the statistical error of the calculated reactivity change. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has proved more useful than the similar flight path method for the analysis of the control rod worth. (auth.)

  5. Electromagnetic simulation using the FDTD method

    CERN Document Server

    Sullivan, Dennis M

    2013-01-01

    A straightforward, easy-to-read introduction to the finite-difference time-domain (FDTD) method Finite-difference time-domain (FDTD) is one of the primary computational electrodynamics modeling techniques available. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run and treat nonlinear material properties in a natural way. Written in a tutorial fashion, starting with the simplest programs and guiding the reader up from one-dimensional to the more complex, three-dimensional programs, this book provides a simple, yet comp
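
    In the spirit of the book's introductory programs, a minimal 1-D FDTD update loop in normalized units (a soft Gaussian source on a bare grid, with no absorbing boundaries):

```python
import numpy as np

# Minimal 1-D FDTD (Yee) update loop in normalized units: a Gaussian pulse
# propagating on a grid.
n_cells, n_steps = 200, 300
ex = np.zeros(n_cells)      # electric field
hy = np.zeros(n_cells)      # magnetic field

for t in range(1, n_steps + 1):
    # Update E from the curl of H (interior cells).
    ex[1:] += 0.5 * (hy[:-1] - hy[1:])
    # Soft source: inject a Gaussian pulse at the grid center.
    ex[n_cells // 2] += np.exp(-0.5 * ((t - 40) / 12.0) ** 2)
    # Update H from the curl of E.
    hy[:-1] += 0.5 * (ex[:-1] - ex[1:])

print("peak |Ex| after propagation:", float(np.abs(ex).max()))
```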

  6. A general method dealing with correlations in uncertainty propagation in fault trees

    International Nuclear Information System (INIS)

    Qin Zhang

    1989-01-01

    This paper deals with the correlations among the failure probabilities (frequencies) of not only identical basic events but also other basic events in a fault tree. It presents a general and simple method to include these correlations in uncertainty propagation. Two examples illustrate this method and show that neglecting these correlations results in a large underestimation of the top event failure probability (frequency). One is the failure of the primary pump in a chemical reactor cooling system; the other is an accident involving a road transport truck carrying toxic waste. (author)
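
    A Monte Carlo illustration of the underestimation effect (an assumed construction, since the paper's method is analytical): positively correlated basic-event probabilities raise the mean of an AND-gate top event above the value computed under independence.

```python
import numpy as np

rng = np.random.default_rng(10)

# Monte Carlo propagation of correlated basic-event failure probabilities
# through a toy fault tree (top event = A AND B). Correlation between the two
# lognormally distributed probabilities is induced via correlated normals.
rho = 0.8
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.normal(size=(200_000, 2)) @ L.T               # correlated standard normals
p_a = np.minimum(1.0, 1e-3 * np.exp(0.5 * z[:, 0]))   # lognormal, median 1e-3
p_b = np.minimum(1.0, 1e-3 * np.exp(0.5 * z[:, 1]))

p_top_corr = (p_a * p_b).mean()                        # AND gate with correlation
p_top_indep = p_a.mean() * p_b.mean()                  # what independence would give
print(f"correlated: {p_top_corr:.3e}  independent: {p_top_indep:.3e}")
```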

  7. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  8. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
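
    A sketch of the data-generating step that such a simulation study requires: a two-level logistic model with three correlated random effects (a random intercept and two random slopes). All parameter values and the covariance matrix are illustrative, not those of the study.

```python
import numpy as np

# Data-generating step for a two-level logistic model with three correlated
# random effects (intercept + two slopes); all values are illustrative.
rng = np.random.default_rng(1)
n_clusters, n_per = 200, 30
G = np.array([[1.0, 0.5, 0.3],            # random-effect covariance with
              [0.5, 1.0, 0.4],            # nontrivial correlations
              [0.3, 0.4, 1.0]])
beta = np.array([-0.5, 1.0, -0.8])        # fixed effects

b = rng.multivariate_normal(np.zeros(3), G, size=n_clusters)
data = []
for j in range(n_clusters):
    X = np.column_stack([np.ones(n_per), rng.standard_normal((n_per, 2))])
    eta = X @ (beta + b[j])               # cluster-specific coefficients
    y = rng.random(n_per) < 1.0 / (1.0 + np.exp(-eta))
    data.append((X, y))

print("overall outcome rate:", np.mean([y.mean() for _, y in data]))
```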

  9. Computerized simulation methods for dose reduction, in radiodiagnosis

    International Nuclear Information System (INIS)

    Brochi, M.A.C.

    1990-01-01

    The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. The parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation dose absorbed by the patient can be compared across techniques. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after its validity was verified experimentally, it was applied to breast and arm fracture radiographs. It was observed that the choice of the filter material is not an important factor, because aluminium, iron, copper, gadolinium, and other filters showed analogous behaviour. A method of comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)

  10. Atmospheric pollution measurement by optical cross correlation methods - A concept

    Science.gov (United States)

    Fisher, M. J.; Krause, F. R.

    1971-01-01

    The method combines standard spectroscopy with statistical cross-correlation analysis of two narrow light beams for remote sensing to detect foreign matter of given particulate size and consistency. It is applicable in studies of the generation and motion of clouds, nuclear debris, ozone, and radiation belts.

  11. Error-diffusion binarization for joint transform correlators

    Science.gov (United States)

    Inbar, Hanni; Mendlovic, David; Marom, Emanuel

    1993-02-01

    A normalized nonlinearly scaled binary joint transform image correlator (JTC) based on a 1D error-diffusion binarization method has been studied. The behavior of the error-diffusion method is compared with hard-clipping, the most widely used binarization method in JTC approaches using a single spatial light modulator. Computer simulations indicate that the error-diffusion method is advantageous for the production of a binarized power spectrum interference pattern in JTC configurations, leading to better definition of the correlation location. The error-diffusion binary JTC exhibits autocorrelation characteristics which are superior to those of the hard-clipping binary JTC over the whole nonlinear scaling range of the Fourier-transform interference intensity for all noise levels considered.
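
    A minimal one-dimensional error-diffusion binarizer, assuming the simplest kernel in which the whole quantization error is pushed onto the next sample; the authors' exact diffusion kernel may differ.

```python
import numpy as np

# 1D error-diffusion binarization: threshold each sample and push the whole
# quantization error onto the next sample (simplest kernel; the authors'
# exact kernel may differ).
def error_diffusion_1d(signal, threshold=0.5):
    out = np.zeros_like(signal)
    err = 0.0
    for i, s in enumerate(signal):
        v = s + err
        out[i] = 1.0 if v >= threshold else 0.0
        err = v - out[i]                  # diffuse the residual forward
    return out

x = np.linspace(0, 8 * np.pi, 512)        # toy fringe pattern as input
fringes = 0.5 * (1 + np.cos(x)) * np.exp(-((x - 4 * np.pi) ** 2) / 40)
binary = error_diffusion_1d(fringes)
print("fraction of 'on' pixels:", binary.mean())
```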

  12. Direct numerical simulation of turbulent pipe flow using the lattice Boltzmann method

    Science.gov (United States)

    Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping

    2018-03-01

    In this paper, we present a first direct numerical simulation (DNS) of a turbulent pipe flow using the mesoscopic lattice Boltzmann method (LBM) on both a D3Q19 lattice grid and a D3Q27 lattice grid. DNS of turbulent pipe flows using LBM has never been reported previously, perhaps due to the inaccuracy and numerical instability associated with previous implementations of LBM in the presence of a curved solid surface. In fact, it was even speculated that the D3Q19 lattice might be inappropriate as a DNS tool for turbulent pipe flows. In this paper, we show that, through careful implementation, accurate turbulent statistics can be obtained using both D3Q19 and D3Q27 lattice grids. In the simulation with the D3Q19 lattice, a few problems related to the numerical stability of the simulation are exposed, and discussions and solutions for those problems are provided. The simulation with the D3Q27 lattice, on the other hand, is found to be more stable than its D3Q19 counterpart. The resulting turbulent flow statistics at a friction Reynolds number of Reτ = 180 are compared systematically with both published experimental and other DNS results based on solving the Navier-Stokes equations. The comparisons cover the mean-flow profile, the r.m.s. velocity and vorticity profiles, the mean and r.m.s. pressure profiles, the velocity skewness and flatness, and spatial correlations and energy spectra of velocity and vorticity. Overall, we conclude that both D3Q19 and D3Q27 simulations yield accurate turbulent flow statistics. The use of the D3Q27 lattice is shown to suppress the weak secondary flow pattern in the mean flow due to numerical artifacts.

  13. A particle finite element method for machining simulations

    Science.gov (United States)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results of process parameters, such as the cutting force.

  14. Meshless Method for Simulation of Compressible Flow

    Science.gov (United States)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques, and mesh generation is an essential preprocessing step to discretize the computation domain. However, for complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust, yet simple numerical approach is used to simulate even complex problems in an easier manner. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable for everyone. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high speed compressible flow

  15. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^(-2)), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
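
    The adaptive fast/slow splitting and the multilevel coupling are beyond a short sketch, but the tau-leap ingredient used for the high-activity channels looks roughly as follows on a toy birth-death network; the rates and step size are invented for the example.

```python
import numpy as np

# Tau-leap ingredient on a toy birth-death network (X -> X+1 at rate c1,
# X -> X-1 at rate c2*X); the adaptive fast/slow splitting and the multilevel
# coupling of the paper are not reproduced here.
rng = np.random.default_rng(7)
c1, c2 = 10.0, 0.1
x, t, tau, T = 50, 0.0, 0.01, 10.0

while t < T:
    a = np.array([c1, c2 * x])            # propensities of the two channels
    k = rng.poisson(a * tau)              # Poisson number of firings in [t, t+tau)
    x = max(x + int(k[0]) - int(k[1]), 0) # apply stoichiometry (+1, -1)
    t += tau

print("X(T) =", x, "(stationary mean is c1/c2 =", c1 / c2, ")")
```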

  17. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    Science.gov (United States)

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to these parameters, no assumptions are made as regards linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model.

  18. Statistical analysis of simulation-generated time series : Systolic vs. semi-systolic correlation on the Connection Machine

    NARCIS (Netherlands)

    Dontje, T.; Lippert, Th.; Petkov, N.; Schilling, K.

    1992-01-01

    Autocorrelation is becoming an increasingly important tool for verifying improvements in the state of the simulational art in Lattice Gauge Theory. Semi-systolic and full-systolic algorithms are presented which are intensively used for correlation computations on the Connection Machine CM-2. The

  19. Finite element method for simulation of the semiconductor devices

    International Nuclear Information System (INIS)

    Zikatanov, L.T.; Kaschiev, M.S.

    1991-01-01

    An iterative method for solving the system of nonlinear equations of the drift-diffusion representation for the simulation of semiconductor devices is worked out. The Petrov-Galerkin method is used for the discretization of these equations with bilinear finite elements. It is shown that the numerical scheme is monotone and there are no oscillations of the solutions in the p-n transition region. Numerical calculations for the simulation of one semiconductor device are presented. 13 refs.; 3 figs

  20. Cross-correlation between EMG and center of gravity during quiet stance: theory and simulations.

    Science.gov (United States)

    Kohn, André Fabio

    2005-11-01

    Several signal processing tools have been employed in the experimental study of the postural control system in humans. Among them, the cross-correlation function has been used to analyze the time relationship between signals such as the electromyogram and the horizontal projection of the center of gravity. The common finding is that the electromyogram precedes the biomechanical signal, a result that has been interpreted in different ways, for example, the existence of feedforward control or the preponderance of a velocity feedback. It is shown here, analytically and by simulation, that the cross-correlation function is dependent in a complicated way on system parameters and on noise spectra. Results similar to those found experimentally, e.g., electromyogram preceding the biomechanical signal may be obtained in a postural control model without any feedforward control and without any velocity feedback. Therefore, correct interpretations of experimentally obtained cross-correlation functions may require additional information about the system. The results extend to other biomedical applications where two signals from a closed loop system are cross-correlated.
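
    The computation whose interpretation the paper cautions about is straightforward to reproduce: the sketch below cross-correlates two surrogate signals (a delayed, noisy copy rather than a postural-control model) and reports the lag of the correlation peak.

```python
import numpy as np

# Cross-correlation between two signals and the lag of its peak. Surrogate
# signals here (y is a delayed, noisy copy of x), not a postural-control model.
rng = np.random.default_rng(3)
fs, T, delay = 100, 60, 0.12              # Hz, seconds, true delay of y vs. x
n = fs * T
x = rng.standard_normal(n)
x = np.convolve(x, np.ones(25) / 25, mode="same")   # low-pass, sway-like
y = np.roll(x, int(delay * fs)) + 0.2 * rng.standard_normal(n)

xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
xc /= n * x.std() * y.std()               # normalize to correlation units
lags = np.arange(-n + 1, n) / fs          # positive lag: y lags behind x
print(f"peak correlation {xc.max():.2f} at lag {lags[xc.argmax()]:.3f} s")
```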

  1. Libraries for spectrum identification: Method of normalized coordinates versus linear correlation

    International Nuclear Information System (INIS)

    Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.

    2008-01-01

    In this work an easy solution based directly on linear algebra is proposed to obtain the relation between a spectrum and a spectrum base. This solution is based on the algebraic determination of the coordinates of an unknown spectrum with respect to a spectral library base. The identification capacities of this algebraic method and of the linear correlation method are compared using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows the existence of a mixture of several substances in a sample to be detected quantitatively and, consequently, impurities to be taken into account to improve the identification
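
    An algebraic reading of the proposal, with all implementation details assumed: the coordinates of an unknown spectrum with respect to a library base follow from a least-squares solve, and the coefficients expose a mixture that a single correlation coefficient blurs.

```python
import numpy as np

# Coordinates of an unknown spectrum with respect to a spectral library base
# via least squares; implementation details are assumptions. The coefficients
# expose a two-component mixture that correlation alone blurs.
rng = np.random.default_rng(5)
w = np.linspace(0, 1, 300)
lib = np.stack([np.exp(-((w - c) ** 2) / 0.005) for c in (0.3, 0.5, 0.7)])

# unknown sample: 80% substance 0 plus 20% substance 2, with noise
unknown = 0.8 * lib[0] + 0.2 * lib[2] + 0.01 * rng.standard_normal(w.size)

coords, *_ = np.linalg.lstsq(lib.T, unknown, rcond=None)
print("coordinates w.r.t. base:", np.round(coords, 3))    # ~ [0.8, 0.0, 0.2]

corr = [np.corrcoef(unknown, s)[0, 1] for s in lib]       # single-candidate view
print("correlation with each entry:", np.round(corr, 3))
```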

  2. Lagrangian velocity correlations in homogeneous isotropic turbulence

    International Nuclear Information System (INIS)

    Gotoh, T.; Rogallo, R.S.; Herring, J.R.; Kraichnan, R.H.

    1993-01-01

    The Lagrangian velocity autocorrelation and the time correlations for individual wave-number bands are computed by direct numerical simulation (DNS) using the passive vector method (PVM), and the accuracy of the method is studied. It is found that the PVM is accurate when K_max/k_d ≥ 2, where K_max is the maximum wave number carried in the simulation and k_d is the Kolmogorov wave number. The Eulerian and Lagrangian time correlations for various wave-number bands are compared. At moderate to high wave number the Eulerian time correlation decays faster than the Lagrangian, and the effect of sweep on the former is observed. The time scale of the Eulerian correlation is found to be (k U_0)^(-1) while that of the Lagrangian is [∫_0^k p^2 E(p) dp]^(-1/2). The Lagrangian velocity autocorrelation in a frozen turbulent field is computed using the DIA, ALHDIA, and LRA theories and is compared with DNS measurements. The Markovianized Lagrangian renormalized approximation (MLRA) is compared with the DNS, and good agreement is found for one-time quantities in decaying turbulence at low Reynolds numbers and for the Lagrangian velocity autocorrelation in stationary turbulence at moderate Reynolds number. The effect of non-Gaussianity on the Lagrangian correlation predicted by the theories is also discussed

  3. A simulation method for lightning surge response of switching power

    International Nuclear Information System (INIS)

    Wei, Ming; Chen, Xiang

    2013-01-01

    In order to meet the needs of protection design against lightning surge, a prediction method for the lightning electromagnetic pulse (LEMP) response based on system identification is presented. Surge-injection experiments on a switching power supply were conducted, and the input and output data were sampled, de-noised and de-trended. The model of the energy-coupling transfer function was then obtained by the system identification method. Simulation results show that the system identification method can predict the surge response of a linear circuit well. The method proposed in the paper provides a convenient and effective technique for the simulation of lightning effects.

  4. Methods for converging correlation energies within the dielectric matrix formalism

    Science.gov (United States)

    Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario

    2018-03-01

    Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.

  5. Parallel continuous simulated tempering and its applications in large-scale molecular simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Tianwu; Yu, Linglin; Zhang, Chong [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Ma, Jianpeng, E-mail: jpma@bcm.tmc.edu [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Verna and Marrs McLean Department of Biochemistry and Molecular Biology, Baylor College of Medicine, One Baylor Plaza, BCM-125, Houston, Texas 77030 (United States)

    2014-07-28

    In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in studying large complex systems. It mainly inherits the continuous simulated tempering (CST) method from our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, the PCST method requires very few copies of simulations, typically 2–3 copies, despite the large stride of the total temperature range, yet it is still capable of maintaining a high rate of exchange between neighboring copies. Furthermore, in the PCST method the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, thus providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems such as phase transitions and the dynamics of macromolecules in explicit solvent.
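
    For contrast with PCST's energy-independent exchange, the sketch below shows the conventional parallel-tempering Metropolis exchange criterion between two neighboring copies; this is the textbook ingredient the method improves upon, not the PCST rule itself.

```python
import numpy as np

# For contrast: the conventional parallel-tempering exchange step between two
# neighboring copies (textbook ingredient; PCST's exchange rule differs and
# is independent of the total potential energy).
def pt_swap(E1, E2, T1, T2, rng):
    """Metropolis acceptance for swapping configurations at T1 and T2."""
    delta = (1.0 / T1 - 1.0 / T2) * (E1 - E2)
    return rng.random() < min(1.0, np.exp(delta))

rng = np.random.default_rng(0)
trials = [pt_swap(-100.0, -95.0, 1.0, 1.2, rng) for _ in range(10_000)]
print("empirical swap acceptance:", np.mean(trials))
```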

  6. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk medium, there are two problems in calculating the correction factors of the detector: first, the detector is too small for enough particles to reach it and collide in it; second, the ratio of two such quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of detector correction factors. The results prove that, although all the variance reduction techniques combined with correlated sampling improve the calculation efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)

  7. The simulation methods based on 1D/3D collaborative computing for the vehicle integrated thermal management

    International Nuclear Information System (INIS)

    Lu, Pengyu; Gao, Qing; Wang, Yan

    2016-01-01

    Highlights:
    • A 1D/3D collaborative computing simulation method for vehicle thermal management.
    • Analysis of the influence of the thermodynamic systems and the engine compartment geometry on vehicle performance.
    • A basis for matching the energy consumption of the thermodynamic systems in the underhood.
    Abstract: The vehicle integrated thermal management, containing the engine cooling circuit, the air conditioning circuit, the turbocharged inter-cooled circuit, the engine lubrication circuit, etc., is an important means of enhancing power performance, improving economy, saving energy and reducing emissions. In this study, a 1D/3D collaborative simulation method is proposed, with the engine cooling circuit and the air conditioning circuit as the research objects. The mathematical characterization of the multiple thermodynamic systems is achieved by 1D calculation, and the underhood structure is described by 3D simulation. By analyzing the integrated heat transfer process of the engine compartment, the model of the integrated thermal management system is formed by coupling the cooling circuit and the air conditioning circuit. This collaborative simulation method establishes a structured correlation between engine-cooling and air-conditioning heat dissipation in the engine compartment, comprehensively analyzing the engine working process and the air conditioning operating process in order to study their interaction. In the calculation examples, the performance of the engine cooling circuit and the air conditioning circuit is evaluated by describing the influence of the system thermomechanical parameters and operating duty on the underhood heat transfer process, so as to achieve integrated optimization of the design and performance prediction of the multiple thermal systems.

  8. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling. Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
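
    A minimal instance of the book's subject: a Monte Carlo estimate of the failure probability of a small series-parallel system, with component failure probabilities assumed for illustration.

```python
import numpy as np

# Monte Carlo estimate of system failure probability: component 1 in series
# with the parallel pair (2, 3); failure probabilities assumed for illustration.
rng = np.random.default_rng(11)
n = 1_000_000
q = np.array([0.05, 0.10, 0.10])          # component failure probabilities
fail = rng.random((n, 3)) < q

system_fail = fail[:, 0] | (fail[:, 1] & fail[:, 2])
p_hat = system_fail.mean()
se = np.sqrt(p_hat * (1 - p_hat) / n)
print(f"P(system failure) = {p_hat:.5f} +/- {se:.5f}")
print("exact:", 0.05 + 0.01 - 0.05 * 0.01)   # q1 + q2*q3 - q1*q2*q3
```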

  9. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu [Department of Mechanical Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Prosperetti, Andrea [Department of Mechanical Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  10. A comparison of accuracy validation methods for genomic and pedigree-based predictions of swine litter size traits using Large White and simulated data.

    Science.gov (United States)

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2018-02-01

    The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
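
    A sketch of validation method (v), Y_ch, on simulated data: correlate predictions with corrected phenotypes and divide by the square root of the heritability. The true accuracy is planted at 0.6, so the rescaled estimate should recover it; all values are illustrative.

```python
import numpy as np

# Validation method (v), Y_ch: correlation between predictions and corrected
# phenotypes, divided by sqrt(heritability). Data simulated; true accuracy 0.6.
rng = np.random.default_rng(2)
n, h2 = 5000, 0.1                          # validation animals, heritability
g = rng.standard_normal(n)                 # true breeding values (variance 1)
y_c = g + rng.standard_normal(n) * np.sqrt((1 - h2) / h2)  # corrected phenotypes
ebv = 0.6 * g + 0.8 * rng.standard_normal(n)               # predictions

acc_raw = np.corrcoef(ebv, y_c)[0, 1]      # attenuated by phenotype noise
print(f"cor(EBV, y_c) = {acc_raw:.3f}; Y_ch accuracy = {acc_raw / np.sqrt(h2):.3f}")
```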

  11. Wind simulation for extreme and fatigue loads

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, M.; Larsen, G.C.; Mann, J.; Ott, S.; Hansen, K.S.; Pedersen, B.J.

    2004-01-01

    Measurements of atmospheric turbulence have been studied and found to deviate from a Gaussian process, in particular regarding the velocity increments over small time steps, where the tails of the pdf are exponential rather than Gaussian. Principles for extreme event counting and the occurrence of cascading events are presented. Empirical extreme statistics agree with Rice's exceedance theory, when it is assumed that the velocity and its time derivative are independent. Prediction based on the assumption that the velocity is a Gaussian process underpredicts the rate of occurrence of extreme events by many orders of magnitude, mainly because the measured pdf is non-Gaussian. Methods for the simulation of turbulent signals have been developed and their computational efficiency is considered. The methods are applicable to multiple processes with individual spectra and probability distributions. Non-Gaussian processes are simulated by the correlation-distortion method. Non-stationary processes are obtained by Bezier interpolation between a set of stationary simulations with identical random seeds. Simulation of systems with some signals available is enabled by conditional statistics. A versatile method for the simulation of extreme events has been developed. This will generate gusts, velocity jumps, extreme velocity shears, and sudden changes of wind direction. Gusts may be prescribed with a specified ensemble average shape, and it is possible to detect the critical gust shape for a given construction. The problem is formulated as the variational problem of finding the most probable adjustment of a standard simulation of a stationary Gaussian process subject to relevant event conditions, which are formulated as linear combinations of points in the realization. The method is generalized for multiple correlated series, multiple simultaneous conditions, and 3D fields of all velocity components. Generalizations are presented for a single non-Gaussian process subject to relatively
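
    A minimal sketch of the correlation-distortion idea: simulate a correlated Gaussian process (an AR(1) surrogate here, not the report's wind spectra) and map its marginal through a rank-preserving probability transform onto a heavier-tailed target distribution.

```python
import numpy as np
from scipy import stats

# Correlation-distortion sketch: a correlated Gaussian process (AR(1) surrogate)
# is mapped through a rank-preserving probability transform onto a Laplace
# marginal, giving the exponential tails seen in the measurements.
rng = np.random.default_rng(9)
n, phi = 100_000, 0.98                     # AR(1) coefficient sets the spectrum
eps = rng.standard_normal(n) * np.sqrt(1 - phi**2)
g = np.empty(n)
g[0] = rng.standard_normal()
for i in range(1, n):
    g[i] = phi * g[i - 1] + eps[i]         # correlated Gaussian process

u = stats.norm.cdf(g)                      # uniform marginal, ordering preserved
x = stats.laplace.ppf(u)                   # exponential-tailed series

print("excess kurtosis:", stats.kurtosis(x))   # ~3 for Laplace, 0 for Gaussian
```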

  12. Nonequilibrium relaxation method – An alternative simulation strategy

    Indian Academy of Sciences (India)

    One well-established simulation strategy to study the thermal phases and transitions of a given microscopic model system is the so-called equilibrium method, in which one first realizes the equilibrium ensemble of a finite system and then extrapolates the results to the infinite system. This equilibrium method traces over the ...

  13. Comparison of meaningful learning characteristics in simulated nursing practice after traditional versus computer-based simulation method: a qualitative videography study.

    Science.gov (United States)

    Poikela, Paula; Ruokamo, Heli; Teräs, Marianne

    2015-02-01

    Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods supported students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in simulated nursing practice settings at two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used. The data were collected using video recordings and analyzed by videography. The students who used the computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students are able to receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals.

    Science.gov (United States)

    Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch

    2011-11-01

    We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to random behavior at all scales, thus indicating a much stronger effect of the Centro-Symmetry method compared to the Floor and Symmetry methods. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
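
    The Floor coarse-graining as commonly defined, x_q = Δ·floor(x/Δ) (whether constants match the paper's exact convention is an assumption), applied below to a long-range correlated toy signal generated by Fourier filtering.

```python
import numpy as np

# Floor coarse-graining, x_q = delta * floor(x / delta), applied to a
# long-range correlated toy signal produced by Fourier filtering.
rng = np.random.default_rng(4)

def correlated_noise(n, alpha, rng):
    """Noise with DFA scaling exponent ~alpha (alpha < 0.5: anti-correlated)."""
    beta = 2 * alpha - 1                  # power spectrum S(f) ~ f**(-beta)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                           # sidestep the f = 0 singularity
    spec = f ** (-beta / 2) * np.exp(2j * np.pi * rng.random(f.size))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

def floor_coarse_grain(x, delta):
    return delta * np.floor(x / delta)

x = correlated_noise(2**14, 0.2, rng)     # anti-correlated signal
for delta in (0.5, 1.0, 2.0):
    xq = floor_coarse_grain(x, delta)
    print(f"Delta = {delta}: rms quantization error {np.std(xq - x):.3f}")
```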

  15. Bragg's Law diffraction simulations for electron backscatter diffraction analysis

    International Nuclear Information System (INIS)

    Kacher, Josh; Landon, Colin; Adams, Brent L.; Fullwood, David

    2009-01-01

    In 2006, Angus Wilkinson introduced a cross-correlation-based electron backscatter diffraction (EBSD) texture analysis system capable of measuring lattice rotations and elastic strains to high resolution. A variation of the cross-correlation method is introduced that uses Bragg's Law-based simulated EBSD patterns as strain-free reference patterns, which facilitates the use of the cross-correlation method with polycrystalline materials. The lattice state is found by comparing simulated patterns to collected patterns at a number of regions on the pattern using the cross-correlation function and calculating the deformation from the measured shifts of each region. A new pattern can be simulated at the deformed state, and the process can be iterated a number of times to converge on the absolute lattice state. By analyzing an iteratively rotated single-crystal silicon sample and recovering the rotation, this method is shown to have an angular resolution of ~0.04° and an elastic strain resolution of ~7×10^-4. As an example of applications, elastic strain and curvature measurements are used to estimate the dislocation density in a single grain of a compressed polycrystalline Mg-based AZ91 alloy.

  16. Correlation between centre offsets and gas velocity dispersion of galaxy clusters in cosmological simulations

    Science.gov (United States)

    Li, Ming-Hua; Zhu, Weishan; Zhao, Dong

    2018-05-01

    The gas is the dominant component of baryonic matter in most galaxy groups and clusters. The spatial offset of the gas centre from the halo centre can be an indicator of the dynamical state of the cluster. Knowledge of such offsets is important for estimating the uncertainties when using clusters as cosmological probes. In this paper, we study the centre offsets r_off between the gas and all the matter within halo systems in ΛCDM cosmological hydrodynamic simulations. We focus on two kinds of centre offsets: one is the three-dimensional PB offset between the gravitational potential minimum of the entire halo and the barycentre of the ICM, and the other is the two-dimensional PX offset between the potential minimum of the halo and the iterative centroid of the projected synthetic X-ray emission of the halo. Haloes at higher redshifts tend to have larger rescaled offsets r_off/r_200 and larger gas velocity dispersions σ_v^gas/σ_200. For both types of offsets, we find that the correlation between the rescaled centre offset r_off/r_200 and the rescaled 3D gas velocity dispersion σ_v^gas/σ_200 can be approximately described by a quadratic function, r_off/r_200 ∝ (σ_v^gas/σ_200 − k_2)^2. A Bayesian analysis with an MCMC method is employed to estimate the model parameters. The dependence of the correlation on redshift and on the gas mass fraction is also investigated.

  17. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definition from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form an executable model of the mission profile. Finally, taking an air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.

  18. Spectral methods in numerical plasma simulation

    International Nuclear Information System (INIS)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
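
    The doubly periodic ingredient described above can be sketched in a few lines: a Fourier spectral solve of the Poisson equation on a periodic box (the annulus Chebyshev-Fourier algorithm of the text is not reproduced here).

```python
import numpy as np

# Fourier spectral solve of the Poisson equation on a doubly periodic box;
# the annulus Chebyshev-Fourier algorithm of the text is not reproduced here.
n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(3 * X) * np.cos(2 * Y)                 # zero-mean right-hand side

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # integer wavenumbers for L=2*pi
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                    # avoid dividing the zero mode

phi_hat = -np.fft.fft2(f) / k2                    # solve -k^2 phi_hat = f_hat
phi_hat[0, 0] = 0.0                               # fix the arbitrary mean
phi = np.real(np.fft.ifft2(phi_hat))

exact = -np.sin(3 * X) * np.cos(2 * Y) / 13.0     # since (3^2 + 2^2) = 13
print("max error:", np.abs(phi - exact).max())
```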

  19. Activity coefficients from molecular simulations using the OPAS method

    Science.gov (United States)

    Kohns, Maximilian; Horsch, Martin; Hasse, Hans

    2017-10-01

    A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.

  20. Computerized method for X-ray angular distribution simulation in radiological systems

    International Nuclear Information System (INIS)

    Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.

    1996-01-01

    A method to simulate the changes in the X-ray angular distribution (the Heel effect) in radiologic imaging systems is presented. The simulation method is designed to predict images for any exposure technique, considering that this distribution causes the intensity variation along the radiation field.

  1. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

    Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, in terms of current technology, these methods exhibit low performance, being able to manipulate only a single decision variable at a time. Thus, the objective of this article is to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time when compared to software packages on the market. It should be emphasized that the quality of the response variable will not be altered; that is, the proposed method will maintain the solutions' effectiveness. The study draws a comparison between the proposed method and a simulation optimization tool already available on the market which has been examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.

  2. Petascale Many Body Methods for Complex Correlated Systems

    Science.gov (United States)

    Pruschke, Thomas

    2012-02-01

    Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and of many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.

  3. A New Method to Simulate Free Surface Flows for Viscoelastic Fluid

    Directory of Open Access Journals (Sweden)

    Yu Cao

    2015-01-01

    Full Text Available Free surface flows arise in a variety of engineering applications. To predict the dynamic characteristics of such problems, specific numerical methods are required to accurately capture the shape of the free surface. This paper proposes a new method which combines the Arbitrary Lagrangian-Eulerian (ALE) technique with the Finite Volume Method (FVM) to simulate time-dependent viscoelastic free surface flows. Based on an open source CFD toolbox called OpenFOAM, we designed an ALE-FVM free surface simulation platform. The die-swell flow has been investigated with the proposed platform to provide a further analysis of the free surface phenomenon. The results validated the correctness and effectiveness of the proposed method for free surface simulation of both Newtonian and viscoelastic fluids.

  4. Solar panel thermal cycling testing by solar simulation and infrared radiation methods

    Science.gov (United States)

    Nuss, H. E.

    1980-01-01

    For the solar panels of the European Space Agency (ESA) satellites OTS/MAROTS and ECS/MARECS, the thermal cycling tests were performed using solar simulation methods. The performance data of the two different solar simulators used and the thermal test results are described. The solar simulation thermal cycling tests for the ECS/MARECS solar panels were carried out with the aid of a rotatable multipanel test rig, by which simultaneous testing of three solar panels was possible. As an alternative thermal test method, the capability of an infrared radiation method was studied, and infrared simulation tests for the ultralight panel and the INTELSAT 5 solar panels were performed. The setup and characteristics of the infrared radiation unit, which uses a quartz lamp array of approx. 15 sq and an LN2-cooled shutter, and the thermal test results are presented. The irradiation uniformity, solar panel temperature distribution, and temperature changing rates for both test methods are compared. The results indicate that infrared simulation is an effective solar panel thermal testing method.

  6. Lagrangian numerical methods for ocean biogeochemical simulations

    Science.gov (United States)

    Paparella, Francesco; Popolizio, Marina

    2018-05-01

    We propose two closely related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods consist in augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion, or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.

  7. Development of porous structure simulator for multi-scale simulation of irregular porous catalysts

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Suzuki, Ai; Sahnoun, Riadh; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A.; Miyamoto, Akira

    2008-01-01

    Efficient development of highly functional porous materials, used as catalysts in the automobile industry, demands meticulous knowledge of the nano-scale interface at the electronic and atomistic scales. However, it is often difficult to correlate the microscopic interfacial interactions with macroscopic characteristics of the materials; for instance, the interaction between a precious metal and its support oxide with the long-term sintering properties of the catalyst. Multi-scale computational chemistry approaches can contribute to bridging the gap between micro- and macroscopic characteristics of these materials; however, this type of multi-scale simulation has been difficult to apply, especially to porous materials. To overcome this problem, we have developed a novel mesoscopic approach based on a porous structure simulator. This simulator can automatically construct irregular porous structures on a computer, enabling simulations with complex meso-scale structures. Moreover, in this work we have developed a new method to simulate the long-term sintering properties of metal particles on porous catalysts. Finally, we have applied the method to the simulation of the sintering properties of Pt on an alumina support. This newly developed method has enabled us to propose a multi-scale simulation approach for porous catalysts

  8. Clinical simulation as an evaluation method in health informatics

    DEFF Research Database (Denmark)

    Jensen, Sanne

    2016-01-01

    Safe work processes and information systems are vital in health care. Methods for designing health IT with a focus on patient safety are one of many initiatives aimed at preventing adverse events. Possible patient safety hazards need to be investigated before health IT is integrated with local clinical... work practice, including other technology and organizational structure. Clinical simulation is ideal for proactive evaluation of new technology for clinical work practice. Clinical simulations involve real end-users as they simulate the use of technology in realistic environments performing realistic... tasks. A clinical simulation study assesses effects on clinical workflow and enables identification and evaluation of patient safety hazards before implementation at a hospital. Clinical simulation also offers an opportunity to create a space in which healthcare professionals working in different...

  9. Biasing transition rate method based on direct MC simulation for probabilistic safety assessment

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang

    2017-01-01

    Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method that accounts for the dynamics of the system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve that problem. The method biases the transition rates of the components by adding virtual components to them in series, thereby increasing the occurrence probability of the rare event and hence decreasing the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation. The performance is greatly improved by the biasing transition rate method.
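
    A minimal likelihood-ratio version of the idea: sample a component's failure time from a biased, larger exponential rate so that rare failures before the mission time occur often, then reweight each history. This illustrates the variance-reduction mechanism, not the paper's virtual-component construction.

```python
import numpy as np

# Minimal likelihood-ratio version of the idea: sample failure times from a
# biased (larger) exponential rate so rare failures occur often, then reweight.
# This is not the paper's virtual-component construction itself.
rng = np.random.default_rng(8)
lam, lam_b, T = 1e-4, 1e-2, 10.0     # true rate, biased rate, mission time
n = 100_000

t = rng.exponential(1.0 / lam_b, n)             # biased failure times
w = (lam / lam_b) * np.exp(-(lam - lam_b) * t)  # likelihood ratio pdf_true/pdf_biased
est = np.mean((t < T) * w)
se = np.std((t < T) * w, ddof=1) / np.sqrt(n)

print(f"IS estimate:  {est:.3e} +/- {se:.1e}")
print(f"exact:        {1 - np.exp(-lam * T):.3e}")   # ~ lam*T = 1e-3
```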

  10. Vectorization of a particle simulation method for hypersonic rarefied flow

    Science.gov (United States)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.

  11. Vectorization of a particle simulation method for hypersonic rarefied flow

    International Nuclear Information System (INIS)

    Mcdonald, J.D.; Baganoff, D.

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry. 14 references

  12. A calculation method for RF couplers design based on numerical simulation by microwave studio

    International Nuclear Information System (INIS)

    Wang Rong; Pei Yuanji; Jin Kai

    2006-01-01

    A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)

  13. The Predictive Value of Ultrasound Learning Curves Across Simulated and Clinical Settings

    DEFF Research Database (Denmark)

    Madsen, Mette E; Nørgaard, Lone N; Tabor, Ann

    2017-01-01

    OBJECTIVES: The aim of the study was to explore whether learning curves on a virtual-reality (VR) sonographic simulator can be used to predict subsequent learning curves on a physical mannequin and learning curves during clinical training. METHODS: Twenty midwives completed a simulation-based training program in transvaginal sonography. The training was conducted on a VR simulator as well as on a physical mannequin. A subgroup of 6 participants underwent subsequent clinical training. During each of the 3 steps, the participants' performance was assessed using instruments with established... settings. RESULTS: A good correlation was found between the time needed to achieve predefined performance levels on the VR simulator and on the physical mannequin (Pearson correlation coefficient .78; P ...). Performance on the VR simulator correlated well to the clinical performance scores (Pearson...

  14. Local normalization: Uncovering correlations in non-stationary financial time series

    Science.gov (United States)

    Schäfer, Rudi; Guhr, Thomas

    2010-09-01

    The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
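
    As a concrete sketch of such a local normalization — assuming a simple trailing window, whose length is an illustrative choice rather than the authors' — each observation is centered by a local mean and scaled by a local standard deviation:

      import numpy as np

      def local_normalize(x, w=13):
          # Subtract a trailing local mean and divide by a trailing local
          # standard deviation; cross-correlations between series normalized
          # this way are less distorted by local trends and variable volatility.
          x = np.asarray(x, dtype=float)
          out = np.full_like(x, np.nan)
          for t in range(w, len(x)):
              win = x[t - w:t + 1]
              out[t] = (x[t] - win.mean()) / win.std(ddof=1)
          return out

    Applying local_normalize to two return series before computing np.corrcoef is the kind of pre-processing step the abstract tests in its Monte Carlo simulation.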

  15. Contribution of the ultrasonic simulation to the testing methods qualification process

    International Nuclear Information System (INIS)

    Le Ber, L.; Calmon, P.; Abittan, E.

    2001-01-01

    The CEA and EDF have started a study concerning the value of simulation in the qualification of ultrasonic testing methods for nuclear components. In this framework, the simulation tools of the CEA, such as CIVA, have been tested on real inspections. The method and the results obtained on some examples are presented. (A.L.B.)

  16. Prospects of Frequency-Time Correlation Analysis for Detecting Pipeline Leaks by Acoustic Emission Method

    International Nuclear Information System (INIS)

    Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E

    2014-01-01

    In the current work the relevance of developing nondestructive testing methods for pipeline leak detection is considered. It is shown that acoustic emission testing is currently one of the most widespread leak detection methods. The main disadvantage of this method is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application in leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of a time-frequency correlation function is proposed. This function represents the correlation between the spectral components of the analyzed signals. In this work, the technique for calculating the time-frequency correlation function is described. The experimental data presented demonstrate an obvious advantage of the time-frequency correlation function compared to the simple correlation function. The time-frequency correlation function is more effective at suppressing the noise components in the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying the time-frequency correlation function to leak detection problems is the great number of calculations, which may further increase pipeline inspection time. However, this drawback can be partially reduced by the development and implementation of efficient algorithms (including parallel ones) for computing the fast Fourier transform using the computer's central processing unit and graphics processing unit.
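
    The abstract does not give the computation in detail; one plausible reading — an assumption, not the authors' exact definition — is to cross-correlate the two sensor signals per frequency bin of a short-time Fourier transform:

      import numpy as np
      from scipy.signal import stft

      def tf_correlation(x, y, fs, nperseg=256):
          # Cross-correlate the STFT magnitudes of two leak-noise signals
          # separately in every frequency bin, yielding a correlation
          # surface over (frequency, lag).
          f, t, X = stft(x, fs=fs, nperseg=nperseg)
          _, _, Y = stft(y, fs=fs, nperseg=nperseg)
          n = X.shape[1]
          corr = np.empty((len(f), 2 * n - 1))
          for i in range(len(f)):
              a = np.abs(X[i]) - np.abs(X[i]).mean()
              b = np.abs(Y[i]) - np.abs(Y[i]).mean()
              corr[i] = np.correlate(a, b, mode="full")
          return f, corr

    Restricting attention to the bins that carry the useful signal is what suppresses out-of-band noise and sharpens the correlation maximum, as described above.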

  17. Shale characteristics impact on Nuclear Magnetic Resonance (NMR) fluid typing methods and correlations

    Directory of Open Access Journals (Sweden)

    Mohamed Mehana

    2016-06-01

    The development of shale reservoirs has brought a paradigm shift in the worldwide energy equation. This entails developing robust techniques to properly evaluate and unlock the potential of those reservoirs. The application of Nuclear Magnetic Resonance techniques to fluid typing and property estimation is well developed in conventional reservoirs. However, shale reservoir characteristics such as pore size, organic matter, clay content, wettability, adsorption, and mineralogy limit the applicability of the usual interpretation methods and correlations. Some of these limitations include the inapplicability of the governing equations that were derived assuming a fast relaxation regime, the overlap of different fluids' peaks, and the lack of robust correlations to estimate fluid properties in shale. This study presents a state-of-the-art review of the main contributions on fluid typing methods and correlations, on both the experimental and theoretical side. The study covers Dual Tw, Dual Te, and doping agent applications, as well as the T1-T2, D-T2 and T2sec vs. T1/T2 methods. In addition, the estimation of fluid properties such as density, viscosity and the gas-oil ratio is discussed. This study investigates the applicability of these methods along with the current fluid property correlations and their limitations. Moreover, it recommends the appropriate methods and correlations capable of tackling shale heterogeneity.

  18. Correlations between different methods of UO2 pellet density measurement

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki

    1977-07-01

    The density of UO2 pellets was measured by three different methods, i.e., geometrical, water-immersion and meta-xylene-immersion, and the results were treated statistically to find out the correlations between the methods. The UO2 pellets are of six kinds but with the same specifications. The correlations are linear 1:1 for pellets of 95% theoretical density and above, but do not hold below that level and vary statistically due to the interaction between open and closed pores. (auth.)

  19. Steam generator tube rupture simulation using extended finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Mohanty, Subhasish, E-mail: smohanty@anl.gov; Majumdar, Saurin; Natesan, Ken

    2016-08-15

    Highlights: • Extended finite element method used for modeling steam generator tube rupture. • Crack propagation is modeled along an arbitrary solution-dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident conditions. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of the extended finite element capability of the commercially available ABAQUS software to model SG tubes with preexisting flaws and to estimate their rupture pressures. For this purpose, elastic-plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between the extended finite element model results and the experimental results.

  20. Steam generator tube rupture simulation using extended finite element method

    International Nuclear Information System (INIS)

    Mohanty, Subhasish; Majumdar, Saurin; Natesan, Ken

    2016-01-01

    Highlights: • Extended finite element method used for modeling steam generator tube rupture. • Crack propagation is modeled along an arbitrary solution-dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident conditions. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of the extended finite element capability of the commercially available ABAQUS software to model SG tubes with preexisting flaws and to estimate their rupture pressures. For this purpose, elastic-plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between the extended finite element model results and the experimental results.

  1. Simulation methods supporting homologation of Electronic Stability Control in vehicle variants

    Science.gov (United States)

    Lutz, Albert; Schick, Bernhard; Holzmann, Henning; Kochem, Michael; Meyer-Tuve, Harald; Lange, Olav; Mao, Yiqin; Tosolin, Guido

    2017-10-01

    Vehicle simulation has a long tradition in the automotive industry as a powerful supplement to physical vehicle testing. In the field of Electronic Stability Control (ESC) systems, the simulation process is well established to support ESC development and application by suppliers and Original Equipment Manufacturers (OEMs). The latest regulation of the United Nations Economic Commission for Europe, UN/ECE-R 13, also allows for simulation-based homologation. This extends the usage of simulation from ESC development to homologation. This paper gives an overview of simulation methods, as well as processes and tools used for the homologation of ESC in vehicle variants. The paper first describes the generic homologation process according to the European Regulation (UN/ECE-R 13H, UN/ECE-R 13/11) and U.S. Federal Motor Vehicle Safety Standard (FMVSS 126). Subsequently the ESC system is explained, as well as the generic application and release process on the supplier and OEM side. Turning to the simulation methods, the ESC development and application process needs to be adapted for virtual vehicles. The simulation environment, consisting of the vehicle model, ESC model and simulation platform, is explained in detail with some exemplary use-cases. In the final section, examples of simulation-based ESC homologation in vehicle variants are shown for passenger cars, light trucks, heavy trucks and trailers. This paper is targeted to give a state-of-the-art account of the simulation methods supporting the homologation of ESC systems in vehicle variants. However, the described approach and the lessons learned can be used as a reference in the future for an extended usage of simulation-supported releases of the ESC system, up to the development and release of driver assistance systems.

  2. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and the execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results produced by the two methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently with lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
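
    A toy version of the comparison — with invented weights and alternative values, and a deliberately simplified AHP that derives consistent pairwise matrices from the scores, which is an assumption rather than the study's setup — can be written as:

      import numpy as np

      w = np.array([0.5, 0.3, 0.2])            # criteria weights (invented)
      A = np.array([[0.7, 0.4, 0.9],           # rows: alternatives,
                    [0.6, 0.8, 0.3]])          # cols: value on each criterion

      # WSM: weighted sum of criterion values
      wsm = A @ w

      # AHP (simplified): consistent pairwise matrix a_ij = v_i / v_j per
      # criterion, principal eigenvector as local priorities, then weight-sum
      def ahp_priorities(col):
          M = np.outer(col, 1.0 / col)
          vals, vecs = np.linalg.eig(M)
          v = np.real(vecs[:, np.argmax(np.real(vals))])
          return v / v.sum()

      ahp = sum(w[j] * ahp_priorities(A[:, j]) for j in range(A.shape[1]))
      print(wsm, ahp)

    For consistent pairwise matrices the AHP score reduces to a column-normalized weighted sum, which hints at why the two methods so often rank alternatives the same way, in line with the study's conclusion.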

  3. Towards numerical simulations of supersonic liquid jets using ghost fluid method

    International Nuclear Information System (INIS)

    Majidi, Sahand; Afshari, Asghar

    2015-01-01

    Highlights: • A ghost fluid method based solver is developed for numerical simulation of compressible multiphase flows. • The performance of the numerical tool is validated via several benchmark problems. • Emergence of supersonic liquid jets in a quiescent gaseous environment is simulated using the ghost fluid method for the first time. • Bow-shock formation ahead of the liquid jet is clearly observed in the obtained numerical results. • Radiation of Mach waves from the phase interface witnessed experimentally is evidently captured in our numerical simulations. - Abstract: A computational tool based on the ghost fluid method (GFM) is developed to study supersonic liquid jets involving strong shocks and contact discontinuities with high density ratios. The solver utilizes a constrained reinitialization method and is capable of switching between the exact and approximate Riemann solvers to increase robustness. The numerical methodology is validated through several benchmark test problems; these include the one-dimensional multiphase shock tube problem, shock-bubble interaction, air cavity collapse in water, and underwater explosion. A comparison between our results and numerical and experimental observations indicates that the developed solver performs well on these problems. The code is then used to simulate the emergence of a supersonic liquid jet into a quiescent gaseous medium, which is studied here by a ghost fluid method for the very first time. The results of the simulations are in good agreement with the experimental investigations. Also some well-known flow characteristics, like the propagation of pressure waves from the liquid jet interface and the dependence of the Mach cone structure on the inlet Mach number, are reproduced numerically. The numerical simulations conducted here suggest that the ghost fluid method is an affordable and reliable scheme to study complicated interfacial evolutions in complex multiphase systems such as supersonic liquid jets.

  4. A two-phase pressure drop calculation code based on a new method with a correlation factor obtained from an assessment of existing correlations

    International Nuclear Information System (INIS)

    Chun, Moon Hyun; Oh, Jae Guen

    1989-01-01

    Ten methods for predicting the total two-phase pressure drop, based on five existing models and correlations, have been examined for their accuracy and applicability to pressurized water reactor conditions. These methods were tested against 209 experimental data points for local and bulk boiling conditions: each correlation was evaluated for different ranges of pressure, mass velocity and quality, and the best performing models were identified for each data subset. A computer code entitled 'K-TWOPD' has been developed to calculate the total two-phase pressure drop using the best performing existing correlation for a specific property range and a correction factor to compensate for the prediction error of the selected correlations. Assessment of this code shows that the present method fits all the available data within ±11% at a 95% confidence level, compared with ±25% for the existing correlations. (Author)

  5. Self-calibrated correlation imaging with k-space variant correlation functions.

    Science.gov (United States)

    Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J

    2018-03-01

    Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimation of correlation functions. The presented work aims to demonstrate that this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging, which relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging, in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real-time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. A Simulation Method Measuring Psychomotor Nursing Skills.

    Science.gov (United States)

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.

  7. Discrete Particle Method for Simulating Hypervelocity Impact Phenomena

    Directory of Open Access Journals (Sweden)

    Erkai Watson

    2017-04-01

    In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events for velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.

  8. Evaluation of a scatter correction technique for single photon transmission measurements in PET by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wegmann, K.; Brix, G.

    2000-01-01

    Purpose: Single photon transmission (SPT) measurements offer a new approach for the determination of attenuation correction factors (ACF) in PET. It was the aim of the present work to evaluate a scatter correction algorithm proposed by C. Watson by means of Monte Carlo simulations. Methods: SPT measurements with a Cs-137 point source were simulated for a whole-body PET scanner (ECAT EXACT HR+) in both the 2D and 3D mode. To examine the scatter fraction (SF) in the transmission data, the detected photons were classified as unscattered or scattered. The simulated data were used to determine (i) the spatial distribution of the SFs, (ii) an ACF sinogram from all detected events (ACF_tot) and (iii) from the unscattered events only (ACF_unscattered), and (iv) an ACF_cor = (ACF_tot)^(1+κ) sinogram corrected according to the Watson algorithm. In addition, density images were reconstructed in order to quantitatively evaluate linear attenuation coefficients. Results: A high correlation was found between the SF and the ACF_tot sinograms. For the cylinder and the EEC phantom, similar correction factors κ were estimated. The determined values resulted in an accurate scatter correction in both the 2D and 3D mode. (orig.) [de

  9. Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.

    Science.gov (United States)

    Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd

    2018-02-01

    There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.

  10. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy Simulation.

  11. Estimation of rank correlation for clustered data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator of the Pearson correlation (ρ_xy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρ_xy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
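
    Steps (i)-(iii) translate directly into code. The sketch below covers the i.i.d. case only (the paper's contribution is the clustered extension via mixed models, which is not reproduced here); step (iii) uses the bivariate-normal identity r_s = (6/π) arcsin(ρ/2):

      import numpy as np
      from scipy.stats import rankdata, norm, pearsonr

      def rank_correlation_probit(x, y):
          n = len(x)
          # (i) ranks -> uniform scores in (0, 1), then probit transform
          zx = norm.ppf((rankdata(x) - 0.5) / n)
          zy = norm.ppf((rankdata(y) - 0.5) / n)
          # (ii) Pearson correlation between the probit scores
          rho, _ = pearsonr(zx, zy)
          # (iii) convert back to rank correlation via the normal identity
          return (6.0 / np.pi) * np.arcsin(rho / 2.0)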

  12. Flat-histogram methods in quantum Monte Carlo simulations: Application to the t-J model

    International Nuclear Information System (INIS)

    Diamantis, Nikolaos G.; Manousakis, Efstratios

    2016-01-01

    We discuss how flat-histogram techniques can be applied in the sampling of quantum Monte Carlo simulations in order to improve the statistical quality of the results at long imaginary time or low excitation energy. Typical imaginary-time correlation functions calculated in quantum Monte Carlo are subject to exponentially growing errors as the range of imaginary time grows, and this smears the information on the low energy excitations. We show that we can extract the low energy physics by modifying the Monte Carlo sampling technique to one in which configurations which contribute to making the histogram of certain quantities flat are promoted. We apply the diagrammatic Monte Carlo (diag-MC) method to the motion of a single hole in the t-J model and we show that the implementation of flat-histogram techniques allows us to calculate the Green's function over a wide range of imaginary time. In addition, we show that applying the flat-histogram technique alleviates the "sign" problem associated with the simulation of the single-hole Green's function at long imaginary time. (paper)
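
    To make the flat-histogram idea concrete outside the diag-MC context, here is a generic Wang-Landau-type sketch on a periodic 1-D Ising chain — a textbook setting chosen purely for illustration, far simpler than the single-hole Green's function sampling of the paper:

      import numpy as np

      rng = np.random.default_rng(0)
      N = 16
      spins = rng.choice([-1, 1], N)
      E = -np.sum(spins * np.roll(spins, 1))          # periodic chain energy
      # reachable energies of the periodic chain: -N, -N+4, ..., N
      levels = {e: i for i, e in enumerate(range(-N, N + 1, 4))}
      lng = np.zeros(len(levels))                     # running estimate of log g(E)
      hist = np.zeros(len(levels))
      f = 1.0                                         # modification factor (log scale)
      while f > 1e-4:
          for _ in range(10_000):
              k = rng.integers(N)
              dE = 2 * spins[k] * (spins[k - 1] + spins[(k + 1) % N])
              # promote moves toward rarely visited energies: accept with g(E)/g(E')
              if rng.random() < np.exp(lng[levels[E]] - lng[levels[E + dE]]):
                  spins[k] *= -1
                  E += dE
              lng[levels[E]] += f
              hist[levels[E]] += 1
          if hist.min() > 0.8 * hist.mean():          # flatness check
              hist[:] = 0
              f /= 2

    The promotion rule is the same in spirit as the abstract's: configurations that would flatten the histogram are favored, so rare (here, extreme-energy) states are visited often enough to be measured accurately.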

  13. Comparing three methods for participatory simulation of hospital work systems

    DEFF Research Database (Denmark)

    Broberg, Ole; Andersen, Simone Nyholm

    Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low resolution table-top setup using Lego figures, full scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects, through differences in fidelity and affordance, ... scenarios using the objects. Results: Full scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint based simulation addressed...

  14. A Ten-Step Design Method for Simulation Games in Logistics Management

    NARCIS (Netherlands)

    Fumarola, M.; Van Staalduinen, J.P.; Verbraeck, A.

    2011-01-01

    Simulation games have often been found useful as a method of inquiry to gain insight into complex system behavior and as aids for design, engineering simulation and visualization, and education. Designing simulation games is the result of creative thinking and planning, but often not the result of a...

  15. Numerical simulation of electromagnetic wave propagation using time domain meshless method

    International Nuclear Information System (INIS)

    Ikuno, Soichiro; Fujita, Yoshihisa; Itoh, Taku; Nakata, Susumu; Nakamura, Hiroaki; Kamitani, Atsushi

    2012-01-01

    The electromagnetic wave propagation in variously shaped wave guides is simulated using the meshless time domain method (MTDM). Generally, the Finite Difference Time Domain (FDTD) method is applied for electromagnetic wave propagation simulation. However, the numerical domain must be divided into rectangular meshes if the FDTD method is applied. On the other hand, the node disposition of MTDM can easily describe the structure of an arbitrarily shaped wave guide. This is the large advantage of the meshless time domain method. The results of the computations show that the damping rate is stably calculated in the case with R < 0.03, where R denotes the support radius of the weight function for the shape function. The results also indicate that the support radius R of the weight functions should be selected small, and that monomials must be used for calculating the shape functions. (author)

  16. New method of fast simulation for a hadron calorimeter response

    International Nuclear Information System (INIS)

    Kul'chitskij, Yu.; Sutiak, J.; Tokar, S.; Zenis, T.

    2003-01-01

    In this work we present a new method for fast Monte-Carlo simulation of a hadron calorimeter response. It is based on a three-dimensional parameterization of the hadronic shower obtained from the ATLAS TILECAL test beam data and GEANT simulations. A new approach to including the longitudinal fluctuations of the hadronic shower is described. The results of the fast simulation are in good agreement with the TILECAL experimental data.

  17. Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines

    Directory of Open Access Journals (Sweden)

    Ivo Prah

    2016-09-01

    The paper outlines a procedure for the computer-controlled calibration of a combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine the input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. Physically based methods were used for steering the division of the integral ICE model into several sub-models and for determining the parameters of selected components from their governing equations. The innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established methods that rely only on optimization techniques, for successful calibration of a large number of input parameters with low time consumption. Therefore, the proposed method is suitable for efficient calibration of simulation models of advanced ICEs.
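
    The optimization side of one calibration stage can be pictured with a generic least-squares fit; the sub-model, parameters and measurement values below are invented stand-ins, not the engine model of the paper:

      import numpy as np
      from scipy.optimize import least_squares

      measured = np.array([1.02, 1.48, 1.95, 2.51])   # outputs at four operating points (invented)
      load = np.array([1.0, 1.5, 2.0, 2.5])

      def sub_model(params, load):
          # placeholder physically based sub-model with two free parameters
          a, b = params
          return a * load + b * load ** 2

      fit = least_squares(lambda p: sub_model(p, load) - measured, x0=[1.0, 0.0])
      print(fit.x)                                    # calibrated parameters for this stage

    In the multistage scheme described above, parameters already fixed by governing equations would be excluded from the search vector, shrinking the optimization space at every stage.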

  18. Correlation of energy balance method to dynamic pipe rupture analysis

    International Nuclear Information System (INIS)

    Kuo, H.H.; Durkee, M.

    1983-01-01

    When using an energy balance approach in the design of pipe rupture restraints for nuclear power plants, the NRC specifies in its Standard Review Plan 3.6.2 that the input energy to the system must be multiplied by a factor of 1.1 unless a lower value can be justified. Since the energy balance method is already quite conservative, an across-the-board use of 1.1 to amplify the energy input appears unnecessary. The paper's purpose is to show that this 'correlation factor' could be substantially less than unity if certain design parameters are met. In this paper, results of nonlinear dynamic analyses were compared to the results of the corresponding analyses based on the energy balance method, which assumes constant blowdown forces and rigid-plastic material properties. The appropriate correlation factors required to match the energy balance results with the dynamic analysis results were correlated to design parameters such as the restraint location from the break, the yield strength of the energy absorbing component, and the restraint gap. It is shown that the correlation factor is related to a single nondimensional design parameter and can be limited to a value below unity if appropriate design parameters are chosen. It is also shown that the deformation of the restraints can be related to dimensionless system parameters. This, therefore, allows the maximum restraint deformation to be evaluated directly for design purposes. (orig.)

  19. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching...

  20. Correlating TEM images of damage in irradiated materials to molecular dynamics simulations

    International Nuclear Information System (INIS)

    Schaeublin, R.; Caturla, M.-J.; Wall, M.; Felter, T.; Fluss, M.; Wirth, B.D.; Diaz de la Rubia, T.; Victoria, M.

    2002-01-01

    TEM image simulations are used to couple the results from molecular dynamics (MD) simulations to experimental TEM images. In particular we apply this methodology to the study of defects produced during irradiation. MD simulations have shown that irradiation of FCC metals results in a population of vacancies and interstitials forming clusters. The limitation of these simulations is the short time scales available, on the order of hundreds of picoseconds. Extrapolation of the results from these short times to the time scales of the laboratory has been difficult. We address this problem by two methods: we perform TEM image simulations of MD simulations of cascades with an improved technique, to relate defects produced at short time scales with those observed experimentally at much longer time scales. On the other hand we perform in situ TEM experiments on Au irradiated at liquid-nitrogen temperature, and study the evolution of the produced damage as the temperature is increased to room temperature. We find that some of the defects observed in the MD simulations at short time scales using the TEM image simulation technique have features that resemble those observed in laboratory TEM images of irradiated samples. In situ TEM shows that stacking fault tetrahedra are present at the lowest temperatures and are stable during annealing up to room temperature, while other defect clusters migrate one-dimensionally above -100 °C. Results are presented here.

  1. Deterministic alternatives to the full configuration interaction quantum Monte Carlo method for strongly correlated systems

    Science.gov (United States)

    Tubman, Norm; Whaley, Birgitta

    The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.

  2. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pä tzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. Exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.

  3. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali

    2012-04-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. Exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
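
    In code, the frequency averaging baseline and the kernel idea look roughly as follows; the two-path transfer function is a synthetic example, and the kernel shape is left open because the authors' exact choice is not given in the abstract:

      import numpy as np

      def fcf_frequency_average(H, kernel=None):
          # R[m] = average over k of H[k+m] * conj(H[k]); an optional kernel
          # down-weights products that mix different paths (the CT-suppression idea).
          K = len(H)
          R = np.zeros(K, dtype=complex)
          for m in range(K):
              prod = H[m:] * np.conj(H[:K - m])
              if kernel is not None:
                  prod = prod * kernel[:K - m]
              R[m] = prod.mean()
          return R

      # synthetic band-limited transfer function with two propagation paths
      f = np.linspace(0.0, 20e6, 512)
      H = 1.0 * np.exp(-2j * np.pi * f * 0.1e-6) + 0.6 * np.exp(-2j * np.pi * f * 0.4e-6)
      R = fcf_frequency_average(H)

    Without a kernel, the two paths produce an oscillating cross-term in R; the kernel is meant to suppress exactly that contribution while leaving the per-path auto-terms intact.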

  4. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    Science.gov (United States)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  5. Correlation methods in cutting arcs

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L; Kelly, H, E-mail: prevosto@waycom.com.ar [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina)

    2011-05-01

    The present work applies similarity theory to the plasma emanating from transferred arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operation parameters.

  6. Correlation methods in cutting arcs

    International Nuclear Information System (INIS)

    Prevosto, L; Kelly, H

    2011-01-01

    The present work applies similarity theory to the plasma emanating from transferred arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operation parameters.

  7. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Gabriela Ižaríková

    2015-12-01

    The article is an example of using the @Risk simulation software, designed for simulation in a Microsoft Excel spreadsheet, to demonstrate a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which are transformed by the model into outputs (for instance the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools which can be used as a support for decision making.
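
    The same kind of experiment can be reproduced outside a spreadsheet; the sketch below mimics what @Risk automates, with all distributions and figures invented for illustration:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      demand = rng.normal(10_000, 1_500, n)        # random input: demand
      unit_cost = rng.triangular(8, 10, 13, n)     # random input: unit cost
      price = 15.0                                  # controlled input
      fixed = 20_000.0                              # controlled input
      profit = demand * (price - unit_cost) - fixed # model output per trial
      print(profit.mean(), np.percentile(profit, [5, 95]))

    The mean and the 5%/95% percentiles of the output distribution are the kind of decision-support figures the abstract has in mind.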

  8. Multilevel panel method for wind turbine rotor flow simulations

    NARCIS (Netherlands)

    van Garrel, Arne

    2016-01-01

    Simulation methods for wind turbine aerodynamics currently in use mainly fall into two categories: the first is the group of traditional low-fidelity engineering models and the second is the group of computationally expensive CFD methods based on the Navier-Stokes equations. For an engineering...

  9. Simulation methods of nuclear electromagnetic pulse effects in integrated circuits

    International Nuclear Information System (INIS)

    Cheng Jili; Liu Yuan; En Yunfei; Fang Wenxiao; Wei Aixiang; Yang Yuanzhen

    2013-01-01

    In this paper, the ways to compute the response of a transmission line (TL) illuminated by an electromagnetic pulse (EMP) are introduced first; these include the finite-difference time-domain (FDTD) method and the transmission line matrix (TLM) method. Then the feasibility of electromagnetic topology (EMT) for simulating nuclear electromagnetic pulse (NEMP) effects in ICs is discussed. Finally, combined with the methods for computing the response of a TL, a new method to simulate a transmission line in an IC illuminated by NEMP is put forward. (authors)
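
    As a reference point for the FDTD option mentioned above, a 1-D leapfrog update of the lossless telegrapher's equations can be written in a few lines; the line constants and the Gaussian EMP-like source are assumptions, not values from the paper:

      import numpy as np

      nx, nt = 200, 1000
      L, C = 0.25e-6, 100e-12          # per-unit-length inductance and capacitance (assumed)
      dx = 1e-3
      dt = 0.9 * dx * np.sqrt(L * C)   # CFL-stable time step
      V = np.zeros(nx)                 # node voltages
      I = np.zeros(nx - 1)             # branch currents
      for n in range(nt):
          # Gaussian EMP-like excitation at the left end of the line
          V[0] = np.exp(-((n * dt - 2e-9) / 0.5e-9) ** 2)
          I -= dt / (L * dx) * (V[1:] - V[:-1])        # dI/dt = -(1/L) dV/dx
          V[1:-1] -= dt / (C * dx) * (I[1:] - I[:-1])  # dV/dt = -(1/C) dI/dx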

  10. Overcoming artificial spatial correlations in simulations of superstructure domain growth with parallel Monte Carlo algorithms

    International Nuclear Information System (INIS)

    Schleier, W.; Besold, G.; Heinz, K.

    1992-01-01

    The authors study the applicability of parallelized/vectorized Monte Carlo (MC) algorithms to the simulation of domain growth in two-dimensional lattice gas models undergoing an ordering process after a rapid quench below an order-disorder transition temperature. As examples they consider models with 2 x 1 and c(2 x 2) equilibrium superstructures on the square and rectangular lattices, respectively. They also study the case of phase separation ('1 x 1' islands) on the square lattice. A generalized parallel checkerboard algorithm for Kawasaki dynamics is shown to give rise to artificial spatial correlations in all three models. However, only if superstructure domains evolve do these correlations modify the kinetics by influencing the nucleation process and result in a reduced growth exponent compared to the value from the conventional heat bath algorithm with random single-site updates. In order to overcome these artificial modifications, two MC algorithms with a reduced degree of parallelism ('hybrid' and 'mask' algorithms, respectively) are presented and applied. As the results indicate, these algorithms are suitable for the simulation of superstructure domain growth on parallel/vector computers. 60 refs., 10 figs., 1 tab

  11. Orientational cross correlations between entangled branch polymers in primitive chain network simulations

    Science.gov (United States)

    Masubuchi, Yuichi; Pandey, Ankita; Amamoto, Yoshifumi; Uneyama, Takashi

    2017-11-01

    Although it has not been frequently discussed, contributions of the orientational cross-correlation (OCC) between entangled polymers are not negligible in the relaxation modulus. In the present study, OCC contributions were investigated for 4- and 6-arm star-branched and H-branched polymers by means of multi-chain slip-link simulations. Owing to the molecular-level description of the simulation, the segment orientation was traced separately for each molecule as well as each subchain composing the molecules. Then, the OCC was calculated between different molecules and different subchains. The results revealed that the amount of OCC between different molecules is virtually identical to that of linear polymers regardless of the branching structure. The OCC between constituent subchains of the same molecule is significantly smaller than the OCC between different molecules, although its intensity and time-dependent behavior depend on the branching structure as well as the molecular weight. These results lend support to the single-chain models given that the OCC effects are embedded into the stress-optical coefficient, which is independent of the branching structure.

  12. Simulations of Micro Gas Flows by the DS-BGK Method

    KAUST Repository

    Li, Jun

    2011-01-01

    For gas flows in micro devices, the molecular mean free path is of the same order as the characteristic scale, making the Navier-Stokes equation invalid. Recently, some micro gas flows have been simulated by the DS-BGK method, which is convergent to the BGK equation and very efficient for low-velocity cases. As the molecular reflection at the boundary is the dominant effect compared to the intermolecular collisions in micro gas flows, a more realistic boundary condition, namely the CLL reflection model, is employed in the DS-BGK simulation, and the influence of the accommodation coefficients used in the molecular reflection model on the results is discussed. The simulation results are verified by comparison with those of the DSMC method as criteria. Copyright © 2011 by ASME.

  13. Active method of neutron time correlation coincidence measurement to authenticate mass and enrichment of uranium metal

    International Nuclear Information System (INIS)

    Zhang Songbai; Wu Jun; Zhu Jianyu; Tian Dongfeng; Xie Dong

    2011-01-01

    The active methodology of neutron time correlation coincidence measurement is an effective verification means to authenticate uranium metal. A collimated 252Cf neutron source was used to investigate the mass and enrichment of uranium metal through neutron transport simulation for different enrichments and different masses of uranium metal, from which time correlation coincidence counts were obtained. By analyzing the characteristics of the time correlation coincidence counts, monotone relationships were found between the FWTH of the time correlation coincidence and the multiplication factor, between the total coincidence counts in the FWTH and the mass of 235U multiplied by the multiplication factor, and between the ratio of neutron source penetration and the mass of uranium metal. Thus a methodology to authenticate the mass and enrichment of uranium metal was established with time correlation coincidence by active neutron investigation. (authors)

  14. Quantifying input uncertainty in an assemble-to-order system simulation with correlated input variables of mixed types

    NARCIS (Netherlands)

    Akçay, A.E.; Biller, B.

    2014-01-01

    We consider an assemble-to-order production system where the product demands and the time since the last customer arrival are not independent. The simulation of this system requires a multivariate input model that generates random input vectors with correlated discrete and continuous components...
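
    One standard way to realize such a mixed correlated input model is a Gaussian-copula (NORTA-style) construction — shown here as a generic stand-in, with invented marginals and correlation, rather than the authors' exact model:

      import numpy as np
      from scipy.stats import norm, poisson, expon

      rng = np.random.default_rng(7)
      rho = 0.6                                        # base (Gaussian) correlation
      cov = np.array([[1.0, rho], [rho, 1.0]])
      z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
      u = norm.cdf(z)                                  # correlated uniforms
      demand = poisson.ppf(u[:, 0], mu=5).astype(int)  # discrete marginal
      inter_arrival = expon.ppf(u[:, 1], scale=2.0)    # continuous marginal
      print(np.corrcoef(demand, inter_arrival)[0, 1])

    Note that the correlation of the generated pair differs from the base correlation rho, so in practice rho must be calibrated to hit a target input correlation.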

  15. Lipid clustering correlates with membrane curvature as revealed by molecular simulations of complex lipid bilayers.

    Directory of Open Access Journals (Sweden)

    Heidi Koldsø

    2014-10-01

    Cell membranes are complex multicomponent systems, which are highly heterogeneous in lipid distribution and composition. To date, most molecular simulations have focussed on relatively simple lipid compositions, helping to inform our understanding of in vitro experimental studies. Here we describe simulations of a complex asymmetric plasma membrane model, which contains seven different lipid species including the glycolipid GM3 in the outer leaflet and the anionic lipid phosphatidylinositol 4,5-bisphosphate (PIP2) in the inner leaflet. Plasma membrane models consisting of 1500 lipids and resembling the in vivo composition were constructed and simulations were run for 5 µs. In these simulations the most striking feature was the formation of nano-clusters of GM3 within the outer leaflet. In simulations of protein interactions within a plasma membrane model, GM3, PIP2, and cholesterol all formed favorable interactions with the model α-helical protein. A larger scale simulation of a model plasma membrane containing 6000 lipid molecules revealed correlations between the curvature of the bilayer surface and the clustering of lipid molecules. In particular, the concave (when viewed from the extracellular side) regions of the bilayer surface were locally enriched in GM3. In summary, these simulations explore the nanoscale dynamics of model bilayers which mimic the in vivo lipid composition of mammalian plasma membranes, revealing emergent nanoscale membrane organization which may be coupled both to fluctuations in local membrane geometry and to interactions with proteins.

  16. Comparison of Two Methods for Speeding Up Flash Calculations in Compositional Simulations

    DEFF Research Database (Denmark)

    Belkadi, Abdelkrim; Yan, Wei; Michelsen, Michael Locht

    2011-01-01

    Flash calculation is the most time consuming part in compositional reservoir simulations and several approaches have been proposed to speed it up. Two recent approaches proposed in the literature are the shadow region method and the Compositional Space Adaptive Tabulation (CSAT) method. The shadow region method reduces the computation time mainly by skipping stability analysis for a large portion of compositions in the single phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be employed with initial estimates from the previous step. The CSAT method saves … and the tolerance set for accepting the feed composition are the key parameters in this method, since they influence the simulation speed and the accuracy of the simulation results. Inspired by CSAT, we proposed a Tieline Distance Based Approximation (TDBA) method to get approximate flash results in the two-phase …

  17. Some recent developments of the immersed interface method for flow simulation

    Science.gov (United States)

    Xu, Sheng

    2017-11-01

    The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.

  18. INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING

    Science.gov (United States)

    Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong

    2017-01-01

    Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected are beyond simulation analysis alone, since simulation models are run with well-prepared data. Novel approaches combining different methods are needed to use this data for making guided decisions. This paper proposes a methodology whereby the parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363

  19. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption underlying hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was then used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological processes.

  20. Flow simulation of a Pelton bucket using finite volume particle method

    International Nuclear Information System (INIS)

    Vessaz, C; Jahanbakhsh, E; Avellan, F

    2014-01-01

    The objective of the present paper is to perform an accurate numerical simulation of the high-speed water jet impinging on a Pelton bucket. To reach this goal, the Finite Volume Particle Method (FVPM) is used to discretize the governing equations. FVPM is an arbitrary Lagrangian-Eulerian method which combines attractive features of Smoothed Particle Hydrodynamics and the conventional mesh-based Finite Volume Method, and is able to satisfy free surface and no-slip wall boundary conditions precisely. The fluid flow is assumed weakly compressible and the wall boundary is represented by one layer of particles located on the bucket surface. In the present study, the flow in a stationary bucket is investigated for three different impinging angles: 72°, 90° and 108°. The particle resolution is first validated by a convergence study. Then, the FVPM results are validated against available experimental data and conventional grid-based Volume Of Fluid simulations. It is shown that the wall pressure field is in good agreement with the experimental and numerical data. Finally, the torque evolution and water sheet location are presented for a simulation of five rotating Pelton buckets.

  1. A fast mollified impulse method for biomolecular atomistic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)

    2017-03-15

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement since they require either analytical Hessians or the solution of nonlinear constraint systems. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software without Hessians or constraint solvers. By simulating multiple realistic examples such as peptides, proteins, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
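
    For orientation, a sketch of the plain (unmollified) impulse multiple-time-stepping scheme that such filters build on; the corotational filter itself is not reproduced here, and the toy force terms are assumptions.

    ```python
    import numpy as np

    def impulse_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
        """One outer step of the plain impulse method: slow forces enter as
        impulses (kicks) at the ends of the outer step, while fast forces are
        integrated by an inner velocity-Verlet loop with a small time step."""
        v = v + 0.5 * dt_outer * f_slow(x) / m          # slow-force half-kick
        dt = dt_outer / n_inner
        for _ in range(n_inner):                        # fast dynamics
            v = v + 0.5 * dt * f_fast(x) / m
            x = x + dt * v
            v = v + 0.5 * dt * f_fast(x) / m
        v = v + 0.5 * dt_outer * f_slow(x) / m          # slow-force half-kick
        return x, v

    # toy system: stiff "bond" (fast) plus a weak background force (slow)
    f_fast = lambda x: -100.0 * x
    f_slow = lambda x: -0.1 * np.tanh(x)
    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = impulse_step(x, v, 1.0, f_fast, f_slow, 0.05, 10)
    print(round(x, 4), round(v, 4))
    ```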

  2. Correlation Between Arthroscopy Simulator and Video Game Performance: A Cross-Sectional Study of 30 Volunteers Comparing 2- and 3-Dimensional Video Games.

    Science.gov (United States)

    Jentzsch, Thorsten; Rahm, Stefan; Seifert, Burkhardt; Farei-Campagna, Jan; Werner, Clément M L; Bouaicha, Samy

    2016-07-01

    To investigate the association between arthroscopy simulator performance and video game skills, this study compared the performances of 30 volunteers without arthroscopy experience on 3 different tasks of a validated virtual reality knee arthroscopy simulator with their video game experience (by questionnaire) and actual performances in 5 different 2- and 3-dimensional (D) video games of varying genres on 2 different platforms. Positive correlations between knee arthroscopy simulator and video game performances (ρ = 0.63, P video game skills, they show a correlation with 2-D tile-matching puzzle games only for easier tasks with a rather limited focus, and correlate highly with 3-D sports and first-person shooter video games. These findings show that experienced and good 3-D gamers are better arthroscopists than nonexperienced and poor 3-D gamers. Level II, observational cross-sectional study. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  3. Research of Monte Carlo method used in simulation of different maintenance processes

    International Nuclear Information System (INIS)

    Zhao Siqiao; Liu Jingquan

    2011-01-01

    The paper introduces two kinds of Monte Carlo methods used in equipment life process simulation under the least maintenance condition: the method of producing lifetime intervals and the method of time scale conversion. The paper also analyzes the characteristics and scope of application of the two methods. By using the concept of a service age reduction factor, a model of the equipment life process under incomplete maintenance is established, and a life process simulation method applicable to this situation is developed. (authors)
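
    A minimal sketch of the first kind of method, assuming a Weibull life distribution and a Kijima-type service age reduction rule (both illustrative choices, not taken from the paper): lifetime intervals are produced by inverse-transform sampling of the age-conditioned distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    shape, scale = 2.0, 1000.0   # assumed Weibull life distribution (hours)
    alpha = 0.3                  # assumed service age reduction factor

    def next_interval(age):
        """Inverse-transform sample of the residual life, conditioned on the
        current effective (virtual) age: S(age + T) / S(age) = U."""
        u = rng.random()
        return scale * ((age / scale) ** shape - np.log(u)) ** (1.0 / shape) - age

    age, t, failures = 0.0, 0.0, []
    for _ in range(10):
        dt = next_interval(age)
        t += dt
        failures.append(round(t, 1))
        age = alpha * (age + dt)     # incomplete repair rolls back effective age

    print(failures)
    ```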

  4. Application of the maximum entropy method to dynamical fermion simulations

    Science.gov (United States)

    Clowser, Jonathan

    This thesis presents results for spectral functions extracted from imaginary-time correlation functions obtained from Monte Carlo simulations using the Maximum Entropy Method (MEM). The advantages of this method are that (i) no a priori assumptions or parametrisations of the spectral function are needed, (ii) a unique solution exists and (iii) the statistical significance of the resulting image can be quantitatively analysed. The Gross-Neveu model in d = 3 spacetime dimensions (GNM3) is a particularly interesting model to study with the MEM because at T = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances. Results for the elementary fermion, the Goldstone boson (pion), the sigma, the massive pseudoscalar meson and the symmetric-phase resonances are presented. UKQCD Nf = 2 dynamical QCD data are also studied with the MEM. Results are compared to those found from the quenched approximation, where the effects of quark loops in the QCD vacuum are neglected, to search for sea-quark effects in the extracted spectral functions. Information has been extracted from the difficult axial spatial and scalar channels as well as the pseudoscalar, vector and axial temporal channels. An estimate for the non-singlet scalar mass in the chiral limit is given, in agreement with the experimental value of M_a0 = 985 MeV.

  5. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application

    Directory of Open Access Journals (Sweden)

    Monica M. Vasquez

    2016-11-01

    Full Text Available Abstract Background The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. Methods A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05–1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Results Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD
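
    A hedged sketch of the kind of comparison described, using scikit-learn's logistic regression with L1 and elastic-net penalties as stand-ins for two of the six methods; the dimensions and effect sizes loosely mirror the scenario above but are otherwise arbitrary.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n, p, rho = 500, 86, 0.5                       # sample size, biomarkers, correlation
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.zeros(p)
    beta[:5] = 0.8                                 # sparse set of true signals
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta))).astype(int)

    for penalty, l1r in [("l1", None), ("elasticnet", 0.5)]:
        model = LogisticRegression(penalty=penalty, solver="saga", C=0.1,
                                   l1_ratio=l1r, max_iter=5000).fit(X, y)
        picked = np.flatnonzero(model.coef_[0])
        true_found = len(set(picked) & set(range(5)))
        print(f"{penalty}: {len(picked)} selected, {true_found}/5 true signals")
    ```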

  6. RELAP-7 Closure Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States); Berry, R. A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, R. C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, H. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hansel, J. E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Sharpe, J. P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Johns, Russell C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-01

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's and TRACE's capabilities and extends their analysis capabilities to all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed 7-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document provides a summary of the closure correlations currently implemented in the RELAP-7 code. They include sub-grid models that describe interactions between the fluids and the flow channel, and interactions between the two phases.

  7. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
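
    The similarity measure underlying such matchers is plain normalized cross-correlation between patches; MOCC adds multiscale orientation handling on top of it. A minimal sketch of the base measure:

    ```python
    import numpy as np

    def ncc(patch_a, patch_b):
        """Normalized cross-correlation between two equally sized patches."""
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    rng = np.random.default_rng(3)
    p = rng.random((15, 15))
    print(ncc(p, p))                      # 1.0: a patch matches itself
    print(ncc(p, rng.random((15, 15))))   # near 0: unrelated patch
    ```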

  8. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits their overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and estimating scale. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  9. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    Science.gov (United States)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    An elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its compliance with practical industrial cases. The numerical model of an elastic pipeline introduces non-linear complexity into the discretized equations, so the Newton-Raphson method cannot achieve fast convergence for this kind of problem. Therefore, a new Newton-based method with the Powell-Wolfe condition for simulating isothermal elastic pipeline flow is presented. Results obtained by the new method are given for the defined boundary conditions. It is shown that the method converges in all cases and reduces computational cost significantly.

  10. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
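
    For reference, a sketch of the univariate (IBC-style) step such comparisons start from: empirical quantile mapping of one variable at a time. The joint (JBC) correction of the P-T dependence is not shown, and the distributions and the helper name quantile_map are illustrative assumptions.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_out):
        """Empirical quantile mapping: send each model value through the
        historical model quantiles onto the observed quantiles."""
        qs = np.linspace(0.01, 0.99, 99)
        return np.interp(model_out,
                         np.quantile(model_hist, qs),
                         np.quantile(obs_hist, qs))

    rng = np.random.default_rng(4)
    obs = rng.gamma(2.0, 3.0, 5000)            # "observed" precipitation (toy)
    mod = rng.gamma(2.5, 2.0, 5000) + 1.0      # biased model output (toy)
    corrected = quantile_map(mod, obs, mod)
    print(f"means: model {mod.mean():.2f}, obs {obs.mean():.2f}, "
          f"corrected {corrected.mean():.2f}")
    ```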

  11. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In nuclear power plants, all safety-related equipment, including cables, operating under harsh environments should undergo equipment qualification (EQ) according to IEEE Std 323. There are three qualification methods: type testing, operating experience, and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than analysis or operating experience, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including the specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB), etc., after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high temperature steam should be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the chamber rapidly rise above the target temperature. The temperature and pressure in the test chamber therefore keep fluctuating around the target values during the test. We should ensure the fairness and accuracy of test results by confirming the performance of the DBE environment simulation test facility. In this paper, a statistical method is used to verify the reliability of the DBE environment simulation test facility.

  12. Uncertainty management in stratigraphic well correlation and stratigraphic architectures: A training-based method

    Science.gov (United States)

    Edwards, Jonathan; Lallier, Florent; Caumon, Guillaume; Carpentier, Cédric

    2018-02-01

    We discuss the sampling and the volumetric impact of stratigraphic correlation uncertainties in basins and reservoirs. From an input set of wells, we evaluate the probability for two stratigraphic units to be associated using an analog stratigraphic model. In the presence of multiple wells, this method sequentially updates a stratigraphic column defining the stratigraphic layering for each possible set of realizations. The resulting correlations are then used to create stratigraphic grids in three dimensions. We apply this method on a set of synthetic wells sampling a forward stratigraphic model built with Dionisos. To perform cross-validation of the method, we introduce a distance comparing the relative geological time of two models for each geographic position, and we compare the models in terms of volumes. Results show the ability of the method to automatically generate stratigraphic correlation scenarios, and also highlight some challenges when sampling stratigraphic uncertainties from multiple wells.

  13. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, and the failure probability can be easily obtained by integration over the failure domain. However, efficiently estimating the PDF is still an urgent problem to be solved. The existing fractional-moment-based maximum entropy method provides a very advanced approach for PDF estimation, but its main shortcoming is that it limits the application of the reliability analysis method to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations required by the proposed method in reliability analysis, which is determined by UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
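
    A minimal sketch of the unscented transformation step, assuming Gaussian inputs and propagating ordinary (not fractional) moments for brevity; the sigma-point construction is the standard one, not necessarily the variant used by the authors.

    ```python
    import numpy as np

    def unscented_moments(mean, cov, g, kappa=1.0):
        """Propagate a Gaussian input through g with 2n+1 sigma points and
        return the weighted mean and variance of the output."""
        n = len(mean)
        L = np.linalg.cholesky((n + kappa) * cov)   # columns are sigma offsets
        sigma = np.vstack([mean, mean + L.T, mean - L.T])
        w = np.full(2 * n + 1, 0.5 / (n + kappa))
        w[0] = kappa / (n + kappa)
        y = np.array([g(s) for s in sigma])
        m = np.dot(w, y)
        return m, np.dot(w, (y - m) ** 2)

    # toy performance function with correlated inputs (assumed covariance)
    g = lambda x: x[0] ** 2 + 2.0 * x[1]
    mean = np.array([1.0, 0.5])
    cov = np.array([[1.0, 0.6], [0.6, 1.0]])
    print(unscented_moments(mean, cov, g))   # output mean is exactly 3 here
    ```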

  14. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    Science.gov (United States)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
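
    The optimization itself reduces to the classical global minimum variance weights, w = C⁻¹1 / (1ᵀC⁻¹1). The sketch below contrasts weights from a noisy sample covariance with those from the true covariance (toy numbers, no GARCH filtering), illustrating the measurement-noise issue discussed above.

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Global minimum variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    rng = np.random.default_rng(5)
    true_cov = 0.3 * np.ones((10, 10)) + 0.7 * np.eye(10)
    returns = rng.multivariate_normal(np.zeros(10), true_cov, size=60)

    print(min_variance_weights(np.cov(returns.T)).round(3))  # noisy estimate
    print(min_variance_weights(true_cov).round(3))           # all 0.1, by symmetry
    ```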

  15. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors using and without correlation processing are presented in the article. Software ‘IDealSystem3D’ by IDeal-Technologies was used for experiments. Copper wires of different diameters located in a water bath were used as a reflector. The use of correlation processing makes it possible to obtain more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and sampling frequency.

  16. An improved method for simulating radiographs

    International Nuclear Information System (INIS)

    Laguna, G.W.

    1986-01-01

    The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials
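
    A minimal sketch of the described computation under assumed toy spectra: the transmitted fraction is the source spectrum integrated against Beer-Lambert attenuation, I = Σ_E S(E) exp(−(μ/ρ)(E) ρ t). All spectra and material constants below are placeholders, not data from the report.

    ```python
    import numpy as np

    E = np.linspace(20.0, 150.0, 14)              # photon energies (keV), assumed
    S = np.exp(-((E - 60.0) / 40.0) ** 2)         # toy source spectrum
    S /= S.sum()
    mu_rho = 3.0 * (E / 20.0) ** -2.5 + 0.02      # toy mass attenuation (cm^2/g)
    rho = 7.8                                     # steel-like density (g/cm^3)

    for t in (0.5, 1.0, 2.0):                     # candidate part thicknesses (cm)
        transmitted = np.sum(S * np.exp(-mu_rho * rho * t))
        print(f"t = {t} cm: fraction reaching film = {transmitted:.3e}")
    ```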

  17. Two-baryon systems from HAL QCD method and the mirage in the temporal correlation of the direct method

    Science.gov (United States)

    Iritani, Takumi

    2018-03-01

    Both direct and HAL QCD methods are currently used to study the hadron interactions in lattice QCD. In the direct method, the eigen-energy of two-particle is measured from the temporal correlation. Due to the contamination of excited states, however, the direct method suffers from the fake eigen-energy problem, which we call the "mirage problem," while the HAL QCD method can extract information from all elastic states by using the spatial correlation. In this work, we further investigate systematic uncertainties of the HAL QCD method such as the quark source operator dependence, the convergence of the derivative expansion of the non-local interaction kernel, and the single baryon saturation, which are found to be well controlled. We also confirm the consistency between the HAL QCD method and the Lüscher's finite volume formula. Based on the HAL QCD potential, we quantitatively confirm that the mirage plateau in the direct method is indeed caused by the contamination of excited states.

  18. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method that automatically selects the characteristic parameters based on correlation coefficient analysis. Using the five sample data sets of dataset IVa from the 2005 BCI Competition, we utilized the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram data, then introduced feature extraction based on common spatial patterns (CSP) and classified by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy could be improved by using the correlation coefficient feature selection method compared with not using it. Compared with the support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis selects better parameters and improves the accuracy of classification.
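
    A hedged sketch of the selection step only (illustrative shapes, not the paper's STFT/CSP/LDA pipeline): rank features by the magnitude of their correlation coefficient with the class labels and keep the top k.

    ```python
    import numpy as np

    def select_by_correlation(X, y, k):
        """Indices of the k features with the largest |corr(feature, label)|."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        r = (Xc * yc[:, None]).sum(0) / np.sqrt(
            (Xc ** 2).sum(0) * (yc ** 2).sum())
        return np.argsort(-np.abs(r))[:k]

    rng = np.random.default_rng(6)
    X = rng.standard_normal((100, 30))            # 100 trials, 30 features
    y = (X[:, 4] + 0.5 * X[:, 9] + rng.standard_normal(100) > 0).astype(float)
    print(select_by_correlation(X, y, 5))         # features 4 and 9 should rank high
    ```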

  19. Visualization of synchronization of the uterine contraction signals: running cross-correlation and wavelet running cross-correlation methods.

    Science.gov (United States)

    Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr

    2006-01-01

    In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and wavelet cross-correlation methods for assessing synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions, yielding a running cross-correlation function. The propagation% parameter assessed from this function allows quantitative description of synchronization in bivariate time series. In general, uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of a time series at various frequencies (scales). To show the changes of the propagation% parameter across scales, a wavelet running cross-correlation was used: first, continuous wavelet transforms of the uterine contraction signals were computed, and afterwards a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
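
    A minimal sketch of the running cross-correlation idea, with the window sizes and toy signals assumed: slide a window along two signals and report the lag maximizing the windowed cross-correlation, a proxy for the propagation delay.

    ```python
    import numpy as np

    def running_delay(x, y, win, max_lag):
        """For each window position, return the lag (in samples) maximizing the
        windowed cross-correlation of x and y, i.e. the local propagation delay."""
        delays = []
        for start in range(0, len(x) - win, win // 2):        # 50% overlap
            xs = x[start:start + win] - x[start:start + win].mean()
            ys = y[start:start + win] - y[start:start + win].mean()
            cc = [np.dot(xs[max(0, -l):win - max(0, l)],
                         ys[max(0, l):win - max(0, -l)])
                  for l in range(-max_lag, max_lag + 1)]
            delays.append(int(np.argmax(cc)) - max_lag)
        return np.array(delays)

    t = np.arange(3000)
    x = np.sin(2 * np.pi * t / 200.0)      # "fundal" activity (toy)
    y = np.roll(x, 15)                     # same activity, delayed by 15 samples
    print(running_delay(x, y, win=400, max_lag=40))   # all entries ~15
    ```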

  20. Reduction methods and uncertainty analysis: application to a Chemistry-Transport Model for modeling and simulation of impacts

    International Nuclear Information System (INIS)

    Boutahar, Jaouad

    2004-01-01

    In an integrated impact assessment, one has to test several scenarios of the model inputs and/or identify the effects of model input uncertainties on the model outputs. In both cases, a large number of simulations of the model is necessary. That is of course not feasible with a comprehensive Chemistry-Transport Model, due to the huge CPU times required. Two approaches may be used to circumvent these difficulties. The first consists in reducing the computational cost of the original model by building a reduced model. Two reduction techniques are used: the first, POD, is related to the statistical behaviour of the system and is based on a proper orthogonal decomposition of the solutions; the second is an efficient representation of the input/output behaviour through look-up tables, describing the model output as an expansion of finite hierarchical correlated functions in terms of the input variables. The second approach is based on reducing the number of model runs required by standard Monte Carlo methods. It characterizes the probabilistic response of the uncertain model output as an expansion of orthogonal polynomials according to the model input uncertainties; the classical Monte Carlo simulation can then easily be used to compute the probability density of the uncertain output. Another key point in an integrated impact assessment is to develop strategies for the reduction of emissions by computing Source/Receptor matrices for several years of simulations. We propose an efficient method to calculate these matrices by using the adjoint model and in particular by defining the 'representative chemical day'. All of these methods are applied to POLAIR3D, a Chemistry-Transport model developed in this thesis. (author)

  1. Connecting single-stock assessment models through correlated survival

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    2017-01-01

    … times. We propose a simple alternative. In three case studies each with two stocks, we improve the single-stock models, as measured by Akaike information criterion, by adding correlation in the cohort survival. To limit the number of parameters, the correlations are parameterized through the corresponding partial correlations. We consider six models where the partial correlation matrix between stocks follows a band structure ranging from independent assessments to complex correlation structures. Further, a simulation study illustrates the importance of handling correlated data sufficiently by investigating the coverage of confidence intervals for estimated fishing mortality. The results presented will allow managers to evaluate stock statuses based on a more accurate evaluation of model output uncertainty. The methods are directly implementable for stocks with an analytical assessment and do …

  2. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis of data obtained by computer simulation of the neutron transport process using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, were carried out. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size in order to obtain reliable results while optimizing computation time. (author). 5 refs, 8 figs

  3. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Chen, Zhao; Halonen, Marilyn; Guerra, Stefano

    2016-11-14

    The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05-1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD14, Complement 3, C-reactive protein, Ferritin

  4. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Full Text Available Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  5. Improved nonparametric inference for multiple correlated periodic sequences

    KAUST Repository

    Sun, Ying

    2013-08-26

    This paper proposes a cross-validation method for estimating the period as well as the values of multiple correlated periodic sequences when data are observed at evenly spaced time points. The period of interest is estimated conditional on the other correlated sequences. An alternative method for period estimation based on Akaike's information criterion is also discussed. The improvement of the period estimation performance is investigated both theoretically and by simulation. We apply the multivariate cross-validation method to temperature data obtained from multiple ice cores, investigating the periodicity of the El Niño effect. Our methodology is also illustrated by estimating patients' cardiac cycles from different physiological signals, including arterial blood pressure, electrocardiography, and fingertip plethysmograph.
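
    A hedged sketch of period estimation by folding, using the residual variance around per-phase means as a simple proxy for the paper's leave-out criterion (the actual cross-validation and the conditioning on correlated sequences are not reproduced):

    ```python
    import numpy as np

    def estimate_period(y, candidates):
        """Fold y at each candidate period and score by the residual variance
        around the per-phase means; return the best-scoring period."""
        best, best_score = None, np.inf
        for p in candidates:
            phases = np.arange(len(y)) % p
            resid = y.astype(float).copy()
            for ph in range(p):
                mask = phases == ph
                resid[mask] -= y[mask].mean()
            score = resid.var()
            if score < best_score:
                best, best_score = p, score
        return best

    rng = np.random.default_rng(7)
    t = np.arange(600)
    y = np.sin(2 * np.pi * t / 50.0) + 0.3 * rng.standard_normal(600)
    print(estimate_period(y, range(30, 80)))   # expect 50
    ```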

  6. Bragg's Law diffraction simulations for electron backscatter diffraction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kacher, Josh, E-mail: jkacherbyu@gmail.com [Department of Mechanical Engineering, Brigham Young University, 455B Crabtree Technology Building, Provo, UT 84602 (United States); Landon, Colin; Adams, Brent L.; Fullwood, David [Department of Mechanical Engineering, Brigham Young University, 455B Crabtree Technology Building, Provo, UT 84602 (United States)

    2009-08-15

    In 2006, Angus Wilkinson introduced a cross-correlation-based electron backscatter diffraction (EBSD) texture analysis system capable of measuring lattice rotations and elastic strains to high resolution. A variation of the cross-correlation method is introduced using Bragg's Law-based simulated EBSD patterns as strain-free reference patterns, which facilitates the use of the cross-correlation method with polycrystalline materials. The lattice state is found by comparing simulated patterns to collected patterns at a number of regions on the pattern using the cross-correlation function and calculating the deformation from the measured shifts of each region. A new pattern can be simulated at the deformed state, and the process can be iterated a number of times to converge on the absolute lattice state. By analyzing an iteratively rotated single crystal silicon sample and recovering the rotation, this method is shown to have an angular resolution of ~0.04° and an elastic strain resolution of ~7×10⁻⁴. As an example of applications, elastic strain and curvature measurements are used to estimate the dislocation density in a single grain of a compressed polycrystalline Mg-based AZ91 alloy.

  7. Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)

    KAUST Repository

    Enayatpour, Saeid; van Oort, Eric; Patzek, Tadeusz

    2018-01-01

    Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.

  8. Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)

    KAUST Repository

    Enayatpour, Saeid

    2018-05-17

    Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.

  9. Simulation of crystalline pattern formation by the MPFC method

    Directory of Open Access Journals (Sweden)

    Starodumov Ilya

    2017-01-01

    Full Text Available The Phase Field Crystal model in hyperbolic formulation (modified PFC, or MPFC) is investigated as one of the most promising techniques for modeling the formation of crystal patterns. MPFC is a convenient and fundamentally based description linking nano- and meso-scale processes in the evolution of crystal structures. The presented model is a powerful tool for mathematical modeling of various operations in manufacturing, among them the determination of process conditions for producing metal castings with predetermined properties, the prediction of defects in the crystal structure during casting, and the evaluation of the quality of special coatings. Our paper presents the structure diagram calculated for the one-mode MPFC model and compares it to the results of numerical simulation for fast phase transitions. The diagram is verified by the numerical simulation and also correlates strongly with previously calculated diagrams. The computations were performed using software based on an efficient parallel computational algorithm.

  10. Application of the spectral-correlation method for diagnostics of cellulose paper

    Science.gov (United States)

    Kiesewetter, D.; Malyugin, V.; Reznik, A.; Yudin, A.; Zhuravleva, N.

    2017-11-01

    The spectral-correlation method was described for diagnostics of optically inhomogeneous biological objects and materials of natural origin. The interrelation between parameters of the studied objects and parameters of the cross correlation function of speckle patterns produced by scattering of coherent light at different wavelengths is shown for thickness, optical density and internal structure of the material. A detailed study was performed for cellulose electric insulating paper with different parameters.

  11. Structured sparse canonical correlation analysis for brain imaging genetics: an improved GraphNet method.

    Science.gov (United States)

    Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2016-05-15

    Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models use either group lasso or graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits their capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound on the grouping effect for both positively and negatively correlated features. We show that our method performs better than or equally to three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/. Contact: shenli@iu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Methods and models for accelerating dynamic simulation of fluid power circuits

    Energy Technology Data Exchange (ETDEWEB)

    Aaman, R.

    2011-07-01

    The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for the real-time simulation. In the real-time simulation of fluid power circuits there exist numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise to the results, which in many cases leads the simulation run to fail. Mathematically the fluid power circuit models are stiff systems of ordinary differential equations. Numerical solution of the stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas to which alternative methods for modelling and numerical simulation

  13. Simulation of Jetting in Injection Molding Using a Finite Volume Method

    Directory of Open Access Journals (Sweden)

    Shaozhen Hua

    2016-05-01

    Full Text Available In order to predict jetting and the subsequent buckling flow more accurately, a three-dimensional melt flow model was established for a viscous, incompressible, non-isothermal fluid, and a control-volume-based finite volume method was employed to discretize the governing equations. A two-fold iterative method was proposed to decouple the dependence among pressure, velocity, and temperature so as to reduce the computation and improve numerical stability. Based on the proposed theoretical model and numerical method, a program code was developed to simulate melt front progress and flow fields. Numerical simulations for different injection speeds, melt temperatures, and gate locations were carried out to explore the jetting mechanism. The results indicate that the filling pattern depends on the competition between inertial and viscous forces: jetting occurs when the inertial force exceeds the viscous force, and changes to a buckling flow as the viscous force wins out. Once the melt contacts the mold wall, filling switches to the conventional sequential filling mode. Numerical results also indicate that jetting length increases with injection speed but changes little with melt temperature. The reasonable agreement between simulated and experimental jetting lengths and buckling frequencies implies the proposed method is valid for jetting simulation.

  14. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been no earthquake (EQ) cycle simulations based on rate-and-state friction (RSF) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and found nearly the same cycles as in elastic cases. Viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and hence huge computational costs. This is one reason why simulations in viscoelastic media have been scarce. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, no hereditary integrals are needed in the stress calculation and the computational costs are of the same order as in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and the block, obeying the RSF law, is pulled at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. Smaller viscosity reduces the recurrence time to a minimum value: a smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a shorter recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with a thickness of 40 km overriding a Maxwell viscoelastic half

  15. Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data

    Science.gov (United States)

    Lurcock, P. C.; Channell, J. E.; Lee, D.

    2012-12-01

    The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
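
    A minimal sketch of the dynamic time warping step described above, assuming absolute differences as the local cost: fill the cumulative cost matrix and backtrack the least-cost path that maps one series onto the other.

    ```python
    import numpy as np

    def dtw_path(a, b):
        """Fill the cumulative cost matrix and backtrack the least-cost path
        mapping indices of series a onto indices of series b."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                    D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        path, (i, j) = [], (n, m)
        while (i, j) != (0, 0):
            path.append((i - 1, j - 1))
            i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                       key=lambda ij: D[ij])
        return path[::-1]

    x = np.sin(np.linspace(0.0, 6.0, 60))
    y = np.sin(np.linspace(0.0, 6.0, 80))      # same signal on a stretched axis
    print(dtw_path(x, y)[:6])
    ```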

  16. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    Science.gov (United States)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on the inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a-priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties for any fixed LES resolution.

  17. Unbiased estimators of coincidence and correlation in non-analogous Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Szieberth, M.; Kloosterman, J.L.

    2014-01-01

    Highlights:
    • The history splitting method was developed for non-Boltzmann Monte Carlo estimators.
    • The method allows variance reduction for pulse-height and higher moment estimators.
    • It works in highly multiplicative problems, but Russian roulette has to be replaced.
    • Estimation of higher moments allows the simulation of neutron noise measurements.
    • Biased sampling of fission helps the effective simulation of neutron noise methods.
    Abstract: The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions. Therefore, they are not suited to non-Boltzmann problems such as the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method, and Russian roulette turned out to be ineffective in high-multiplicity systems. An alternative history control method is applied instead. Simulation results of an auto-correlation (Rossi-α) measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible.

  18. Detection of circuit-board components with an adaptive multiclass correlation filter

    Science.gov (United States)

    Diaz-Ramirez, Victor H.; Kober, Vitaly

    2008-08-01

    A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains the information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation-based techniques in terms of performance criteria for recognition and classification of objects.

  19. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Lucas R., E-mail: lucas.rodrigues.borges@usp.br; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C. [Department of Electrical and Computer Engineering, São Carlos School of Engineering, University of São Paulo, 400 Trabalhador São-Carlense Avenue, São Carlos 13566-590 (Brazil); Bakic, Predrag R.; Maidment, Andrew D. A. [Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, 3400 Spruce Street, Philadelphia, Pennsylvania 19104 (United States)

    2016-06-15

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe
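
    The core of the approach can be sketched as follows (a toy Python illustration under simplifying assumptions: pure quantum noise, a linear detector, and no measured noise mask, whereas the authors calibrate the injected noise from flat-field acquisitions):

      import numpy as np

      def anscombe(x):
          # Variance-stabilizing transform: Poisson noise -> variance ~1
          return 2.0 * np.sqrt(x + 3.0 / 8.0)

      def inverse_anscombe(y):
          return (y / 2.0) ** 2 - 3.0 / 8.0

      def simulate_lower_dose(image, dose_ratio, offset=0.0, rng=None):
          # Scale the standard-dose image (accounting for detector offset),
          # then add the missing signal-dependent noise in the Anscombe
          # domain: the scaled image carries variance ~dose_ratio there,
          # while a true reduced-dose image would have variance ~1.
          rng = rng or np.random.default_rng()
          scaled = (image - offset) * dose_ratio + offset
          extra_sd = np.sqrt(max(1.0 - dose_ratio, 0.0))
          noisy = anscombe(scaled) + rng.normal(0.0, extra_sd, size=image.shape)
          return inverse_anscombe(noisy)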

  20. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    International Nuclear Information System (INIS)

    Borges, Lucas R.; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C.; Bakic, Predrag R.; Maidment, Andrew D. A.

    2016-01-01

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe

  1. A new method to estimate heat source parameters in gas metal arc welding simulation process

    International Nuclear Information System (INIS)

    Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi

    2014-01-01

    Highlights:
    • A new method for accurate simulation of heat source parameters was presented.
    • The partial least-squares regression analysis was recommended in the method.
    • The welding experiment results verified the accuracy of the proposed method.
    Abstract: Heat source parameters are usually chosen based on experience in the welding simulation process, which introduces error into the simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to accurately estimate heat source parameters in welding simulation. In order to reduce the simulation complexity, a sensitivity analysis of the heat source parameters was carried out. The relationships between heat source parameters and weld pool characteristics (fusion width (W), penetration depth (D) and peak temperature (Tp)) were obtained with both multiple regression analysis (MRA) and partial least-squares regression analysis (PLSRA). Different regression models were employed in each regression method, and the two methods were compared. A welding experiment was carried out to verify the method. The results showed that both the MRA and the PLSRA are feasible and accurate for the prediction of heat source parameters in welding simulation. However, the PLSRA is recommended for its advantage of requiring less simulation data.
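
    In the same spirit, a small Python sketch of estimating heat source parameters by regression on weld pool characteristics (all data below are made up for illustration; in practice the mapping from parameters to W, D and Tp would come from welding simulations, not from the random stand-in used here):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      # Y = heat source parameters (e.g. three Goldak-type parameters, assumed),
      # X = resulting weld pool characteristics (W, D, Tp) from simulation runs.
      Y = rng.uniform([2.0, 2.0, 0.5], [6.0, 6.0, 2.0], size=(40, 3))
      X = Y @ rng.normal(size=(3, 3)) + rng.normal(scale=0.05, size=(40, 3))

      mra = LinearRegression().fit(X, Y)               # multiple regression (MRA)
      plsra = PLSRegression(n_components=2).fit(X, Y)  # partial least squares (PLSRA)

      measured = X[:1]  # pool characteristics measured in an experiment
      print("MRA estimate of heat source parameters:", mra.predict(measured))
      print("PLSRA estimate of heat source parameters:", plsra.predict(measured))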

  2. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    International Nuclear Information System (INIS)

    Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-01-01

    Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT) specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with

  3. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-08-01

    Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT) specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0

  4. To improve training methods in an engine room simulator-based training

    OpenAIRE

    Lin, Chingshin

    2016-01-01

    Simulator-based training is widely used in both industry and school education to reduce accidents nowadays. This study aims to suggest improved training methods to increase the effectiveness of engine room simulator training. The effectiveness of the training is assessed through performance indicators and self-evaluation by the participants. In the first phase of observation, the aim is to find out the possible shortcomings of current training methods based on train...

  5. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  6. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  7. Detecting overlapping community structure of networks based on vertex–vertex correlations

    International Nuclear Information System (INIS)

    Zarei, Mina; Izadi, Dena; Samani, Keivan Aghababaei

    2009-01-01

    Using the NMF (non-negative matrix factorization) method, the structure of overlapping communities in complex networks is investigated. For the feature matrix of the NMF method we introduce a vertex–vertex correlation matrix. The method is applied to some computer-generated and real-world networks. Simulations show that this feature matrix gives more reasonable results.
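
    A minimal Python sketch of this pipeline (our own illustration: one plausible vertex–vertex correlation matrix is the correlation between rows of the adjacency matrix; the paper's exact definition may differ):

      import numpy as np
      from sklearn.decomposition import NMF

      def overlapping_communities(adj, k):
          # Vertex-vertex correlation of neighbourhood vectors as features
          # (assumes no isolated vertices, so every row has variance > 0).
          corr = np.clip(np.corrcoef(adj), 0.0, None)  # NMF needs >= 0 input
          W = NMF(n_components=k, init="nndsvd", max_iter=500).fit_transform(corr)
          # Row-normalize: entry (v, c) ~ membership of vertex v in community c,
          # so a vertex can belong to several communities at once.
          return W / (W.sum(axis=1, keepdims=True) + 1e-12)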

  8. Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values

    Science.gov (United States)

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
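
    A compact Python sketch of a Brown-style combination (our own simplification, using Brown's well-known polynomial approximation of the covariance between -2 ln p terms for positively correlated tests):

      import numpy as np
      from scipy.stats import chi2

      def brown_combined_p(pvals, corr):
          # Combine correlated p-values via Brown's scaled chi-square
          # approximation to Fisher's statistic X = -2 * sum(log p);
          # `corr` is the k x k correlation matrix of the underlying tests.
          pvals, corr = np.asarray(pvals, float), np.asarray(corr, float)
          k = len(pvals)
          x = -2.0 * np.sum(np.log(pvals))
          mean = 2.0 * k
          cov_sum = 0.0
          for i in range(k):
              for j in range(i + 1, k):
                  r = max(corr[i, j], 0.0)
                  cov_sum += r * (3.25 + 0.75 * r)  # Brown's approximation
          var = 4.0 * k + 2.0 * cov_sum
          f = 2.0 * mean**2 / var  # effective degrees of freedom
          c = var / (2.0 * mean)   # scale factor
          return chi2.sf(x / c, f)

    For corr equal to the identity this reduces to Fisher's method (c = 1, f = 2k).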

  9. A heuristic method for simulating open-data of arbitrary complexity that can be used to compare and evaluate machine learning methods.

    Science.gov (United States)

    Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S

    2018-01-01

    A central challenge of developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some instances taken a leading role in the open data effort, starting with DNA microarrays. While real data from experimental and observational studies is necessary for developing computational methods, it is not sufficient, because it is not possible to know what the ground truth is in real data. Real data must be accompanied by simulated data in which the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data that specifically allows discrimination between different machine learning methods.

  10. Electronic correlation studies. III. Self-correlated field method. Application to 2S ground state and 2P excited state of three-electron atomic systems

    International Nuclear Information System (INIS)

    Lissillour, R.; Guerillot, C.R.

    1975-01-01

    The self-correlated field method is based on the insertion in the group product wave function of pair functions built upon a set of correlated ''local'' functions and of ''nonlocal'' functions. This work is an application to three-electron systems. The effects of the outer electron on the inner pair are studied. The total electronic energy and some intermediary results such as pair energies, Coulomb and exchange ''correlated'' integrals, are given. The results are always better than those given by conventional SCF computations and reach the same level of accuracy as those given by more laborious methods used in correlation studies. (auth)

  11. Meshfree simulation of avalanches with the Finite Pointset Method (FPM)

    Science.gov (United States)

    Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios

    2017-04-01

    Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.

  12. Simulation methods to estimate design power: an overview for applied research.

    Science.gov (United States)

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
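
    For the simple end of the spectrum, the approach can be sketched in a few lines of Python (an illustrative two-arm individual-randomized design with a made-up effect size; cluster designs would replace the data-generating step with a hierarchical model):

      import numpy as np
      from scipy.stats import ttest_ind

      def simulated_power(n_per_arm, effect, sd, alpha=0.05, n_sims=2000, seed=1):
          # Generate data under the assumed effect, run the planned test,
          # and report the fraction of simulations that reach significance.
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n_sims):
              control = rng.normal(0.0, sd, n_per_arm)
              treated = rng.normal(effect, sd, n_per_arm)
              hits += ttest_ind(treated, control).pvalue < alpha
          return hits / n_sims

      # e.g. power to detect a 0.4 SD gain in a child-growth z-score
      print(simulated_power(n_per_arm=100, effect=0.4, sd=1.0))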

  13. Viscosity of dilute suspensions of rodlike particles: A numerical simulation method

    Science.gov (United States)

    Yamamoto, Satoru; Matsuoka, Takaaki

    1994-02-01

    The recently developed simulation method, named the particle simulation method (PSM), is extended to predict the viscosity of dilute suspensions of rodlike particles. In this method a rodlike particle is modeled by bonded spheres. Each bond has three types of springs, for stretching, bending, and twisting deformation. The rod model can therefore deform by changing the bond distance, bond angle, and torsion angle between paired spheres. The rod model can represent a variety of rigidities by modifying the bond parameters related to Young's modulus and the shear modulus of the real particle. The time evolution of each constituent sphere of the rod model is followed by a molecular-dynamics-type approach. The intrinsic viscosity of a suspension of rodlike particles is derived by calculating the increased energy dissipation for each sphere of the rod model in a viscous fluid. With and without deformation of the particle, the motion of the rodlike particle was numerically simulated in a three-dimensional simple shear flow at a low particle Reynolds number and without Brownian motion of particles. The dependence of the intrinsic viscosity of the suspension on the orientation angle, rotation orbit, deformation, and aspect ratio of the particle was investigated. For the rigid rodlike particle, the simulated rotation orbit compared extremely well with the theoretical one obtained for a rigid ellipsoidal particle using Jeffery's equation. The simulated dependence of the intrinsic viscosity on the various factors was also consistent with theories for suspensions of rigid rodlike particles. For the flexible rodlike particle, the rotation orbit could be obtained by the particle simulation method, and it was also found that the intrinsic viscosity decreased as recoverable, flow-induced deformation of the rodlike particle occurred.

  14. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    Science.gov (United States)

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved at each timestep. We therefore propose a coupling method that decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
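
    The idea transfers to any time-stepped nonlinear solve. A toy Python sketch (our own scalar stand-in, not the authors' cardiovascular model: a constant-velocity Kalman filter predicts the moving root of g(x, t) = 0 to seed the Newton iteration, then treats the converged solution as its measurement):

      import numpy as np

      def newton(g, dg, x0, tol=1e-10, max_iter=50):
          x, it = x0, 0
          while abs(g(x)) > tol and it < max_iter:
              x -= g(x) / dg(x)
              it += 1
          return x, it

      g = lambda x, t: x**3 + x - np.sin(t)   # toy system whose root drifts in time
      dg = lambda x, t: 3 * x**2 + 1

      F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state: [root, root velocity]
      H = np.array([[1.0, 0.0]])
      Q, R = 1e-6 * np.eye(2), np.array([[1e-8]])
      s, P = np.zeros(2), np.eye(2)

      total_iters = 0
      for t in np.linspace(0.0, 5.0, 200):
          s, P = F @ s, F @ P @ F.T + Q                 # Kalman predict
          x, it = newton(lambda x: g(x, t), lambda x: dg(x, t), x0=s[0])
          total_iters += it
          z = np.array([x])                             # solution as "measurement"
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman update
          s = s + (K @ (z - H @ s)).ravel()
          P = (np.eye(2) - K @ H) @ P
      print("average Newton iterations per step:", total_iters / 200)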

  15. Statistics corner: A guide to appropriate use of correlation coefficient in medical research.

    Science.gov (United States)

    Mukaka, M M

    2012-09-01

    Correlation is a statistical method used to assess a possible linear association between two continuous variables. It is simple both to calculate and to interpret. However, misuse of correlation is so common among researchers that some statisticians have wished that the method had never been devised at all. The aim of this article is to provide a guide to the appropriate use of correlation in medical research and to highlight some misuses. Examples of the application of the correlation coefficient are provided using data from statistical simulations as well as real data. A rule of thumb for interpreting the size of a correlation coefficient is also provided.
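
    For instance, the two most common coefficients are one line each in Python (illustrative numbers; Spearman is the rank-based analogue, more robust to outliers and monotone nonlinearity):

      from scipy.stats import pearsonr, spearmanr

      height = [1.62, 1.70, 1.75, 1.80, 1.68, 1.85]
      weight = [58.0, 66.5, 71.0, 79.5, 63.0, 88.0]

      r, p = pearsonr(height, weight)          # linear association
      rho, p_rank = spearmanr(height, weight)  # monotonic (rank) association
      print(f"Pearson r = {r:.2f} (p = {p:.3f}), Spearman rho = {rho:.2f}")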

  16. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
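
    In symbols (our notation, a sketch of the construction rather than the paper's exact derivation), the transport equation for the level set function \phi is augmented with a spatially uniform source S(t),

      \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = S(t),

    and requiring that the volume enclosed by the zero level set stay constant, \frac{d}{dt}\int_\Omega H(\phi)\,dV = 0 with H the Heaviside function, fixes

      S(t) = \frac{\int_\Omega \delta(\phi)\,\mathbf{u}\cdot\nabla\phi\,dV}
                  {\int_\Omega \delta(\phi)\,dV},

    which acts as a source where mass is being lost and as a sink where it is being gained.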

  17. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    Science.gov (United States)

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
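
    The bottleneck is easy to reproduce with a generic rejection sampler (toy densities of our own choosing, not the DNJ model): the proposal is a neutral-like Dirichlet, the target reweights it by a selection-like factor, and the acceptance rate collapses as the two distributions separate.

      import numpy as np

      def rejection_sample(propose, weight, w_max, n, rng):
          # Target density proportional to proposal(x) * weight(x);
          # accept a proposal x with probability weight(x) / w_max.
          out, tries = [], 0
          while len(out) < n:
              x = propose(rng)
              tries += 1
              if rng.uniform() < weight(x) / w_max:
                  out.append(x)
          return np.array(out), tries

      rng = np.random.default_rng(0)
      sigma = 8.0  # toy "selection strength"; raise it and watch acceptance drop
      propose = lambda rng: rng.dirichlet([1.0, 1.0, 1.0])  # neutral-like proposal
      weight = lambda x: np.exp(-sigma * np.sum(x**2))      # non-neutral reweighting
      w_max = np.exp(-sigma / 3.0)  # max of weight on the simplex (x_i = 1/3)

      samples, tries = rejection_sample(propose, weight, w_max, 1000, rng)
      print("acceptance rate:", 1000 / tries)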

  18. A Modified SPH Method for Dynamic Failure Simulation of Heterogeneous Material

    Directory of Open Access Journals (Sweden)

    G. W. Ma

    2014-01-01

    A modified smoothed particle hydrodynamics (SPH) method is applied to simulate the failure process of heterogeneous materials. An elastoplastic damage model based on an extension form of the unified twin shear strength (UTSS) criterion is adopted. Polycrystalline modeling is introduced to generate the artificial microstructure of the specimen for the dynamic simulation of the Brazilian splitting test and the uniaxial compression test. The strain rate effect on the predicted dynamic tensile and compressive strength is discussed. The final failure patterns and the dynamic strength increments demonstrate good agreement with experimental results. It is illustrated that the polycrystalline modeling approach combined with the SPH method is promising for simulating more complex failure processes of heterogeneous materials.

  19. Irreducible Green's Functions method in the theory of highly correlated systems

    International Nuclear Information System (INIS)

    Kuzemsky, A.L.

    1994-09-01

    The self-consistent theory of correlation effects in Highly Correlated Systems (HCS) is presented. The novel Irreducible Green's Function (IGF) method is discussed in detail for the Hubbard model and the random Hubbard model. An interpolation solution for the quasiparticle spectrum, valid in both the atomic and band limits, is obtained. The IGF method makes it possible to calculate the quasiparticle spectra of many-particle systems with complicated spectra and strong interaction in a very natural and compact way. The essence of the method is deeply related to the notion of Generalized Mean Fields (GMF), which determine the elastic scattering corrections. The inelastic scattering corrections lead to the damping of the quasiparticles and are the main topic of the present consideration. The calculation of the damping has been done in a self-consistent way for both limits. For the random Hubbard model, the weak coupling case has been considered and the self-energy operator has been calculated using a combination of the IGF method and the Coherent Potential Approximation (CPA). Other applications of the method to the s-f model, Anderson model, Heisenberg antiferromagnet, electron-phonon interaction models and quasiparticle tunneling are discussed briefly. (author). 79 refs

  20. Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods

    Directory of Open Access Journals (Sweden)

    Oleci Pereira Frota

    Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity or stains were detected by visual inspection; when ≥2.5 colony-forming units per cm² were found in culture; and when ≥5 relative light units per cm² were found in the ATP-bioluminescence assay. Results: 720 analyses were performed, 240 per method. The overall rates of clean surfaces per visual inspection, culture and ATP-bioluminescence assay were 8.3%, 20.8% and 44.2% before C&D, and 92.5%, 50% and 84.2% after C&D, respectively (p<0.001). There were only occasional statistically significant relationships between methods. Conclusion: the methods did not present a good correlation, either quantitatively or qualitatively.

  1. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    Science.gov (United States)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes the key features of the method and provides a synopsis of the main results obtained by various groups using the method. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
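
    The statistic itself is compact; a minimal Python sketch of PVI as it is commonly defined (synthetic stand-in data; thresholds of about 3 or higher are often used to flag candidate coherent structures):

      import numpy as np

      def pvi(b, lag):
          # PVI(t; lag) = |db| / sqrt(<|db|^2>), with db the vector
          # increment b(t + lag) - b(t) of the field time series b
          # (shape: n_samples x 3) and <...> a time average.
          db = b[lag:] - b[:-lag]
          mag = np.linalg.norm(db, axis=1)
          return mag / np.sqrt(np.mean(mag**2))

      rng = np.random.default_rng(2)
      b = np.cumsum(rng.normal(size=(10000, 3)), axis=0)  # synthetic field
      events = np.where(pvi(b, lag=10) > 3.0)[0]          # candidate structures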

  2. Petascale molecular dynamics simulation using the fast multipole method on K computer

    KAUST Repository

    Ohno, Yousuke; Yokota, Rio; Koyama, Hiroshi; Morimoto, Gentaro; Hasegawa, Aki; Masumoto, Gen; Okimoto, Noriaki; Hirano, Yoshinori; Ibeid, Huda; Narumi, Tetsu; Taiji, Makoto

    2014-01-01

    In this paper, we report all-atom simulations of molecular crowding - a result from the full node simulation on the "K computer", which is a 10-PFLOPS supercomputer in Japan. The capability of this machine enables us to perform simulation of crowded cellular environments, which are more realistic compared to conventional MD simulations where proteins are simulated in isolation. Living cells are "crowded" because macromolecules comprise ∼30% of their molecular weight. Recently, the effects of crowded cellular environments on protein stability have been revealed through in-cell NMR spectroscopy. To measure the performance of the "K computer", we performed all-atom classical molecular dynamics simulations of two systems: target proteins in a solvent, and target proteins in an environment of molecular crowders that mimic the conditions of a living cell. Using the full system, we achieved 4.4 PFLOPS during a 520 million-atom simulation with cutoff of 28 Å. Furthermore, we discuss the performance and scaling of fast multipole methods for molecular dynamics simulations on the "K computer", as well as comparisons with Ewald summation methods. © 2014 Elsevier B.V. All rights reserved.

  3. Petascale molecular dynamics simulation using the fast multipole method on K computer

    KAUST Repository

    Ohno, Yousuke

    2014-10-01

    In this paper, we report all-atom simulations of molecular crowding - a result from the full node simulation on the "K computer", which is a 10-PFLOPS supercomputer in Japan. The capability of this machine enables us to perform simulation of crowded cellular environments, which are more realistic compared to conventional MD simulations where proteins are simulated in isolation. Living cells are "crowded" because macromolecules comprise ∼30% of their molecular weight. Recently, the effects of crowded cellular environments on protein stability have been revealed through in-cell NMR spectroscopy. To measure the performance of the "K computer", we performed all-atom classical molecular dynamics simulations of two systems: target proteins in a solvent, and target proteins in an environment of molecular crowders that mimic the conditions of a living cell. Using the full system, we achieved 4.4 PFLOPS during a 520 million-atom simulation with cutoff of 28 Å. Furthermore, we discuss the performance and scaling of fast multipole methods for molecular dynamics simulations on the "K computer", as well as comparisons with Ewald summation methods. © 2014 Elsevier B.V. All rights reserved.

  4. A method of simulating and visualizing nuclear reactions

    International Nuclear Information System (INIS)

    Atwood, C.H.; Paul, K.M.

    1994-01-01

    Teaching nuclear reactions to students is difficult because the mechanisms are complex and directly visualizing them is impossible. As a teaching tool, the authors have developed a method of simulating nuclear reactions using colliding water droplets. Videotape of the collisions, recorded with a high-shutter-speed camera and run frame-by-frame, shows details of the collisions that are analogous to nuclear reactions. The method for colliding the water drops and videotaping the collisions is shown.

  5. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for the dosimetry of radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, the simulation time must be increased to reduce the statistical uncertainty of the results. When particles are generated from the cobalt source in a simulation, many of them are cut off, so accurate simulation takes a long time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code, and there was no statistically significant difference in the simulated results.

  6. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has been used for the dosimetry of radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, the simulation time must be increased to reduce the statistical uncertainty of the results. When particles are generated from the cobalt source in a simulation, many of them are cut off, so accurate simulation takes a long time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code, and there was no statistically significant difference in the simulated results.

  7. Innovative teaching methods in the professional training of nurses – simulation education

    Directory of Open Access Journals (Sweden)

    Michaela Miertová

    2013-12-01

    Introduction: The article aims to highlight the use of innovative teaching methods within simulation education in the professional training of nurses abroad and to present our experience based on completing an intensive study programme at the School of Nursing, Midwifery and Social Work, University of Salford (United Kingdom) within the Intensive EU Lifelong Learning Programme (LLP) Erasmus EU RADAR 2013. Methods: The implementation of simulation methods such as role-play, case studies, simulation scenarios, practical workshops and a clinical skills workstation within the structured ABCDE approach (AIM© Assessment and Management Tool) was aimed at promoting the development of the theoretical knowledge and skills needed to recognize and manage acutely deteriorating patients. The structured SBAR approach (Acute SBAR Communication Tool) was used for training communication and information sharing among the members of the multidisciplinary health care team. The OSCE approach (Objective Structured Clinical Examination) was used for students' individual formative assessment. Results: Simulation education has proved to have many benefits in the professional training of nurses. It is held in safe, controlled and realistic conditions (in simulation laboratories reflecting real hospital and community care environments), with no risk of harming real patients, and is accompanied by debriefing, discussion and analysis of all the activities students have performed within the simulated scenario. Such a learning environment is supportive, challenging, constructive, motivating, engaging, skilled, flexible, inspiring and respectful. Simulation education is thus an effective, interactive, interesting, efficient and modern way of nursing education. Conclusion: Critical thinking and the clinical competences of nurses are crucial for early recognition of and appropriate response to acute deterioration of a patient's condition. These competences are important to ensure the provision of high-quality nursing care. Methods of

  8. Lattice Boltzmann method used to simulate particle motion in a conduit

    Directory of Open Access Journals (Sweden)

    Dolanský Jindřich

    2017-06-01

    A three-dimensional numerical simulation of particle motion in a pipe with a rough bed is presented. The simulation, based on the Lattice Boltzmann Method (LBM), employs the hybrid diffuse bounce-back approach to model moving boundaries. The bed of the pipe is formed by stationary spherical particles of the same size as the moving particles. Particle movements are induced by gravitational and hydrodynamic forces. To evaluate the hydrodynamic forces, the Momentum Exchange Algorithm is used. The LBM unified computational frame makes it possible to simulate both the particle motion and the fluid flow, and to study the mutual interactions of the carrier liquid flow and the particles as well as the particle–bed and particle–particle collisions. The trajectories of simulated and experimental particles are compared, with the Particle Tracking method used to track particle motion. The correctness of the applied approach is assessed.

  9. Numerical simulation methods for wave propagation through optical waveguides

    International Nuclear Information System (INIS)

    Sharma, A.

    1993-01-01

    The simulation of the field propagation through waveguides requires numerical solutions of the Helmholtz equation. For this purpose a method based on the principle of orthogonal collocation was recently developed. The method is also applicable to nonlinear pulse propagation through optical fibers. Some of the salient features of this method and its application to both linear and nonlinear wave propagation through optical waveguides are discussed in this report. 51 refs, 8 figs, 2 tabs

  10. Study on simulation methods of atrium building cooling load in hot and humid regions

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Yiqun; Li, Yuming; Huang, Zhizhong [Institute of Building Performance and Technology, Sino-German College of Applied Sciences, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Wu, Gang [Weldtech Technology (Shanghai) Co. Ltd. (China)

    2010-10-15

    In recent years, highly glazed atria have become popular because of their architectural aesthetics and their advantage of introducing daylight indoors. However, cooling load estimation for such atrium buildings is difficult due to the complex thermal phenomena that occur in the atrium space. The study aims to find a simplified method of estimating cooling loads through simulations for various types of atria in hot and humid regions. Atrium buildings are divided into different types, and for every type both CFD and energy models are developed. A standard method and a simplified one are proposed to simulate the cooling load of atria in EnergyPlus, based on the different room air temperature patterns that result from CFD simulation. The standard method incorporates CFD results as input to non-dimensional height room air models in EnergyPlus, and its simulation results are taken as the baseline against which the results of the simplified method are compared for every category of atrium building. To further validate the simplified method, an actual atrium office building was tested on site on a typical summer day, and the measured results were compared with simulation results obtained with the simplified method. Finally, appropriate methods for simulating the different types of atrium buildings are proposed. (author)

  11. A novel method for energy harvesting simulation based on scenario generation

    Science.gov (United States)

    Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min

    2018-06-01

    The energy harvesting network (EHN) is a new form of computer network. It converts ambient energy into usable electric energy and supplies this electrical energy as a primary or secondary power source to communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which cannot accurately capture the actual situation because it lacks grounding in real data. We propose an EHN simulation method based on scenario generation in this paper. Firstly, instead of setting a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios based on historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences, yielding a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an example of optimizing network throughput, whose optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.

  12. Restoring method for missing data of spatial structural stress monitoring based on correlation

    Science.gov (United States)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing stretches in the monitoring data record affect data analysis and the safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the three months of the season in which the data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once the number of correlated points exceeds 6. The stress baseline value of each construction step should be calculated before interpolating missing data from the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
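
    The interpolation step reduces to ordinary regression on the correlated gauges; a minimal Python sketch (function and variable names are ours, purely illustrative):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      def restore_missing(target, predictors):
          # Regress one gauge's stress on correlated gauges over the
          # valid samples, then predict the gaps. `target` is a 1-D
          # array with NaNs at the gaps; `predictors` is 2-D
          # (samples x correlated gauges) with no gaps.
          gaps = np.isnan(target)
          model = LinearRegression().fit(predictors[~gaps], target[~gaps])
          filled = target.copy()
          filled[gaps] = model.predict(predictors[gaps])
          return filled

      # Day and night records would be fitted separately, as in the study:
      # stress_day_filled = restore_missing(stress_day, correlated_day)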

  13. Partial distance correlation with methods for dissimilarities

    OpenAIRE

    Székely, Gábor J.; Rizzo, Maria L.

    2014-01-01

    Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions, and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation partly because the squared distance covari...

  14. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    Science.gov (United States)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, in permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high-Reynolds-number flow simulations, but is less common for low-Reynolds-number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing a decoupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since the other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers because of the penalty term. The Fractional Step Method allows iterative solution of the finite-element stiffness matrices, thereby allowing a more efficient solution. It also lends itself to parallel processing, since the velocity-component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the simulations predict the existence of 'channels' within the mushy zone during processing, and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material-property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The
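
    The decoupling can be sketched as the classical projection scheme (a generic semi-discrete form in our notation, not the paper's exact finite-element discretization): first advance an intermediate velocity without pressure, then solve a pressure Poisson equation, then project:

      \frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
        = -(\mathbf{u}^{n}\cdot\nabla)\mathbf{u}^{n}
          + \nu\,\nabla^{2}\mathbf{u}^{n} + \mathbf{f}^{n},
      \qquad
      \nabla^{2}p^{\,n+1} = \frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t},
      \qquad
      \mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla p^{\,n+1}.

    The Poisson solve enforces incompressibility of \mathbf{u}^{n+1}; for natural and thermo-solutal convection, buoyancy enters through the body force \mathbf{f} (e.g., a Boussinesq term driven by temperature and solute concentration).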

  15. Development of K-Basin High-Strength Homogeneous Sludge Simulants and Correlations Between Unconfined Compressive Strength and Shear Strength

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Yasuo; Baer, Ellen BK; Chun, Jaehun; Yokuda, Satoru T.; Schmidt, Andrew J.; Sande, Susan; Buchmiller, William C.

    2011-02-20

    potential for erosion, it is important to compare the measured shear strength to penetrometer measurements and to develop a correlation (or correlations) between UCS measured by a pocket penetrometer and direct shear strength measurements for various homogeneous and heterogeneous simulants. This study developed 11 homogeneous simulants, whose shear strengths vary from 4 to 170 kPa. With these simulants, we developed correlations between UCS measured by a Geotest E-280 pocket penetrometer and shear strength values measured by a Geonor H-60 hand-held vane tester and a more sophisticated bench-top unit, the Haake M5 rheometer. This was achieved with side-by-side measurements of the shear strength and UCS of the homogeneous simulants. The homogeneous simulants developed under this study consist of kaolin clay, plaster of Paris, and amorphous alumina CP-5 with water. The simulants also include modeling clay. The shear strength of most of these simulants is sensitive to various factors, including the simulant size, the intensity of mixing, and the curing time, even with given concentrations of simulant components. Table S.1 summarizes these 11 simulants and their shear strengths.

  16. Efficient SPECT scatter calculation in non-uniform media using correlated Monte Carlo simulation

    International Nuclear Information System (INIS)

    Beekman, F.J.

    1999-01-01

    Accurate simulation of scatter in projection data of single photon emission computed tomography (SPECT) is computationally extremely demanding for activity distributions in non-uniform dense media. This paper suggests how the computation time and memory requirements can be significantly reduced. First the scatter projection of a uniform dense object (P_SDSE) is calculated using a previously developed accurate and fast method which includes all orders of scatter (slab-derived scatter estimation), and then P_SDSE is transformed towards the desired projection P, which is based on the non-uniform object. The transform of P_SDSE is based on two first-order Compton scatter Monte Carlo (MC) simulated projections: one based on the uniform object (P_u) and the other on the object with non-uniformities (P_v). P is estimated by P̃ = P_SDSE · P_v/P_u. A tremendous decrease in noise in P̃ is achieved by tracking photon paths for P_v identical to those tracked for the calculation of P_u, and by using analytical rather than stochastic modelling of the collimator. The method was validated by comparing the results with standard MC-simulated scatter projections (P) of 99mTc and 201Tl point sources in a digital thorax phantom. After correction, excellent agreement was obtained between P̃ and P. The total computation time required to calculate an accurate scatter projection of an extended distribution in a thorax phantom on a PC is only a few tens of seconds per projection, which makes the method attractive for accurate scatter correction in clinical SPECT. Furthermore, the method removes the need for the excessive computer memory involved in previously proposed 3D model-based scatter correction methods. (author)
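
    The variance reduction comes from reusing the same photon histories (common random numbers) for both the uniform and non-uniform projections, so that the ratio P_v/P_u is far less noisy than with independent simulations. The toy sketch below illustrates this principle on a generic ratio estimator; it is not a SPECT photon-transport code, and all integrands and sample sizes are invented.

```python
import numpy as np

def f_uniform(x):      # stand-in for the per-history score on the uniform object
    return np.exp(-x)

def f_nonuniform(x):   # stand-in for the per-history score on the non-uniform object
    return np.exp(-1.3 * x)

def ratio_estimate(rng, n, correlated):
    x_num = rng.random(n)                       # histories for the numerator
    x_den = x_num if correlated else rng.random(n)
    return f_nonuniform(x_num).mean() / f_uniform(x_den).mean()

rng = np.random.default_rng(3)
reps, n = 200, 10_000
indep = [ratio_estimate(rng, n, correlated=False) for _ in range(reps)]
corr = [ratio_estimate(rng, n, correlated=True) for _ in range(reps)]
print("std of ratio, independent histories:", np.std(indep))
print("std of ratio, common histories     :", np.std(corr))
```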

  17. A Comparative Study on the Refueling Simulation Method for a CANDU Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Do, Quang Binh; Choi, Hang Bok; Roh, Gyu Hong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    The Canada deuterium uranium (CANDU) reactor calculation is typically performed by the RFSP code to obtain the power distribution upon refueling. In order to assess the equilibrium behavior of the CANDU reactor, a few methods have been suggested for selecting the refueling channel. For example, an automatic refueling channel selection method (AUTOREFUEL) and a deterministic method (GENOVA) were developed, based on reactor operation experience and on generalized perturbation theory, respectively. Both programs were designed to keep the zone controller unit (ZCU) water level within a reasonable range during a continuous refueling simulation. However, a global optimization of the refueling simulation, including constraints on the discharge burn-up, maximum channel power (MCP), maximum bundle power (MBP), channel power peaking factor (CPPF) and the ZCU water level, was not achieved. In this study, an evolutionary algorithm has been developed for the CANDU reactor: a hybrid method based on the genetic algorithm, the elitism strategy and heuristic rules for a multi-cycle, multi-objective optimization of the refueling simulation. This paper presents the optimization model of the genetic algorithm and compares the results with those obtained by other simulation methods.

  18. A novel normalization method based on principal component analysis to reduce the effect of peak overlaps in two-dimensional correlation spectroscopy

    Science.gov (United States)

    Wang, Yanwei; Gao, Wenying; Wang, Xiaogong; Yu, Zhiwu

    2008-07-01

    Two-dimensional correlation spectroscopy (2D-COS) has been widely used to separate overlapped spectroscopic bands. However, band overlap may sometimes cause misleading results in the 2D-COS spectra, especially if one peak is embedded within another peak by the overlap. In this work, we propose a new normalization method based on principal component analysis (PCA). For each spectrum under discussion, the first principal component of the PCA is simply taken as the normalization factor of the spectrum. It is demonstrated that the method works well with simulated dynamic spectra. Successful results have also been obtained from the analysis of an overlapped band in the wavenumber range 1440-1486 cm⁻¹ for the evaporation process of a solution containing behenic acid, methanol, and chloroform.
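
    A minimal sketch of the normalization idea, assuming a matrix of dynamic spectra (rows are spectra, columns are wavenumber points): each spectrum is divided by its score on the first principal component. This is an illustrative reading of the method with synthetic data, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dynamic spectra: 20 spectra x 300 wavenumber points, one
# common band scaled by a concentration-like factor, plus noise.
wn = np.linspace(1440, 1486, 300)
band = np.exp(-0.5 * ((wn - 1463) / 6.0) ** 2)
scale = np.linspace(1.0, 0.2, 20)[:, None]           # evaporation-like decay
spectra = scale * band + rng.normal(0, 0.01, (20, 300))

# First principal component of the mean-centred data via SVD.
centred = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = vt[0] * np.sign(vt[0].sum())                   # fix sign: positive scores

# Each spectrum's score on PC1 is taken as its normalization factor.
scores = spectra @ pc1
normalized = spectra / scores[:, None]

print("scores before:", np.round(scores[:3], 3), "...")
print("max band-height spread after normalization:", normalized.std(axis=0).max())
```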

  19. Rapid simulation of spatial epidemics: a spectral method.

    Science.gov (United States)

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
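
    The core trick, evaluating the force of infection everywhere as a convolution of the transmission kernel with the infection 'image', can be sketched as follows (a toy setting on a periodic grid; the kernel shape and all parameters are invented, and the real method handles off-grid individuals and exact stochastic event times).

```python
import numpy as np

n = 256                                    # grid side length
rng = np.random.default_rng(5)

# Infection 'image': number of infectious individuals in each grid cell.
infectious = (rng.random((n, n)) < 0.001).astype(float)

# Isotropic transmission kernel: rate decays with distance from a source.
x = np.minimum(np.arange(n), n - np.arange(n))      # periodic distances
dx, dy = np.meshgrid(x, x, indexing="ij")
dist = np.hypot(dx, dy)
kernel = np.exp(-dist / 5.0)               # exponential tail, scale = 5 cells

# Spatial force of infection on every cell at once via FFT convolution.
foi = np.real(np.fft.ifft2(np.fft.fft2(infectious) * np.fft.fft2(kernel)))

# Stochastic infection events for susceptibles over a short time step dt.
dt, beta = 0.1, 0.02
p_infect = 1.0 - np.exp(-beta * foi * dt)
new_cases = rng.random((n, n)) < p_infect
print("new infections this step:", int(new_cases.sum()))
```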

  20. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load-shedding strategy and the simulation process are introduced in detail within each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.

  1. A DATA FIELD METHOD FOR URBAN REMOTELY SENSED IMAGERY CLASSIFICATION CONSIDERING SPATIAL CORRELATION

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Full Text Available Spatial correlation between pixels is important information for remotely sensed imagery classification. The data field method and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to characterise and quantify well the spatial correlation between each pixel and its neighbourhood pixels, but the extracted object is badly delineated, with the distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multiple features (e.g. the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.

  2. Matrix elements and few-body calculations within the unitary correlation operator method

    International Nuclear Information System (INIS)

Roth, R.; Hergert, H.; Papakonstantinou, P.; Neff, T.; Feldmeier, H.

    2005-01-01

    We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by an unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges. (orig.)

  4. Neural correlates of olfactory and visual memory performance in 3D-simulated mazes after intranasal insulin application.

    Science.gov (United States)

    Brünner, Yvonne F; Rodriguez-Raecke, Rea; Mutic, Smiljana; Benedict, Christian; Freiherr, Jessica

    2016-10-01

    This fMRI study was intended to establish 3D-simulated mazes with olfactory and visual cues and to examine the effect of intranasally applied insulin on memory performance in healthy subjects. The effect of insulin on hippocampus-dependent brain activation was explored using a double-blind, placebo-controlled design. Following intranasal administration of either insulin (40 IU) or placebo, 16 male subjects participated in two experimental MRI sessions with olfactory and visual mazes. Each maze included two separate runs. The first was an encoding maze, during which subjects learned eight olfactory or eight visual cues at different target locations. The second was a recall maze, during which subjects were asked to remember the target cues at spatial locations. For the eleven subjects included in the fMRI analysis, we were able to validate brain activation for odor perception and visuospatial tasks. However, we did not observe an enhancement of declarative memory performance in our behavioral data, or of hippocampal activity in response to insulin application in the fMRI analysis. It is therefore possible that intranasal insulin application is sensitive to methodological variations, e.g. the timing of task execution and the dose applied. Findings from this study suggest that our method of 3D-simulated mazes is feasible for studying the neural correlates of olfactory and visual memory performance. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Technical note: Comparison of metal-on-metal hip simulator wear measured by gravimetric, CMM and optical profiling methods

    Science.gov (United States)

    Alberts, L. Russell; Martinez-Nogues, Vanesa; Baker Cook, Richard; Maul, Christian; Bills, Paul; Racasan, R.; Stolz, Martin; Wood, Robert J. K.

    2018-03-01

    Simulation of wear in artificial joint implants is critical for evaluating implant designs and materials. Traditional protocols employ the gravimetric method to determine the loss of material by measuring the weight of the implant components before and after various test intervals and after the completed test. However, the gravimetric method cannot identify the location, area coverage or maximum depth of the wear, and it has difficulties with proportionally small weight changes in relatively heavy implants. In this study, we compare the gravimetric method with two geometric surface methods: an optical light method (RedLux) and a coordinate measuring method (CMM). We tested ten Adept hips in a simulator for 2 million cycles (MC). Gravimetric and optical methods were performed at 0.33, 0.66, 1.00, 1.33 and 2 MC. CMM measurements were done before and after the test. A high correlation was found between the gravimetric and optical methods for both heads (R² = 0.997) and cups (R² = 0.96). Both geometric methods (optical and CMM) measured more volume loss than the gravimetric method (for the heads, p = 0.004 (optical) and p = 0.08 (CMM); for the cups, p = 0.01 (optical) and p = 0.003 (CMM)). Two cups recorded negative wear at 2 MC by the gravimetric method, but none did by either the optical method or by CMM. The geometric methods were prone to confounding factors such as surface deformation, and the gravimetric method could be confounded by protein absorption and backside wear. Both of the geometric methods were able to show the location, area covered and depth of the wear on the bearing surfaces, and to track their changes during the test run, providing significant advantages over solely using the gravimetric method.

  6. 3D simulation of friction stir welding based on movable cellular automaton method

    Science.gov (United States)

    Eremina, Galina M.

    2017-12-01

    The paper is devoted to the 3D computer simulation of the peculiarities of material flow in friction stir welding (FSW). The simulation was performed with the movable cellular automaton (MCA) method, a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated using computational fluid mechanics, treating the material as a continuum and ignoring its structure. The MCA method instead considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulations of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of plasticized material in the workpiece thickness direction. Nevertheless, an optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.

  7. Method for simulating dose reduction in digital mammography using the Anscombe transformation.

    Science.gov (United States)

    Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C

    2016-06-01

    This work proposes an accurate method for simulating dose reduction in digital mammography, starting from a clinical image acquired with a standard dose. The method consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and grey-level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses (five exposures per dose, with 256 nonoverlapping ROIs extracted from each image) and using uniform images. The authors simulated lower-dose images and compared these with the real images, evaluating the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images; the relative average error for the local variance was smaller than 1%. A new method is thus proposed for simulating dose reduction in clinical mammograms, in which the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise of the simulated images closely match those of real images acquired at the reduced dose.
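
    A minimal sketch of the noise-injection idea, assuming a linear detector with purely Poisson noise: the standard-dose image is scaled to the target dose, and the missing quantum noise is added in the Anscombe domain, where Poisson-like noise has approximately unit variance. The noise model and parameters are simplified stand-ins, not the authors' calibrated mammography pipeline.

```python
import numpy as np

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def simulate_low_dose(image, f, rng):
    """Simulate acquisition at a fraction f of the standard dose
    (simplified: pure Poisson noise, unit detector gain, no offset)."""
    scaled = f * image                     # mean signal at the reduced dose
    # In the Anscombe domain the scaled image carries noise variance ~f,
    # while a genuine low-dose image would have variance ~1: add the gap.
    y = anscombe(scaled) + rng.normal(0.0, np.sqrt(1.0 - f), image.shape)
    return np.clip(inverse_anscombe(y), 0.0, None)

rng = np.random.default_rng(6)
lam = np.full((512, 512), 400.0)           # flat-field mean counts
standard = rng.poisson(lam).astype(float)  # standard-dose acquisition
simulated = simulate_low_dose(standard, f=0.5, rng=rng)
reference = rng.poisson(0.5 * lam).astype(float)
print("simulated variance:", round(simulated.var(), 1),
      " true low-dose variance:", round(reference.var(), 1))
```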

  8. A direct simulation method for flows with suspended paramagnetic particles

    NARCIS (Netherlands)

    Kang, T.G.; Hulsen, M.A.; Toonder, den J.M.J.; Anderson, P.D.; Meijer, H.E.H.

    2008-01-01

    A direct numerical simulation method based on the Maxwell stress tensor and a fictitious domain method has been developed to solve flows with suspended paramagnetic particles. The numerical scheme enables us to take into account both hydrodynamic and magnetic interactions between particles in a

  9. Simulating colloid hydrodynamics with lattice Boltzmann methods

    International Nuclear Information System (INIS)

    Cates, M E; Stratford, K; Adhikari, R; Stansell, P; Desplat, J-C; Pagonabarraga, I; Wagner, A J

    2004-01-01

    We present a progress report on our work on lattice Boltzmann methods for colloidal suspensions. We focus on the treatment of colloidal particles in binary solvents and on the inclusion of thermal noise. For a benchmark problem of colloids sedimenting and becoming trapped by capillary forces at a horizontal interface between two fluids, we discuss the criteria for parameter selection, and address the inevitable compromise between computational resources and simulation accuracy

  10. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to address the multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of the passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
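
    A compact illustration of the idea on a toy limit state with a known answer follows; the adaptive levels, the p0 = 0.1 level probability, and the simple vector Metropolis step are common textbook choices, not the specific implementation used for the AP1000 analysis.

```python
import numpy as np
from math import erfc, sqrt

def subset_simulation(g, dim, n=1000, p0=0.1, rng=None):
    """Estimate P[g(X) > 0] for a small failure probability, X ~ N(0, I).

    Minimal sketch: intermediate failure levels are chosen adaptively so
    each conditional probability is ~p0; conditional samples come from a
    simple Metropolis random walk restricted to the current failure domain.
    """
    rng = rng or np.random.default_rng(13)
    x = rng.normal(size=(n, dim))
    gx = np.array([g(xi) for xi in x])
    prob = 1.0
    for _ in range(50):                        # safety cap on levels
        level = np.quantile(gx, 1.0 - p0)
        if level >= 0.0:                       # failure threshold reached
            break
        prob *= p0
        seeds = x[gx > level]
        x_new, g_new = [], []
        for i in range(n):                     # regrow the sample by MCMC
            xi = seeds[i % len(seeds)]
            cand = xi + 0.8 * rng.normal(size=dim)
            ok = rng.random() < np.exp(0.5 * (xi @ xi - cand @ cand))
            xi = cand if (ok and g(cand) > level) else xi
            x_new.append(xi)
            g_new.append(g(xi))
        x, gx = np.array(x_new), np.array(g_new)
    return prob * np.mean(gx > 0.0)

# Toy 'functional failure': linear limit state with a known exact answer.
beta = 3.5                                     # reliability index
g = lambda x: x.sum() / np.sqrt(len(x)) - beta
print("subset simulation estimate:", subset_simulation(g, dim=10))
print("exact failure probability :", 0.5 * erfc(beta / sqrt(2.0)))
```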

  11. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the fourth issue, showing an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed based on processes such as binary collision approximation, molecular dynamics, kinetic Monte Carlo, the reaction rate method and dislocation dynamics. (T. Tanaka)

  12. Evaluation of an improved method of simulating lung nodules in chest tomosynthesis

    International Nuclear Information System (INIS)

    Svalkvist, Angelica; Allansdotter Johnsson, Aase; Vikgren, Jenny

    2012-01-01

    Background: Simulated pathology is a valuable complement to clinical images in studies aiming to evaluate an imaging technique. For a study using simulated pathology to be valid, it is important that the simulated pathology realistically reflects the characteristics of real pathology. Purpose: To perform a thorough evaluation of a nodule simulation method for chest tomosynthesis, comparing the detection rate and appearance of the artificial nodules with those of real nodules in an observer performance experiment. Material and Methods: A cohort consisting of 64 patients, 38 with a total of 129 identified pulmonary nodules and 26 without identified pulmonary nodules, was used in the study. Simulated nodules, matching the real clinically found pulmonary nodules in size, attenuation, and location, were created and randomly inserted into the tomosynthesis section images of the patients. Three thoracic radiologists and one radiology resident reviewed the images in an observer performance study divided into two parts: the first part included nodule detection, and the second part included rating of the visual appearance of the nodules. The results were evaluated using a modified receiver-operating characteristic (ROC) analysis. Results: The sensitivities for real and simulated nodules were comparable, as the area under the modified ROC curve (AUC) was close to 0.5 for all observers (range, 0.43-0.55). Even though the ratings of visual appearance for real and simulated nodules overlapped considerably, the statistical analysis revealed that the observers were able to separate simulated nodules from real nodules (AUC range, 0.70-0.74). Conclusion: The simulation method can be used to create artificial lung nodules with detectability similar to that of real nodules in chest tomosynthesis, although experienced thoracic radiologists may be able to distinguish them from real nodules

  13. Amyloid oligomer structure characterization from simulations: A general method

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Li, Mai Suan [Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw (Poland); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France)

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. In this way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  14. Simulation As a Method To Support Complex Organizational Transformations in Healthcare

    NARCIS (Netherlands)

    Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos

    2010-01-01

    In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that

  15. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    Science.gov (United States)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested as an optimisation tool for sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on a cylinder when bent to fit it. A method for the FEM analysis of an arbitrary piston ring geometry is implemented in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented, and an example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
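
    For readers unfamiliar with the algorithm, a generic simulated annealing loop is sketched below on a stand-in objective; it is not the ANSYS/APDL ring optimisation, and the objective function, cooling schedule, and move size are arbitrary choices.

```python
import numpy as np

def simulated_annealing(objective, x0, rng, t0=1.0, cooling=0.995, steps=20_000):
    """Generic simulated annealing: accept worse moves with probability
    exp(-delta / T) so the search can escape local minima while T is high."""
    x, fx = np.asarray(x0, float), objective(x0)
    best_x, best_f = x.copy(), fx
    t = t0
    for _ in range(steps):
        cand = x + rng.normal(0.0, 0.1, size=x.shape)   # random neighbour
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling                                    # geometric cooling
    return best_x, best_f

# Stand-in objective: deviation of a polynomial 'pressure profile' from a
# demanded constant pressure (loosely mimicking the ring-fitting idea).
theta = np.linspace(0.0, np.pi, 100)
demanded = np.ones_like(theta)

def objective(coeffs):
    profile = np.polyval(coeffs, theta)
    return np.mean((profile - demanded) ** 2)

rng = np.random.default_rng(7)
coeffs, err = simulated_annealing(objective, x0=np.zeros(4), rng=rng)
print("best coefficients:", np.round(coeffs, 3), " mse:", err)
```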

  16. The afforestation problem: a heuristic method based on simulated annealing

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1992-01-01

    This paper presents the afforestation problem, that is, the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented.

  17. A method of simulating intensity modulation-direct detection WDM systems

    Institute of Scientific and Technical Information of China (English)

    HUANG Jing; YAO Jian-quan; LI En-bang

    2005-01-01

    In the simulation of intensity modulation-direct detection WDM systems, when dispersion and nonlinear effects play equally important roles, the intensity fluctuation caused by cross-phase modulation (XPM) may be overestimated as a result of an improper step size. Therefore, the step size in a numerical simulation should be selected so as to suppress the false XPM intensity modulation (keeping it much smaller than the signal power). According to this criterion, the step size is variable along the fiber. For a WDM system, the step size depends on the channel separation, and different types of transmission fiber have different step sizes. In the split-step Fourier method, this criterion can reduce simulation time, and when the step size is larger than 100 meters, the simulation accuracy can also be improved.
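
    A minimal split-step Fourier propagation step is sketched below (a single channel and a simplified nonlinear Schrödinger equation with only second-order dispersion and Kerr nonlinearity; the parameter values are placeholders, and a real IM-DD WDM simulation would add multiple channels, loss, and the adaptive step-size criterion discussed above).

```python
import numpy as np

# Placeholder fiber and grid parameters (not taken from the paper).
beta2 = -21e-27            # group-velocity dispersion, s^2/m
gamma = 1.3e-3             # Kerr nonlinearity, 1/(W*m)
dz = 100.0                 # split-step size, m
nt, t_win = 1024, 400e-12
t = np.linspace(-t_win / 2, t_win / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

# Input: one Gaussian pulse standing in for a modulated channel (1 mW peak).
a = np.sqrt(1e-3) * np.exp(-((t / 25e-12) ** 2)) + 0j

half_disp = np.exp(0.25j * beta2 * w**2 * dz)    # half-step linear operator

def split_step(a, n_steps):
    """Symmetrized split step: half dispersion, full Kerr phase, half dispersion."""
    for _ in range(n_steps):
        a = np.fft.ifft(half_disp * np.fft.fft(a))
        a = a * np.exp(1j * gamma * np.abs(a) ** 2 * dz)
        a = np.fft.ifft(half_disp * np.fft.fft(a))
    return a

out = split_step(a, n_steps=100)                 # 10 km of fiber
dt_s = t[1] - t[0]
print("pulse energy in :", np.sum(np.abs(a) ** 2) * dt_s)
print("pulse energy out:", np.sum(np.abs(out) ** 2) * dt_s)   # conserved
```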

  18. Simulating Interface Growth and Defect Generation in CZT – Simulation State of the Art and Known Gaps

    Energy Technology Data Exchange (ETDEWEB)

    Henager, Charles H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gao, Fei [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lin, Guang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bylaska, Eric J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zabaras, Nicholas [Cornell Univ., Ithaca, NY (United States)

    2012-11-01

    This one-year study-topic project will survey and investigate the state of the art of modeling and simulation methods suitable for performing fine-scale, fully 3D modeling of the growth of CZT crystals at the melt-solid interface, and for correlating physical growth and post-growth conditions with the generation and incorporation of defects into the solid CZT crystal. In the course of this study, the project will also identify the critical gaps in our knowledge of modeling and simulation techniques, in terms of what would need to be developed in order to perform accurate physical simulations of defect generation in melt-grown CZT. The transformational nature of this study lies in investigating, for the first time, modeling and simulation methods for describing microstructural evolution during crystal growth and in identifying the critical gaps in our knowledge of such methods, which is recognized as having tremendous scientific impact for future model development in a wide variety of materials science areas.

  19. Set simulation of a turbulent arc by Monte-Carlo method

    International Nuclear Information System (INIS)

    Zhukov, M.F.; Devyatov, B.N.; Nazaruk, V.I.

    1982-01-01

    A method for simulating turbulent arc fluctuations is suggested, based on a probabilistic set description of conducting-channel displacements over the plane net nodes, taking into account the turbulent eddies that cause non-uniformity of the displacement field. The problem is treated in terms of random set theory. Methods to control the displacements by varying the local displacement sets are described. A local-set approach to turbulent arc simulation is used for a statistical study of the evolution of the arc form in a turbulent gas flow. The method implies the performance of numerical experiments on a computer. Various ways to solve the problem of controlling the geometric form of an arc column on a model are described. Also considered is the organization of physical experiments to obtain the information required for the identification of local sets. The suggested method of applying mathematical experiments is associated with the principles of an operational game. (author)

  20. Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods

    Directory of Open Access Journals (Sweden)

    L. Brancik

    2011-04-01

    Full Text Available The paper deals with techniques for the computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into the class of finite-difference time-domain (FDTD) methods useful for solving various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along nonuniform MTL wires and their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable, whose analytical solutions are known, and some examples of the simulation of both uniform and nonuniform MTLs are presented. Based on a Matlab implementation, CPU times are analyzed to compare the efficiency of the methods. Some results for nonlinear MTL simulation are presented as well.

  1. Evaluation of null-point detection methods on simulation data

    Science.gov (United States)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft, resembling the CLUSTER configuration, propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.

  2. Hybrid numerical methods for multiscale simulations of subsurface biogeochemical processes

    International Nuclear Information System (INIS)

    Scheibe, T D; Tartakovsky, A M; Tartakovsky, D M; Redden, G D; Meakin, P

    2007-01-01

    Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools have been developed, each with its own characteristic scale. Important examples include 1. molecular simulations (e.g., molecular dynamics); 2. simulation of microbial processes at the cell level (e.g., cellular automata or particle individual-based models); 3. pore-scale simulations (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics); and 4. macroscopic continuum-scale simulations (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization. This paper describes the specific methods/codes being used at each

  3. A hybrid measure-correlate-predict method for long-term wind condition assessment

    International Nuclear Information System (INIS)

    Zhang, Jie; Chowdhury, Souma; Messac, Achille; Hodge, Bri-Mathias

    2014-01-01

    Highlights: • A hybrid measure-correlate-predict (MCP) methodology with greater accuracy is developed. • Three sets of performance metrics are proposed to evaluate the hybrid MCP method. • Both wind speed and direction are considered in the hybrid MCP method. • The best combination of MCP algorithms is determined. • The developed hybrid MCP method is uniquely helpful for long-term wind resource assessment. - Abstract: This paper develops a hybrid measure-correlate-predict (MCP) strategy to assess long-term wind resource variations at a farm site. The hybrid MCP method uses recorded data from multiple reference stations to estimate long-term wind conditions at a target wind plant site with greater accuracy than is possible with data from a single reference station. The weight of each reference station in the hybrid strategy is determined by the (i) distance and (ii) elevation differences between the target farm site and each reference station. In this case, the wind data is divided into sectors according to the wind direction, and the MCP strategy is implemented for each wind direction sector separately. The applicability of the proposed hybrid strategy is investigated using five MCP methods: (i) the linear regression; (ii) the variance ratio; (iii) the Weibull scale; (iv) the artificial neural networks; and (v) the support vector regression. To implement the hybrid MCP methodology, we use hourly averaged wind data recorded at five stations in the state of Minnesota between 07-01-1996 and 06-30-2004. Three sets of performance metrics are used to evaluate the hybrid MCP method. The first set of metrics analyze the statistical performance, including the mean wind speed, wind speed variance, root mean square error, and mean absolute error. The second set of metrics evaluate the distribution of long-term wind speed; to this end, the Weibull distribution and the Multivariate and Multimodal Wind Distribution models are adopted. The third set of metrics analyze
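
    A toy sketch of the weighting idea follows: per-station linear MCP models are combined with weights that decrease with the distance and elevation difference between each reference station and the target site. The specific weighting formula and all data are invented for illustration; the paper's five MCP algorithms and direction sectors are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical stations: (distance to target [km], elevation difference [m]).
stations = [(12.0, 15.0), (40.0, 120.0), (75.0, 60.0)]

# Concurrent measurement period: reference speeds and target speeds (m/s).
n = 2000
refs = [np.clip(8 + 2.5 * rng.standard_normal(n), 0, None) for _ in stations]
target = 0.9 * refs[0] + 0.05 * refs[1] + 0.05 * refs[2] + rng.normal(0, 0.5, n)

# Per-station MCP: simple linear regression target ~ a*ref + b.
models = [np.polyfit(r, target, 1) for r in refs]

# Hybrid weights: inverse of a (distance x elevation-difference) penalty,
# an assumed form loosely following the 'distance and elevation' criterion.
raw = np.array([1.0 / (d * (1.0 + dz / 100.0)) for d, dz in stations])
w = raw / raw.sum()

# Long-term prediction: weighted combination of per-station estimates.
long_refs = [np.clip(8 + 2.5 * rng.standard_normal(n), 0, None) for _ in stations]
pred = sum(wi * np.polyval(m, r) for wi, m, r in zip(w, models, long_refs))
print("weights:", np.round(w, 3), " predicted long-term mean:", pred.mean())
```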

  4. New method of processing heat treatment experiments with numerical simulation support

    Science.gov (United States)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for the numerical simulation of welding processes with laboratory research are described, and a new method of processing heat-treatment experiments is proposed that yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable with the cooling of larger parts. Results from this method of testing make the boundary conditions of the real cooling process more accurate, and can also be used to improve software databases and optimize computational models. The aim is to refine the computation of temperature fields for large hardened parts, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximum thickness of the processed part, and given cooling conditions. The paper also presents a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results, showing how even small changes influence the distributions of temperature, metallurgical phases, hardness, and stresses. This experiment also provides not only input data and data enabling optimization of the computational model, but at the same time verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.

  5. A simple method for potential flow simulation of cascades

    Indian Academy of Sciences (India)

    vortex panel method to simulate potential flow in cascades is presented. The cascade ... The fluid loading on the blades, such as the normal force and pitching moment, may ... of such discrete infinite array singularities along the blade surface.

  6. Secure optical verification using dual phase-only correlation

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Zhang, Yan; Xie, Zhenwei; Liu, Zhengjun

    2015-01-01

    We introduce a security-enhanced optical verification system using dual phase-only correlation based on a novel correlation algorithm. By employing a nonlinear encoding, the inherent locks of the verification system are obtained as real-valued random distributions, and the identity keys assigned to authorized users are designed as pure phases. The verification process is implemented as a two-step correlation, so only authorized identity keys can output the discriminating auto-correlation and cross-correlation signals that satisfy the preset threshold values. Compared with traditional phase-only-correlation-based verification systems, a higher security level against counterfeiting and collisions is obtained, which is demonstrated by cryptanalysis using known attacks, such as the known-plaintext attack and the chosen-plaintext attack. Optical experiments as well as the necessary numerical simulations are carried out to support the proposed verification method. (paper)
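
    Plain phase-only correlation, the building block that the two-step scheme extends, can be sketched in a few lines: the cross-power spectrum of two images is normalized to unit magnitude, and its inverse transform yields a sharp peak when the inputs match. This is the textbook POC, not the paper's security-enhanced dual algorithm.

```python
import numpy as np

def phase_only_correlation(f, g, eps=1e-12):
    """Return the POC surface of two equal-size 2-D arrays."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    poc = np.fft.ifft2(cross / (np.abs(cross) + eps))   # phase-only spectrum
    return np.real(poc)

rng = np.random.default_rng(9)
img = rng.random((128, 128))
shifted = np.roll(img, (5, -7), axis=(0, 1))            # same image, shifted
other = rng.random((128, 128))                          # unrelated image

match = phase_only_correlation(img, shifted)
mismatch = phase_only_correlation(img, other)
print("match peak   :", match.max())     # ~1: sharp correlation peak
print("mismatch peak:", mismatch.max())  # <<1: no significant peak
```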

  7. Numerical Simulation of the Heston Model under Stochastic Correlation

    Directory of Open Access Journals (Sweden)

    Long Teng

    2017-12-01

    Full Text Available Stochastic correlation models have become increasingly important in financial markets. In order to be able to price vanilla options in stochastic volatility and correlation models, in this work we study the extension of the Heston model obtained by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
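
    A minimal Euler-Maruyama sketch of a Heston-type model with a stochastically varying spot-variance correlation follows. The mean-reverting, [-1, 1]-bounded correlation process and all parameter values are assumptions made for illustration, not necessarily the dynamics studied in the paper.

```python
import numpy as np

def heston_stoch_corr_call(strike=100.0, T=1.0, n_paths=50_000, n_steps=250):
    rng = np.random.default_rng(10)
    dt = T / n_steps
    s = np.full(n_paths, 100.0)              # spot price
    v = np.full(n_paths, 0.04)               # variance (CIR dynamics)
    rho = np.full(n_paths, -0.5)             # stochastic spot-variance correlation
    r, kappa, theta, xi = 0.02, 2.0, 0.04, 0.3
    a, b, c = 1.0, -0.5, 0.2                 # correlation SDE parameters (assumed)
    for _ in range(n_steps):
        z1, z2, z3 = rng.standard_normal((3, n_paths)) * np.sqrt(dt)
        dw_s = rho * z1 + np.sqrt(1.0 - rho**2) * z2   # correlated spot noise
        s = s * np.exp((r - 0.5 * v) * dt + np.sqrt(v) * dw_s)
        v = np.abs(v + kappa * (theta - v) * dt + xi * np.sqrt(v) * z1)  # reflect at 0
        # Mean-reverting correlation kept in (-1, 1), Jacobi-style diffusion.
        rho = rho + a * (b - rho) * dt + c * np.sqrt(np.clip(1 - rho**2, 0, None)) * z3
        rho = np.clip(rho, -0.999, 0.999)
    return np.exp(-r * T) * np.maximum(s - strike, 0.0).mean()

print("MC price of a 1y ATM European call:", round(heston_stoch_corr_call(), 3))
```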

  8. Application of a Perturbation Method for Realistic Dynamic Simulation of Industrial Robots

    International Nuclear Information System (INIS)

    Waiboer, R. R.; Aarts, R. G. K. M.; Jonker, J. B.

    2005-01-01

    This paper presents the application of a perturbation method for the closed-loop dynamic simulation of a rigid-link manipulator with joint friction. In this method the perturbed motion of the manipulator is modelled as a first-order perturbation of the nominal manipulator motion. A non-linear finite element method is used to formulate the dynamic equations of the manipulator mechanism. In a closed-loop simulation the driving torques are generated by the control system. Friction torques at the actuator joints are introduced at the stage of perturbed dynamics. For a mathematical model of the friction torques we implemented the LuGre friction model that accounts both for the sliding and pre-sliding regime. To illustrate the method, the motion of a six-axes industrial Staeubli robot is simulated. The manipulation task implies transferring a laser spot along a straight line with a trapezoidal velocity profile. The computed trajectory tracking errors are compared with measured values, where in both cases the tip position is computed from the joint angles using a nominal kinematic robot model. It is found that a closed-loop simulation using a non-linear finite element model of this robot is very time-consuming due to the small time step of the discrete controller. Using the perturbation method with the linearised model a substantial reduction of the computer time is achieved without loss of accuracy

  9. Coordinate transformation based cryo-correlative methods for electron tomography and focused ion beam milling

    International Nuclear Information System (INIS)

    Fukuda, Yoshiyuki; Schrod, Nikolas; Schaffer, Miroslava; Feng, Li Rebekah; Baumeister, Wolfgang; Lucic, Vladan

    2014-01-01

    Correlative microscopy allows imaging of the same feature over multiple length scales, combining light microscopy with high resolution information provided by electron microscopy. We demonstrate two procedures for coordinate transformation based correlative microscopy of vitrified biological samples applicable to different imaging modes. The first procedure aims at navigating cryo-electron tomography to cellular regions identified by fluorescent labels. The second procedure, allowing navigation of focused ion beam milling to fluorescently labeled molecules, is based on the introduction of an intermediate scanning electron microscopy imaging step to overcome the large difference between cryo-light microscopy and focused ion beam imaging modes. These methods make it possible to image fluorescently labeled macromolecular complexes in their natural environments by cryo-electron tomography, while minimizing exposure to the electron beam during the search for features of interest. - Highlights: • Correlative light microscopy and focused ion beam milling of vitrified samples. • Coordinate transformation based cryo-correlative method. • Improved correlative light microscopy and cryo-electron tomography

  10. Mathematical correlation of modal-parameter-identification methods via system-realization theory

    Science.gov (United States)

    Juang, Jer-Nan

    1987-01-01

    A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.

  11. A computer method for simulating the decay of radon daughters

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1988-01-01

    The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations describe an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is random, but this random behaviour is such that, for a single species, the ensemble of disintegration times follows a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon-222 and the emission of alpha particles from polonium-218 and polonium-214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for the measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium-218 is not independent of the time of decay of the subsequent polonium-214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for counting alpha particles from radon daughters and calculating exposure.
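
    The sketch below illustrates the approach on the alpha-emitting part of the radon-222 chain (Po-218 -> Pb-214 -> Bi-214 -> Po-214): a decay time is sampled for each atom at each step, geometric in discrete time and exponential in the continuum limit, and the alpha counts falling in a counting window are tallied. The half-lives are standard values; the counting protocol is a made-up stand-in.

```python
import numpy as np

rng = np.random.default_rng(11)

# Half-lives of the short-lived radon-222 daughters (minutes).
HALF_LIFE = {"Po218": 3.05, "Pb214": 26.8, "Bi214": 19.7, "Po214": 164e-6 / 60}
CHAIN = ["Po218", "Pb214", "Bi214", "Po214"]   # Po218 and Po214 emit alphas

def sample_decay_time(nuclide, size):
    """Exponential decay times: the continuum limit of the geometric
    per-time-step decay model described in the abstract."""
    tau = HALF_LIFE[nuclide] / np.log(2.0)
    return rng.exponential(tau, size)

def alpha_counts(n_atoms=10_000, t_count=(0.0, 30.0)):
    """Count alphas from n_atoms of Po218 (and their Po214 progeny)
    arriving inside a counting window t_count (minutes)."""
    t = np.zeros(n_atoms)                      # each atom starts as fresh Po218
    counts = 0
    for nuclide in CHAIN:
        t = t + sample_decay_time(nuclide, n_atoms)   # time of this decay
        if nuclide in ("Po218", "Po214"):
            counts += np.count_nonzero((t >= t_count[0]) & (t < t_count[1]))
    return counts

# Repeating the simulation exposes the statistical spread of the count.
runs = np.array([alpha_counts() for _ in range(100)])
print("mean alpha count:", runs.mean(), " std:", runs.std())
```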

  12. A general method for closed-loop inverse simulation of helicopter maneuver flight

    Directory of Open Access Journals (Sweden)

    Wei WU

    2017-12-01

    Full Text Available Maneuverability is a key factor to determine whether a helicopter could finish certain flight missions successfully or not. Inverse simulation is commonly used to calculate the pilot controls of a helicopter to complete a certain kind of maneuver flight and to assess its maneuverability. A general method for inverse simulation of maneuver flight for helicopters with the flight control system online is developed in this paper. A general mathematical describing function is established to provide mathematical descriptions of different kinds of maneuvers. A comprehensive control solver based on the optimal linear quadratic regulator theory is developed to calculate the pilot controls of different maneuvers. The coupling problem between pilot controls and flight control system outputs is well solved by taking the flight control system model into the control solver. Inverse simulation of three different kinds of maneuvers with different agility requirements defined in the ADS-33E-PRF is implemented based on the developed method for a UH-60 helicopter. The results show that the method developed in this paper can solve the closed-loop inverse simulation problem of helicopter maneuver flight with high reliability as well as efficiency. Keywords: Closed-loop, Flying quality, Helicopters, Inverse simulation, Maneuver flight

  13. Enhancement of Iris Recognition System Based on Phase Only Correlation

    Directory of Open Access Journals (Sweden)

    Nuriza Pramita

    2011-08-01

    Full Text Available Iris recognition is one of the biometric-based recognition/identification systems. Numerous techniques have been implemented to achieve a good recognition rate, including ones based on Phase Only Correlation (POC). Significant, higher correlation peaks suggest that the system recognizes iris images of the same subject (person), while lower, insignificant peaks correspond to recognition of those of different subjects. Current POC methods have not investigated the minimum number of iris points that can be used to achieve higher correlation peaks. This paper proposes a method that uses only one-fourth of the full normalized iris size to achieve a higher (or at least the same) recognition rate. Simulation on the CASIA version 1.0 iris image database showed that the averaged recognition rate of the proposed method reached 67%, higher than that obtained using one-half (56%) or the full (53%) set of iris points. Furthermore, all (100%) POC peak values of the proposed method were higher than those of the method using full iris points.

  14. Intra-unit correlations in seroconversion to Actinobacillus pleuropneumoniae and Mycoplasma hyopneumoniae at different levels in Danish multi-site pig production facilities

    DEFF Research Database (Denmark)

    Vigre, Håkan; Dohoo, I.R.; Stryhn, H.

    2004-01-01

    2) and Mycoplasma hyopneumoniae (Mh). Based on the estimated variances, three newly described computational methods (model linearisation, simulation and linear modelling) and the standard method (latent-variable approach) were used to estimate the correlations (intra-class correlation components...

  15. A method for data handling numerical results in parallel OpenFOAM simulations

    International Nuclear Information System (INIS)

Anton, Alin (Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro); Muntean, Sebastian (Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara, Romania)

    2015-01-01

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  16. A method for data handling numerical results in parallel OpenFOAM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2" n" d Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania); Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24" t" h Mihai Viteazu Ave., 300221, TM Timişoara (Romania)

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit® [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  17. Performance prediction for silicon photonics integrated circuits with layout-dependent correlated manufacturing variability.

    Science.gov (United States)

    Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas

    2017-05-01

    This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonics integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height for a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
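
    A minimal sketch of the central idea, drawing a spatially correlated variation field over the layout and sampling it at component locations, assuming a Gaussian correlation model; the grid size, correlation length, width deviation, and component coordinates are invented for illustration.

    ```python
    # Correlated-variation sketch: smooth white noise to impose a spatial
    # correlation length, rescale to the target sigma, then look up each
    # component's deviation at its layout position. Values are illustrative.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    nx = ny = 200                    # grid over the chip layout (a.u.)
    corr_len = 15.0                  # correlation length in grid cells
    sigma_w = 5.0                    # waveguide-width standard deviation (nm)

    field = gaussian_filter(rng.standard_normal((ny, nx)), corr_len)
    field *= sigma_w / field.std()   # rescale to the target variance

    components = {"ring_a": (40, 60), "ring_b": (45, 62), "ring_far": (180, 10)}
    dw = {name: field[iy, ix] for name, (ix, iy) in components.items()}
    print(dw)    # neighbouring rings receive nearly equal width deviations
    ```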

  18. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  19. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  20. Estimation of Kubo number and correlation length of fluctuating magnetic fields and pressure in BOUT++ edge pedestal collapse simulation

    Science.gov (United States)

    Kim, Jaewook; Lee, W.-J.; Jhang, Hogun; Kaang, H. H.; Ghim, Y.-C.

    2017-10-01

    Stochastic magnetic fields are thought to be one of the possible mechanisms for anomalous transport of density, momentum and heat across the magnetic field lines. The Kubo number and the Chirikov parameter are quantifications of this stochasticity, and previous studies show that perpendicular transport strongly depends on the magnetic Kubo number (MKN). If the MKN is smaller than one, the diffusion process follows the Rechester-Rosenbluth model; whereas if it is larger than one, percolation theory dominates the diffusion process. Thus, estimation of the Kubo number plays an important role in understanding diffusion caused by stochastic magnetic fields. However, spatially localized experimental measurement of fluctuating magnetic fields in a tokamak is difficult, and we attempt to estimate MKNs using BOUT++ simulation data with pedestal collapse. In addition, we calculate correlation lengths of fluctuating pressures and Chirikov parameters to investigate the variation of correlation lengths in the simulation. We then discuss how one may experimentally estimate MKNs.
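
    A rough sketch of how such an estimate can proceed, assuming one common definition of the magnetic Kubo number, K = (b_rms/B0)(L_par/L_perp); the synthetic fluctuation data, B0, and the parallel correlation length below are placeholders for BOUT++ output.

    ```python
    # Estimate a perpendicular correlation length from the e-folding of the
    # autocorrelation function, then form a Kubo number. Placeholder data.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def corr_length(signal, dx):
        """e-folding length of the normalized autocorrelation function."""
        s = signal - signal.mean()
        acf = np.correlate(s, s, mode="full")[s.size - 1:]
        acf /= acf[0]
        return np.nonzero(acf < 1.0 / np.e)[0][0] * dx

    rng = np.random.default_rng(2)
    b = gaussian_filter1d(rng.standard_normal(4096), 8.0)  # fluctuating b_perp
    L_perp = corr_length(b, dx=1e-3)       # m, from a perpendicular cut
    L_par = 10.0                           # m, assumed parallel corr. length
    B0 = 2.0                               # T, assumed background field
    K = (b.std() / B0) * (L_par / L_perp)
    print(f"L_perp = {L_perp:.4f} m, Kubo number K = {K:.1f}")
    ```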

  1. The multiphonon method as a dynamical approach to octupole correlations in deformed nuclei

    International Nuclear Information System (INIS)

    Piepenbring, R.

    1986-09-01

    The octupole correlations in nuclei are studied within the framework of the multiphonon method, which is essentially the exact diagonalization of the total Hamiltonian in the space spanned by collective phonons. This treatment properly takes into account the Pauli principle. It is a microscopic approach based on a reflection symmetry of the potential. The spectroscopic properties of doubly even and odd-mass nuclei are nicely reproduced. The multiphonon method appears as a dynamical approach to octupole correlations in nuclei which can be compared to other models based on stable octupole deformation. 66 refs

  2. Microcanonical simulations in classical and quantum field theory

    International Nuclear Information System (INIS)

    Olson, D.P.

    1988-01-01

    In the first part of this thesis, a stochastic adaptation of the microcanonical simulation method is applied to the numerical simulation of the Su-Schrieffer-Heeger Hamiltonian for polyacetylene, a one-dimensional polymer where fermion-boson interactions play a dominant role in the dynamics of the system. The pure microcanonical simulation method fails in the marginally ergodic case, and a stochastic adaptation, the hybrid microcanonical method, is employed to resolve problems with ergodicity. The hybrid method is shown to be an efficient method for higher-dimensional fermionic quantum systems. In the second part of this thesis, a numerical simulation of the evolution of a network of global cosmic strings in an expanding Robertson-Walker universe is carried out. The system is quenched through an order-disorder phase transition and the nature of the string distribution is examined. While the string distribution observed at the phase transition is in good agreement with earlier estimates, the simulation reveals that the dynamics of the strings are suppressed by interactions with the Goldstone field. The network decays by topological annihilation and no spatial correlations are observed at any point in the simulation.

  3. Simulations of the near-wall heat transfer at medium prandtl numbers

    International Nuclear Information System (INIS)

    Bergant, R.; Tiselj, I.

    2003-01-01

    Heat transfer from a wall to a fluid at low Reynolds and Prandtl numbers can be described by means of Direct Numerical Simulation (DNS). At higher Prandtl numbers (Pr > 20), so-called under-resolved DNS can be performed to compute turbulent heat transfer. Three different under-resolved DNSs of the fully developed turbulent flow in a channel at Reynolds number Re = 4580 and at Prandtl numbers Pr = 100, Pr = 200 and Pr = 500 are presented in this paper. These simulations describe all velocity scales, but they are not capable of describing the smallest temperature scales. Nevertheless, very good agreement of heat transfer coefficients was achieved with the correlation of Hasegawa [1] and with the correlation of Papavassiliou [2], who performed DNS by means of a Lagrangian method instead of the Eulerian method applied in our simulations. We estimate that under-resolved DNS simulations based on the Eulerian method are useful up to approximately Pr = 200, whereas at Pr = 500 instabilities appear due to the unresolved smallest thermal scales. (author)

  4. Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments

    Science.gov (United States)

    Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria

    2015-01-01

    Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous medium, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
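
    A toy Monte Carlo in the spirit of an FCS simulation (far simpler than FERNET): Brownian walkers in a periodic box, a Gaussian observation volume, and the intensity autocorrelation of the resulting trace; all parameter values are illustrative.

    ```python
    # Diffuse point emitters through a Gaussian detection volume and compute
    # the autocorrelation of the collected intensity. Illustrative parameters.
    import numpy as np

    rng = np.random.default_rng(3)
    n, steps, dt, D, L, w = 50, 20000, 1e-5, 1.0, 4.0, 0.3
    pos = rng.uniform(-L / 2, L / 2, size=(n, 3))
    trace = np.empty(steps)
    for t in range(steps):
        pos += rng.normal(0.0, np.sqrt(2 * D * dt), size=pos.shape)
        pos = (pos + L / 2) % L - L / 2                 # periodic box
        trace[t] = np.exp(-2 * (pos**2).sum(axis=1) / w**2).sum()

    f = trace - trace.mean()
    F = np.fft.rfft(f, n=2 * steps)                     # FFT autocorrelation
    acf = np.fft.irfft(F * np.conj(F))[:steps] / (f.var() * steps)
    print("g(0) ~", acf[0], " half-decay ~", np.argmax(acf < 0.5 * acf[0]) * dt, "s")
    ```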

  5. Application of subset simulation methods to dynamic fault tree analysis

    International Nuclear Information System (INIS)

    Liu Mengyun; Liu Jingquan; She Ding

    2015-01-01

    Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it has been criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFT has gained rising attention, because it can model the authentic behaviors of systems and avoid the limitations of the analytical method. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rule for logic gates. When calculating rare-event probabilities, a large number of standard MCS runs is required. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov Chain Monte Carlo (MCMC) technique, the SS method is able to explore the failure region more efficiently. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
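
    A compact sketch of subset simulation for a rare-event probability, using a plain Metropolis chain per seed (the modified Metropolis of the SS literature updates component-wise); the toy limit-state function and the level probability p0 = 0.1 are stand-ins for a DFT system model.

    ```python
    # Subset simulation: express P(g > b_final) as a product of conditional
    # probabilities over intermediate thresholds, repopulating each level
    # with MCMC started from the seeds that exceeded it. Toy g() shown.
    import numpy as np

    rng = np.random.default_rng(4)

    def g(x):                         # toy limit-state: sum of the inputs
        return x.sum(axis=-1)

    dim, n, p0, b_final = 10, 1000, 0.1, 14.0
    x = rng.standard_normal((n, dim))
    prob, levels = 1.0, 0
    while True:
        y = g(x)
        b = np.quantile(y, 1 - p0)
        if b >= b_final:
            prob *= np.mean(y > b_final)
            break
        prob *= p0
        levels += 1
        cur = x[y > b]                # seeds for the next conditional level
        chains, count = [cur], len(cur)
        while count < n:              # Metropolis restricted to {g > b}
            cand = cur + 0.5 * rng.standard_normal(cur.shape)
            ratio = np.exp(0.5 * (cur**2 - cand**2).sum(axis=1))
            accept = (rng.random(len(cur)) < ratio) & (g(cand) > b)
            cur = np.where(accept[:, None], cand, cur)
            chains.append(cur.copy())
            count += len(cur)
        x = np.concatenate(chains)[:n]
    print(f"levels used: {levels},  P ~ {prob:.2e}")
    ```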

  6. Wind turbine rotor blade monitoring using digital image correlation: a comparison to aeroelastic simulations of a multi-megawatt wind turbine

    International Nuclear Information System (INIS)

    Winstroth, J; Ernst, B; Seume, J R; Schoen, L

    2014-01-01

    Optical full-field measurement methods such as Digital Image Correlation (DIC) provide a new opportunity for measuring deformations and vibrations with high spatial and temporal resolution. However, application to full-scale wind turbines is not trivial. Elaborate preparation of the experiment is vital and sophisticated post-processing of the DIC results essential. In the present study, a rotor blade of a 3.2 MW wind turbine is equipped with a random black-and-white dot pattern at four different radial positions. Two cameras are located in front of the wind turbine and the response of the rotor blade is monitored using DIC for different turbine operations. In addition, a Light Detection and Ranging (LiDAR) system is used in order to measure the wind conditions. Wind fields are created based on the LiDAR measurements and used to perform aeroelastic simulations of the wind turbine by means of advanced multibody codes. The results from the optical DIC system appear plausible when checked against common and expected results. In addition, the comparison of relative out-of-plane blade deflections shows good agreement between DIC results and aeroelastic simulations.

  7. Wind turbine rotor blade monitoring using digital image correlation: a comparison to aeroelastic simulations of a multi-megawatt wind turbine

    Science.gov (United States)

    Winstroth, J.; Schoen, L.; Ernst, B.; Seume, J. R.

    2014-06-01

    Optical full-field measurement methods such as Digital Image Correlation (DIC) provide a new opportunity for measuring deformations and vibrations with high spatial and temporal resolution. However, application to full-scale wind turbines is not trivial. Elaborate preparation of the experiment is vital and sophisticated post-processing of the DIC results essential. In the present study, a rotor blade of a 3.2 MW wind turbine is equipped with a random black-and-white dot pattern at four different radial positions. Two cameras are located in front of the wind turbine and the response of the rotor blade is monitored using DIC for different turbine operations. In addition, a Light Detection and Ranging (LiDAR) system is used in order to measure the wind conditions. Wind fields are created based on the LiDAR measurements and used to perform aeroelastic simulations of the wind turbine by means of advanced multibody codes. The results from the optical DIC system appear plausible when checked against common and expected results. In addition, the comparison of relative out-of-plane blade deflections shows good agreement between DIC results and aeroelastic simulations.
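
    The building block of DIC is subset matching by zero-normalized cross-correlation. Below is a minimal sketch with a synthetic dot pattern and a rigid shift standing in for blade motion; the subset size and search range are illustrative.

    ```python
    # ZNCC subset matching: slide a reference subset over the deformed image
    # and take the displacement with the highest correlation coefficient.
    import numpy as np

    def zncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

    rng = np.random.default_rng(5)
    ref = rng.random((200, 200))                # random dot pattern
    defo = np.roll(ref, (3, -2), axis=(0, 1))   # rigid shift mimics motion

    y0, x0, s = 80, 90, 21                      # subset corner and size
    subset = ref[y0:y0 + s, x0:x0 + s]
    best = max((zncc(subset, defo[y0 + dy:y0 + dy + s, x0 + dx:x0 + dx + s]), dy, dx)
               for dy in range(-6, 7) for dx in range(-6, 7))
    print("ZNCC peak %.3f at displacement (dy, dx) = (%d, %d)" % best)
    ```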

  8. Selecting a dynamic simulation modeling method for health care delivery research-part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force.

    Science.gov (United States)

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D

    2015-03-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques

  9. Mathematical correlation of modal parameter identification methods via system realization theory

    Science.gov (United States)

    Juang, J. N.

    1986-01-01

    A unified approach is introduced using system realization theory to derive and correlate modal parameter identification methods for flexible structures. Several different time-domain and frequency-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research towards the unification of the many possible approaches for modal parameter identification.

  10. Dynamic induced softening in frictional granular materials investigated by discrete-element-method simulation

    Science.gov (United States)

    Lemrich, Laure; Carmeliet, Jan; Johnson, Paul A.; Guyer, Robert; Jia, Xiaoping

    2017-12-01

    A granular system composed of frictional glass beads is simulated using the discrete element method. The intergrain forces are based on the Hertz contact law in the normal direction with frictional tangential force. The damping due to collision is also accounted for. Systems are loaded at various stresses and their quasistatic elastic moduli are characterized. Each system is subjected to an extensive dynamic testing protocol by measuring the resonant response to a broad range of ac drive amplitudes and frequencies via a set of diagnostic strains. The system, linear at small ac drive amplitudes, has resonance frequencies that shift downward (i.e., modulus softening) with increased ac drive amplitude. Detailed testing shows that the slipping contact ratio does not contribute significantly to this dynamic modulus softening, but the coordination number is strongly correlated to this reduction. This suggests that the softening arises from the extended structural change via break and remake of contacts during the rearrangement of bead positions driven by the ac amplitude.
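
    A minimal sketch of the contact law named above, a Hertzian normal force with a Coulomb-capped tangential force, for two identical beads; production DEM codes add tangential spring history and collision damping, and the material values here are illustrative.

    ```python
    # Hertz normal force F_n = (4/3) E* sqrt(R*) overlap^(3/2) plus a
    # tangential force capped by Coulomb friction mu * F_n.
    import numpy as np

    E, nu, R, mu = 70e9, 0.22, 1e-3, 0.3     # glass-like bead pair (SI)
    E_star = E / (2.0 * (1.0 - nu**2))       # effective modulus, equal beads
    R_star = R / 2.0                         # effective radius, equal beads

    def contact_forces(overlap, tangential_disp, k_t=1e6):
        f_n = (4.0 / 3.0) * E_star * np.sqrt(R_star) * overlap**1.5
        f_t = min(k_t * abs(tangential_disp), mu * f_n)   # Coulomb cap
        return f_n, f_t

    print(contact_forces(overlap=1e-6, tangential_disp=5e-7))
    ```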

  11. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  12. System reliability with correlated components : Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, T.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  13. Windowed Multitaper Correlation Analysis of Multimodal Brain Monitoring Parameters

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Although multimodal monitoring sets the standard in daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates the cerebral perfusion and oxygen supply in case of a severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows tuning the parameters of the windowing method in such a way that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally it could be shown that the occurrence of a distinct correlation parameter, called scp, represents a predictive value of high quality for the patient's outcome.

  14. Windowed multitaper correlation analysis of multimodal brain monitoring parameters.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Although multimodal monitoring sets the standard in daily practice of neurocritical care, problem-oriented analysis tools to interpret the huge amount of data are lacking. Recently a mathematical model was presented that simulates the cerebral perfusion and oxygen supply in case of a severe head trauma, predicting the appearance of distinct correlations between arterial blood pressure and intracranial pressure. In this study we present a set of mathematical tools that reliably detect the predicted correlations in data recorded at a neurocritical care unit. The time-resolved correlations are identified by a windowing technique combined with Fourier-based coherence calculations. The phasing of the data is detected by means of the Hilbert phase difference within the above-mentioned windows. A statistical testing method is introduced that allows tuning the parameters of the windowing method in such a way that a predefined accuracy is reached. With this method the data of fifteen patients were examined, and the predicted correlation was found in each patient. Additionally it could be shown that the occurrence of a distinct correlation parameter, called scp, represents a predictive value of high quality for the patient's outcome.
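
    A rough sketch of the windowed analysis described above: sliding-window coherence between two synthetic monitoring signals plus their Hilbert phase difference; note that scipy's coherence uses Welch averaging rather than a true multitaper estimate, and the window and band parameters are invented.

    ```python
    # Windowed coherence and phase between synthetic ABP- and ICP-like traces.
    import numpy as np
    from scipy.signal import coherence, hilbert

    fs = 1.0                               # 1 Hz monitoring data
    t = np.arange(4096) / fs
    rng = np.random.default_rng(6)
    abp = np.sin(2 * np.pi * 0.01 * t) + 0.5 * rng.standard_normal(t.size)
    icp = np.sin(2 * np.pi * 0.01 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

    win = 1024
    for start in range(0, t.size - win + 1, win // 2):
        a, b = abp[start:start + win], icp[start:start + win]
        f, coh = coherence(a, b, fs=fs, nperseg=256)
        band = (f > 0.005) & (f < 0.02)    # band around the shared 0.01 Hz mode
        dphi = np.angle(hilbert(a) * np.conj(hilbert(b))).mean()
        print(f"t = {start:4d} s  coherence = {coh[band].mean():.2f}"
              f"  phase = {dphi:+.2f} rad")
    ```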

  15. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. First, errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.

  16. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    Science.gov (United States)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time-step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  17. High-order dynamic lattice method for seismic simulation in anisotropic media

    Science.gov (United States)

    Hu, Xiaolin; Jia, Xiaofeng

    2018-03-01

    The discrete particle-based dynamic lattice method (DLM) offers an approach to simulate elastic wave propagation in anisotropic media by calculating the anisotropic micromechanical interactions between particles based on the directions of the bonds that connect them in the lattice. To build such a lattice, the media are discretized into particles. This discretization inevitably leads to numerical dispersion. The basic lattice unit used in the original DLM only includes interactions between the central particle and its nearest neighbours; therefore, it represents the first-order form of a particle lattice. The first-order lattice suffers from numerical dispersion compared with other numerical methods, such as high-order finite-difference methods, in terms of seismic wave simulation. Due to its unique way of discretizing the media, the particle-based DLM no longer solves elastic wave equations; this means that one cannot build a high-order DLM by simply creating a high-order discrete operator to better approximate a partial derivative operator. To build a high-order DLM, we carry out a thorough dispersion analysis of the method and discover that by adding more neighbouring particles into the lattice unit, the DLM will yield different spatial accuracy. According to the dispersion analysis, the high-order DLM presented here can adapt to the spatial accuracy required for seismic wave simulations. For any given spatial accuracy, we can design a corresponding high-order lattice unit to satisfy the accuracy requirement. Numerical tests show that the high-order DLM improves the accuracy of elastic wave simulation in anisotropic media.

  18. Simulation of the 2-dimensional Drude’s model using molecular dynamics method

    Energy Technology Data Exchange (ETDEWEB)

    Naa, Christian Fredy; Amin, Aisyah; Ramli; Suprijadi; Djamal, Mitra [Theoretical High Energy Physics and Instrumentation Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Wahyoedi, Seramika Ari; Viridi, Sparisoma, E-mail: viridi@cphys.fi.itb.ac.id [Nuclear and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2015-04-16

    In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model and applies the molecular dynamics (MD) method, using the fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically from the simulation results.
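
    Not the paper's predictor-corrector MD, but a minimal kinetic sketch of the same Drude picture: free acceleration in a field with Poissonian collisions that reset the velocity, recovering σ = n e² τ / m; all values are illustrative.

    ```python
    # Drude-style drift: accelerate between randomly timed collisions and
    # compare the measured conductivity with n e^2 tau / m.
    import numpy as np

    rng = np.random.default_rng(11)
    e, m, n_e = 1.602e-19, 9.109e-31, 8.5e28   # SI; copper-like density
    tau, E, dt, steps = 2.5e-14, 100.0, 1e-16, 200_000
    v, vsum = 0.0, 0.0
    for _ in range(steps):
        v += (e * E / m) * dt                  # free acceleration
        if rng.random() < dt / tau:            # collision resets the drift
            v = 0.0
        vsum += v
    v_drift = vsum / steps
    print(f"sigma(sim)   = {n_e * e * v_drift / E:.3e} S/m")
    print(f"sigma(Drude) = {n_e * e**2 * tau / m:.3e} S/m")
    ```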

  19. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    Science.gov (United States)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. Two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for

  20. Adaptive and dynamic meshing methods for numerical simulations

    Science.gov (United States)

    Acikgoz, Nazmiye

    -hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations
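
    A reduced sketch of the spring-analogy idea described above: unit edge springs on a structured grid, an imposed boundary displacement, and interior displacements solved with conjugate gradients; the virtual (ball) springs that oppose element collapse are omitted for brevity.

    ```python
    # Edge-spring mesh deformation: spring equilibrium at interior nodes is
    # the discrete Laplacian; boundary motion enters the right-hand side.
    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import cg

    n = 21                                    # nodes per grid side
    interior = [(i, j) for i in range(1, n - 1) for j in range(1, n - 1)]
    num = {ij: k for k, ij in enumerate(interior)}

    def boundary_disp(i, j):                  # imposed motion of the top edge
        return 0.1 * np.sin(np.pi * j / (n - 1)) if i == n - 1 else 0.0

    A = lil_matrix((len(interior), len(interior)))
    b = np.zeros(len(interior))
    for (i, j), k in num.items():
        A[k, k] = 4.0                         # four unit edge springs per node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if nb in num:
                A[k, num[nb]] = -1.0
            else:                             # known boundary value -> RHS
                b[k] += boundary_disp(*nb)

    u, info = cg(A.tocsr(), b, atol=1e-10)
    print("CG converged:", info == 0, " max interior displacement:", u.max())
    ```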

  1. Correlative SEM SERS for quantitative analysis of dimer nanoparticles.

    Science.gov (United States)

    Timmermans, F J; Lenferink, A T M; van Wolferen, H A G M; Otto, C

    2016-11-14

    A Raman microscope integrated with a scanning electron microscope was used to investigate plasmonic structures by correlative SEM-SERS analysis. The integrated Raman-SEM microscope combines high-resolution electron microscopy information with SERS signal enhancement from selected nanostructures with adsorbed Raman reporter molecules. Correlative analysis is performed for dimers of two gold nanospheres. Dimers were selected on the basis of SEM images from multi aggregate samples. The effect of the orientation of the dimer with respect to the polarization state of the laser light and the effect of the particle gap size on the Raman signal intensity is observed. Additionally, calculations are performed to simulate the electric near field enhancement. These simulations are based on the morphologies observed by electron microscopy. In this way the experiments are compared with the enhancement factor calculated with near field simulations and are subsequently used to quantify the SERS enhancement factor. Large differences between experimentally observed and calculated enhancement factors are regularly detected, a phenomenon caused by nanoscale differences between the real and 'simplified' simulated structures. Quantitative SERS experiments reveal the structure induced enhancement factor, ranging from ∼200 to ∼20 000, averaged over the full nanostructure surface. The results demonstrate correlative Raman-SEM microscopy for the quantitative analysis of plasmonic particles and structures, thus enabling a new analytical method in the field of SERS and plasmonics.

  2. n-particle transverse correlation and collectivity for collisions 1.2 A GeV Ar + KCl

    International Nuclear Information System (INIS)

    Liu Qingjun; Jiang Yuzhen; Wang Shan; Liu Yiming; Fung, S.Y.; Chu, S.Y.

    1993-01-01

    A method based on the n-particle transverse correlation function is proposed for the study of collective flow; it extends both n-particle azimuthal correlation analysis and the estimation of collectivity to a treatment that includes the magnitudes as well as the azimuthal angles of all n particle transverse momentum vectors. This method is more sensitive to the collectivity of collective flow than methods based on multi-particle azimuthal correlations. Using the new method, n-particle transverse correlations are analyzed for collisions of 1.2 A GeV Ar + KCl in the Bevalac streamer chamber, and the results are compared with a Monte-Carlo simulation, which shows that the collectivity for this experiment is between 85% and 95%.

  3. LOMEGA: a low frequency, field implicit method for plasma simulation

    International Nuclear Information System (INIS)

    Barnes, D.C.; Kamimura, T.

    1982-04-01

    Field implicit methods for low frequency plasma simulation by the LOMEGA (Low OMEGA) codes are described. These implicit field methods may be combined with particle pushing algorithms using either Lorentz force or guiding center force models to study two-dimensional, magnetized, electrostatic plasmas. Numerical results for ω_e Δt >> 1 are described. (author)

  4. 3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock

    Science.gov (United States)

    Tang, X.; Rayudu, N. M.; Singh, G.

    2017-12-01

    Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanism of propagation and interaction of these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented for investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used to simulate the effect of several key parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity and flow rate, on induced hydraulic fracture growth. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used for calculating multi-mode (Mode I, II, III) stress intensity factors. A maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator

  5. Interactive knowledge discovery from marketing questionnaire using simulated breeding and inductive learning methods

    Energy Technology Data Exchange (ETDEWEB)

    Terano, Takao [Univ. of Tsukuba, Tokyo (Japan); Ishino, Yoko [Univ. of Tokyo (Japan)

    1996-12-31

    This paper describes a novel method to acquire efficient decision rules from questionnaire data using both simulated breeding and inductive learning techniques. The basic ideas of the method are that simulated breeding is used to extract effective features from the questionnaire data and that inductive learning is used to acquire simple decision rules from the data. Simulated breeding is one of the Genetic Algorithm (GA) based techniques, in which the qualities of offspring generated by genetic operations are evaluated subjectively or interactively. In this paper, we show a basic interactive version of the method and two variations: one with semi-automated GA phases and one with a relative evaluation phase via the Analytic Hierarchy Process (AHP). The proposed method has been qualitatively and quantitatively validated by a case study on consumer product questionnaire data.

  6. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was to use Monte Carlo simulations to investigate the effects of two scattering correction methods, dual energy window (DEW) and dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. An MCAT torso-cardiac phantom with 99mTc and a non-uniform attenuation map was simulated. Two different photopeak windows were evaluated in the DEW method: 15% and 20%. Two 10%-wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
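
    The DEW estimate itself is a one-liner: scatter in the photopeak window is approximated as k times the counts acquired in a lower scatter window. A sketch with k = 0.5, the classic value discussed in the record; the count arrays and window labels are placeholders.

    ```python
    # Dual energy window scatter correction for projection data.
    import numpy as np

    def dew_correct(photopeak_counts, scatter_counts, k=0.5):
        """Return scatter-corrected photopeak projection data."""
        return np.clip(photopeak_counts - k * scatter_counts, 0.0, None)

    peak = np.array([120.0, 95.0, 60.0])   # counts in the photopeak window
    scat = np.array([40.0, 30.0, 22.0])    # counts in the lower scatter window
    print(dew_correct(peak, scat))
    ```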

  7. Simulation of a Centrifugal Pump by Using the Harmonic Balance Method

    Directory of Open Access Journals (Sweden)

    Franco Magagnato

    2015-01-01

    Full Text Available The harmonic balance method was used for the flow simulation in a centrifugal pump. Independence studies have been done to choose a proper number of harmonic modes and a proper inlet eddy viscosity ratio value. The results from the harmonic balance method show good agreement with PIV experiments and unsteady calculation results (which are based on the dual time stepping method) for the predicted head and the phase-averaged velocity. A detailed analysis of the flow fields at different flow rates shows that the flow rate has an evident influence on the flow fields. At 0.6Qd, some vortices begin to appear in the impeller, and at 0.4Qd some vortices have blocked the flow passage. The flow fields at different positions at 0.6Qd and 0.4Qd show how the complicated flow phenomena form, develop, and even disappear. The harmonic balance method can be used for flow simulation in pumps, showing the same accuracy as unsteady methods, but it is considerably faster.
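
    A toy harmonic balance in one dimension (a Duffing oscillator) illustrating the method named above: the periodic solution is expanded in a few Fourier harmonics and the residual is driven to zero at collocation points; the coefficients are illustrative, and the pump study applies the same idea to the flow equations rather than this scalar ODE.

    ```python
    # Harmonic balance by collocation: solve for the 2H+1 Fourier coefficients
    # of x'' + delta x' + alpha x + beta x^3 = gamma cos(w t).
    import numpy as np
    from scipy.optimize import fsolve

    delta, alpha, beta, gamma, w = 0.2, 1.0, 0.5, 0.8, 1.2
    H = 5                                          # number of harmonics
    t = np.linspace(0.0, 2 * np.pi / w, 2 * H + 1, endpoint=False)
    k = np.arange(1, H + 1)
    cos, sin = np.cos(np.outer(k, w * t)), np.sin(np.outer(k, w * t))

    def residual(c):
        a0, a, b = c[0], c[1:H + 1], c[H + 1:]
        x = a0 + a @ cos + b @ sin
        xd = (-a * k * w) @ sin + (b * k * w) @ cos
        xdd = (-a * (k * w)**2) @ cos + (-b * (k * w)**2) @ sin
        return xdd + delta * xd + alpha * x + beta * x**3 - gamma * np.cos(w * t)

    coeffs = fsolve(residual, np.zeros(2 * H + 1))
    print("fundamental amplitude:", np.hypot(coeffs[1], coeffs[H + 1]))
    ```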

  8. A simulation based engineering method to support HAZOP studies

    DEFF Research Database (Denmark)

    Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge

    2012-01-01

    the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of failure scenarios is then evaluated using dynamic simulations - in this study the K-Spice® software was used. The consequences of each failure...

  9. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    Science.gov (United States)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.

  10. Correlation between Dental Maturity by Demirjian Method and Skeletal Maturity by Cervical Vertebral Maturity Method using Panoramic Radiograph and Lateral Cephalogram

    Directory of Open Access Journals (Sweden)

    Madhusudhanan Mallika Mini

    2017-01-01

    Full Text Available Introduction: Radiographs are effective tools in assessing the stages of bone maturation in dentistry. The cervical vertebral maturation method has proven more effective than hand-wrist radiographs in assessing the adolescent growth spurt in an individual. Assessment of dental calcification stages is a reliable method for determining dental maturity. Panoramic imaging can be used as the primary imaging modality for assessing maturity if a correlation can be found between tooth calcification stages and cervical vertebral maturation stages. This study was conducted to determine the correlation between dental maturity stage and cervical vertebral maturity stage and to estimate predictor variables for cervical vertebral maturation stages (CVMS) stratified by gender in a tertiary hospital setting. Materials and Methods: A descriptive study was conducted among patients accessing orthodontic care in the radiology outpatient clinic, Oral Medicine and Radiology department, Government Dental College Thiruvananthapuram, for a period of 15 months. Participants were selected between the ages of 8 and 16 years. Panoramic radiographs and lateral cephalograms were used to determine dental maturity stages using the Demirjian method and CVMS using the Bacetti and Franchi method, respectively. Results: One hundred patients (males = 46, females = 54) were included in the study; the Spearman rank order correlation revealed a significant relationship. The correlation ranged from 0.61 to 0.74 for females and 0.48 to 0.51 for males. The second premolar showed the highest correlation and the canine the lowest for both females and males. Stage G of the mandibular second premolar signifies the pubertal growth period in this study population. By ordinal regression modeling, stage G of the second premolar was found to be a significant predictor in males, and stage H followed by G and F in females, for the age group of 12–14 years. Conclusion: Dental maturation stages were significantly correlated with CVMS.

  11. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
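
    A textbook importance-sampling miniature in the book's spirit: estimate the tail probability P(X > 4) for X ~ N(0, 1) by sampling from a shifted proposal and reweighting with the likelihood ratio.

    ```python
    # Importance sampling with a shifted (exponentially tilted) proposal.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    n, a = 100_000, 4.0
    y = rng.normal(a, 1.0, n)               # draws from the proposal N(a, 1)
    w = norm.pdf(y) / norm.pdf(y, loc=a)    # likelihood ratio target/proposal
    est = np.mean((y > a) * w)
    print(f"IS estimate {est:.3e}  vs exact {norm.sf(a):.3e}")
    ```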

  12. Training simulator for nuclear power plant reactor control model and method

    International Nuclear Information System (INIS)

    Czerbuejewski, F.R.

    1975-01-01

    A description is given of a method and system for the real-time dynamic simulation of a nuclear power plant for training purposes, wherein a control console has a plurality of manual and automatic remote control devices for operating simulated control rods and has indicating devices for monitoring the physical operation of a simulated reactor. Digital computer means are connected to the control console to calculate data values for operating the monitoring devices in accordance with the control devices. The simulation of the reactor control rod mechanism is disclosed whereby the digital computer means operates the rod position monitoring devices in a real-time that is a fraction of the computer time steps and simulates the quick response of a control rod remote control lever together with the delayed response upon a change of direction

  13. 2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method

    Science.gov (United States)

    Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)

    2000-01-01

    The objectives summarized in this viewgraph presentation include: (1) the development of a quantum mechanical simulator for ultra-short-channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploration of physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) study of other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.

  14. Empirical source strength correlations for rans-based acoustic analogy methods

    Science.gov (United States)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate

  15. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    Science.gov (United States)

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-09-10

    A method of recharging implantable biosensors based on solar radiation is proposed. First, models of the proposed method are developed. Second, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method can contribute to achieving a low-cost, convenient and safe way of recharging implantable biosensors.

  16. Structure function analysis of long-range correlations in plasma turbulence

    International Nuclear Information System (INIS)

    Yu, C.X.; Gilmore, M.; Peebles, W.A.; Rhodes, T.L.

    2003-01-01

    Long-range correlations (temporal and spatial) have been predicted in a number of different turbulence models, both analytical and numerical. These long-range correlations are thought to significantly affect cross-field turbulent transport in magnetically confined plasmas. The Hurst exponent, H - one of a number of methods to identify the existence of long-range correlations in experimental data - can be used to quantify self-similarity scalings and correlations in the mesoscale temporal range. The Hurst exponent can be calculated by several different algorithms, each of which has particular advantages and disadvantages. One method for calculating H is via structure functions (SFs). The SF method is a robust technique for determining H with several inherent advantages that has not yet been widely used in plasma turbulence research. In this article, the SF method and its advantages are discussed in detail, using both simulated and measured fluctuation data from the DIII-D tokamak [J. L. Luxon and L. G. Davis, Fusion Technol. 8, 441 (1985)]. In addition, it is shown that SFs used in conjunction with rescaled range analysis (another method for calculating H) can be used to mitigate the effects of coherent modes in some cases
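
    A minimal sketch of the SF route to H: for fractional-Brownian-like data the second-order structure function scales as S2(tau) = <|x(t+tau) - x(t)|^2> ~ tau^(2H), so H is half the log-log slope; the test signal is a plain random walk (H = 0.5), and the lag range is illustrative.

    ```python
    # Hurst exponent from the second-order structure function.
    import numpy as np

    def hurst_sf(x, lags):
        s2 = [np.mean((x[lag:] - x[:-lag])**2) for lag in lags]
        return np.polyfit(np.log(lags), np.log(s2), 1)[0] / 2.0

    rng = np.random.default_rng(8)
    walk = np.cumsum(rng.standard_normal(100_000))
    lags = np.unique(np.logspace(0.5, 3, 20).astype(int))
    print(f"H = {hurst_sf(walk, lags):.3f}  (expected 0.5 for Brownian motion)")
    ```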

  17. Evaluation of CFD Methods for Simulation of Two-Phase Boiling Flow Phenomena in a Helical Coil Steam Generator

    Energy Technology Data Exchange (ETDEWEB)

    Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shaver, Dillon [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, Yang [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vegendla, Prasad [Argonne National Lab. (ANL), Argonne, IL (United States); Tentner, Adrian [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-09-30

    The U.S. Department of Energy, Office of Nuclear Energy charges participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with the development of advanced modeling and simulation capabilities that can be used to address design, performance and safety challenges in the development and deployment of advanced reactor technology. The NEAMS has established a high impact problem (HIP) team to demonstrate the applicability of these tools to identification and mitigation of sources of steam generator flow induced vibration (SGFIV). The SGFIV HIP team is working to evaluate vibration sources in an advanced helical coil steam generator using computational fluid dynamics (CFD) simulations of the turbulent primary coolant flow over the outside of the tubes and CFD simulations of the turbulent multiphase boiling secondary coolant flow inside the tubes integrated with high resolution finite element method assessments of the tubes and their associated structural supports. This report summarizes the demonstration of a methodology for the multiphase boiling flow analysis inside the helical coil steam generator tube. A helical coil steam generator configuration has been defined based on the experiments completed by Polytecnico di Milano in the SIET helical coil steam generator tube facility. Simulations of the defined problem have been completed using the Eulerian-Eulerian multi-fluid modeling capabilities of the commercial CFD code STAR-CCM+. Simulations suggest that the two phases will quickly stratify in the slightly inclined pipe of the helical coil steam generator. These results have been successfully benchmarked against both empirical correlations for pressure drop and simulations using an alternate CFD methodology, the dispersed phase mixture modeling capabilities of the open source CFD code Nek5000.

  18. Photon correlation holography.

    Science.gov (United States)

    Naik, Dinesh N; Singh, Rakesh Kumar; Ezawa, Takahiro; Miyamoto, Yoko; Takeda, Mitsuo

    2011-01-17

    Unconventional holography called photon correlation holography is proposed and experimentally demonstrated. Using photon correlation, i.e. intensity correlation or fourth order correlation of optical field, a 3-D image of the object recorded in a hologram is reconstructed stochastically with illumination through a random phase screen. Two different schemes for realizing photon correlation holography are examined by numerical simulations, and the experiment was performed for one of the reconstruction schemes suitable for the experimental proof of the principle. The technique of photon correlation holography provides a new insight into how the information is embedded in the spatial as well as temporal correlation of photons in the stochastic pseudo thermal light.
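
    A numerical check of the fourth-order correlation that the scheme exploits: for Gaussian speckle, <dI(x1) dI(x2)> = |<E*(x1)E(x2)>|^2 by the Gaussian moment theorem, so field correlations survive in intensity (photon) correlations; the speckle realizations below are generated by Fourier transforming a random phase screen, and all sizes are illustrative.

    ```python
    # Verify <dI dI> = |<E* E>|^2 over an ensemble of random-phase speckles.
    import numpy as np

    rng = np.random.default_rng(9)
    N, realizations = 256, 2000
    aperture = np.zeros(N)
    aperture[:N // 8] = 1.0                       # finite diffuser support
    fields = np.empty((realizations, N), dtype=complex)
    for r in range(realizations):
        screen = aperture * np.exp(2j * np.pi * rng.random(N))
        fields[r] = np.fft.fft(screen)            # far-field speckle

    I = np.abs(fields)**2
    dI = I - I.mean(axis=0)
    x1, x2 = 10, 13                               # two observation points
    lhs = np.mean(dI[:, x1] * dI[:, x2])          # intensity correlation
    rhs = np.abs(np.mean(np.conj(fields[:, x1]) * fields[:, x2]))**2
    print(f"<dI dI> = {lhs:.0f}   |<E*E>|^2 = {rhs:.0f}   (agree statistically)")
    ```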

  19. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.
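
    A sketch of the time-correlation measurement itself: two probes separated by a distance d see the same temperature noise delayed by the transit time, and the cross-correlation peak gives tau and hence v = d/tau; the sensor spacing, sampling rate, and delay are invented.

    ```python
    # Transit-time velocimetry from the cross-correlation peak of two probes.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import correlate, correlation_lags

    rng = np.random.default_rng(10)
    fs, d, delay = 1000.0, 0.05, 37          # Hz, sensor spacing (m), samples
    noise = gaussian_filter1d(rng.standard_normal(20000), 5.0)
    up = noise[:-delay] + 0.1 * rng.standard_normal(20000 - delay)
    down = noise[delay:] + 0.1 * rng.standard_normal(20000 - delay)

    xc = correlate(up - up.mean(), down - down.mean(), mode="full", method="fft")
    lags = correlation_lags(len(up), len(down), mode="full")
    tau = lags[np.argmax(xc)] / fs
    print(f"transit time {tau * 1e3:.1f} ms  ->  velocity {d / tau:.3f} m/s")
    ```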

  20. Numerical Simulation of Shear Slitting Process of Grain Oriented Silicon Steel using SPH Method

    Directory of Open Access Journals (Sweden)

    Bohdal Łukasz

    2017-12-01

    Full Text Available Mechanical cutting allows separation of sheet material at low cost and therefore remains the most popular way to produce laminations for electrical machines and transformers. However, recent investigations revealed the deteriorating effect of cutting on the magnetic properties of the material close to the cut edge. The deformations generate elastic stresses in zones adjacent to the plastically deformed area and strongly affect the magnetic properties. Knowledge of the residual stresses is necessary in designing the process. This paper presents a new approach to modeling the residual stresses induced in shear slitting of grain-oriented electrical steel using a mesh-free method. The application of the SPH (Smoothed Particle Hydrodynamics) methodology to the simulation and analysis of the 3D shear slitting process is presented. In the experimental studies, an advanced vision-based technology based on digital image correlation (DIC) is used for monitoring the cutting process.